5752251
pes2o/s2orc
v3-fos-license
Meta-analysis of factors predicting resistance to intravenous immunoglobulin treatment in patients with Kawasaki disease

Purpose: Studies have been conducted to identify factors predictive of resistance to intravenous immunoglobulin (IVIG) for Kawasaki disease (KD). However, the results are conflicting. This study aimed to identify laboratory factors predictive of resistance to high-dose IVIG for KD by performing a meta-analysis of available studies using statistical techniques. Methods: All relevant scientific publications from 2006 to 2014 were identified through PubMed searches. Studies in English on KD and IVIG resistance that reported predictive factors were included. A meta-analysis was performed to calculate the effect size of various laboratory parameters as predictive factors for IVIG-resistant KD. Results: Twelve studies comprising 2,745 patients were included. Meta-analysis demonstrated significant effect sizes for several laboratory parameters: polymorphonuclear leukocytes (PMNs) 0.698 (95% confidence interval [CI], 0.469–0.926), C-reactive protein (CRP) 0.375 (95% CI, 0.086–0.663), pro-brain natriuretic peptide (pro-BNP) 0.561 (95% CI, 0.261–0.861), total bilirubin 0.859 (95% CI, 0.582–1.136), aspartate aminotransferase (AST) 0.503 (95% CI, 0.313–0.693), alanine aminotransferase (ALT) 0.436 (95% CI, 0.275–0.597), albumin –0.427 (95% CI, –0.657 to –0.198), and sodium –0.604 (95% CI, –0.839 to –0.370). In particular, total bilirubin, PMN, sodium, pro-BNP, and AST, in descending order of magnitude, demonstrated more than a medium effect size. Conclusion: Based on these results, laboratory factors predictive of IVIG-resistant KD included higher total bilirubin, PMN, pro-BNP, AST, ALT, and CRP, and lower sodium and albumin. The presence of several of these predictive factors should alert clinicians to an increased likelihood that the patient may not respond adequately to initial IVIG therapy.

Introduction

Kawasaki disease (KD) is the most common cause of acquired cardiac disease in children. It can cause systemic vasculitis and lead to serious coronary complications. Although high-dose intravenous immunoglobulin (IVIG) treatment in the acute stage of KD has been shown to be effective, 10% to 20% of patients are resistant to initial IVIG treatment. These patients are at the greatest risk of developing coronary artery aneurysm, coronary artery stenosis, myocardial infarction, and other serious complications1). IVIG resistance is defined as the need for a second dose of IVIG because of persistent or recrudescent fever despite initial IVIG treatment. Recent studies have investigated factors for predicting resistance to IVIG for KD, but their results have been conflicting. The aim of this study was to identify laboratory factors predicting resistance to high-dose IVIG for KD by searching PubMed and performing a meta-analysis of the available data using statistical techniques.

Materials and methods

All relevant scientific publications were identified through PubMed searches. Search terms included "Kawasaki disease, IVIG resistance" or "IVIG resistant Kawasaki disease" combined with a second identifying phrase, "predictive factors or risk factors". Studies spanned the period from 2006 to 2014. Before including a study in our meta-analysis, we applied several inclusion and exclusion criteria. Inclusion criteria: To be included in the meta-analysis of this study, all articles had to meet the following criteria: (1) Retrospective or prospective cohort study design investigating KD.
(2) A population focus on children (aged 0-18 years) receiving IVIG therapy within a hospital setting. (4) Laboratory values obtained before IVIG therapy and compared between the IVIG-responsive and IVIG-resistant groups. (5) Kawasaki disease diagnosed according to the criteria published by the American Heart Association in 2004. Exclusion criteria: We screened studies by title, then abstract, then full text. Exclusion criteria included papers that were not available in English. Initially, 25 abstracts were excluded owing to duplications. A total of 96,717 abstracts were excluded because the topic did not include KD. Another 10 studies were excluded because their authors could not provide data. Details of the abstract review process are presented in a flow diagram (Fig. 1). Statistics: Meta-analysis was performed using the Comprehensive Meta-Analysis software (Biostat, Englewood, NJ, USA). Heterogeneity was quantified using the I² statistic, which describes the percentage of total variation across the studies that is the result of heterogeneity rather than chance. In the absence of heterogeneity, studies were combined using the Mantel-Haenszel fixed- or random-effects method of meta-analysis. If visual inspection of the forest plot or a high I² value suggested heterogeneity, potential causes were explored using subgroup analyses, mixed-effects (fixed or random) models, and by looking for methodological differences between the studies. If no explanation could be found, random-effects meta-analysis was performed. In addition, the Q test was performed to determine whether the studies were homogeneous: Q = Σᵢ Wᵢ(Yᵢ − M)², where Wᵢ = 1/Vᵢ is the weight of study i, Yᵢ is the effect size of study i, M is the overall effect size, and k is the number of studies. In this study, we calculated the "standardized mean difference" effect size (d) and its variance (Vd) from the group means, standard deviations, and sample sizes reported in each study; R denotes the correlation coefficient between the pre- and post-treatment means reported in the analyzed papers. All summary effects are presented with 95% confidence intervals (CI).

Results

Our review included data from 2,735 patients from 12 studies, with patient cohort sizes ranging from 77 to 1,177. The studies were published over the period from 2006 to 2014 and took place across 5 countries: South Korea, Japan, the United States, China, and Taiwan (1-5 studies each). These studies' characteristics are summarized in Tables 1 and 2. Effect size of white blood cells: Twelve included studies calculated the effect size of the white blood cell count as a predictive factor of IVIG-resistant KD. No significant heterogeneity of these studies was observed (Q=12.446, P>0.001, I²=11.619). Meta-analysis demonstrated a small effect size for white blood cells (fixed effects, 0.121; 95% CI, 0.02-0.222; z=2.337; P=0.019). Therefore, this factor was not considered to have a meaningful effect in our analysis (Fig. 2). Effect size of PMNs: Ten included studies calculated the effect size of PMNs as a predictive factor of IVIG-resistant KD. Meta-analysis demonstrated a medium effect size for PMNs (0.698; 95% CI, 0.469–0.926). Therefore, we conclude that PMN count had an effect (Fig. 3). Effect size of ESR: Seven included studies calculated the effect size of erythrocyte sedimentation rate (ESR) as a predictive factor of IVIG-resistant KD. Heterogeneity among these studies was low (Q(6)=11.001, P>0.001, I²=45.459). Meta-analysis demonstrated a small effect size for ESR (random effects, 0.150). Therefore, ESR showed no effect (Fig. 5).
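To make the effect-size calculation concrete, the following is a minimal sketch of the standardized mean difference and the fixed/random-effects pooling described in the Statistics subsection, written in Python with NumPy. The study itself used the Comprehensive Meta-Analysis software; the per-study summary values below are illustrative placeholders, not data from any of the included studies.

```python
import numpy as np

def smd_effect(m1, s1, n1, m2, s2, n2):
    """Standardized mean difference d and its variance V_d for one study
    comparing IVIG-resistant (1) vs IVIG-responsive (2) groups."""
    s_pooled = np.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / s_pooled
    v = (n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2))
    return d, v

def pool(d, v):
    """Fixed- and random-effects (DerSimonian-Laird) summaries of per-study
    effect sizes d with variances v, plus Cochran's Q and I^2."""
    d, v = np.asarray(d, float), np.asarray(v, float)
    w = 1.0 / v                                  # fixed-effect weights W_i = 1/V_i
    m_fixed = np.sum(w * d) / np.sum(w)          # overall effect size M
    q = np.sum(w * (d - m_fixed) ** 2)           # Q = sum W_i (Y_i - M)^2
    k = len(d)
    i2 = max(0.0, (q - (k - 1)) / q) * 100.0     # % of variation due to heterogeneity
    tau2 = max(0.0, (q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_star = 1.0 / (v + tau2)                    # random-effects weights
    m_rand = np.sum(w_star * d) / np.sum(w_star)
    se_rand = np.sqrt(1.0 / np.sum(w_star))
    ci = (m_rand - 1.96 * se_rand, m_rand + 1.96 * se_rand)
    return m_fixed, m_rand, ci, q, i2

# Illustrative (made-up) per-study summary statistics for one laboratory parameter:
# (mean_resistant, sd_resistant, n_resistant, mean_responsive, sd_responsive, n_responsive)
studies = [
    (75.0, 12.0, 20, 65.0, 13.0, 90),
    (72.0, 10.0, 35, 66.0, 11.0, 160),
    (78.0, 14.0, 15, 68.0, 12.0, 70),
]
d_list, v_list = zip(*(smd_effect(*s) for s in studies))
print(pool(d_list, v_list))
```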
Effect size of sodium: Ten included studies calculated the effect size of sodium as a predictive factor of IVIG-resistant KD. Heterogeneity among these studies was significant (Q(9)=30.15, P<0.001, I²=70.149). Meta-analysis demonstrated a significant effect size for sodium (random effects, -0.604; 95% CI, -0.839 to -0.370; z=-5.047; P<0.001). Therefore, sodium had an effect (Fig. 12).

Fig. 8. The effect size of total bilirubin as a predictive factor of resistance to intravenous immunoglobulin therapy in Kawasaki disease. CI, confidence interval. Fig. 9. The effect size of aspartate aminotransferase (AST) as a predictive factor of resistance to intravenous immunoglobulin therapy in Kawasaki disease. CI, confidence interval. Fig. 10. The effect size of alanine aminotransferase (ALT) as a predictive factor of resistance to intravenous immunoglobulin therapy in Kawasaki disease. CI, confidence interval.

Discussion

Some patients with KD fail initial IVIG therapy for reasons that are not clear. Retrospective studies have identified potential factors that predict which patients will require further therapy for refractory disease. The presence of one or more of these risk factors for treatment failure should alert clinicians to an increased likelihood that the patient may not respond adequately to the initial IVIG therapy. The ability to predict a lack of response to IVIG before initiating therapy would allow clinicians to identify these patients, because they might benefit from more aggressive treatment. Several previous studies have published data that could be used for predicting nonresponse to IVIG therapy and patients who are at high risk for coronary artery lesions (CALs)2-13). These data include duration of fever, male sex, serum CRP, ESR, white blood cell count, PMN cell count, hemoglobin, platelet count, transaminase, total bilirubin, pro-BNP, albumin, and sodium levels. However, these studies did not have consistent results, and no single marker proved to be indicative of KD patients who were resistant to IVIG therapy. In one such study, Kuo et al. 10) reported a univariate analysis of 131 patients. This analysis showed that patients who had initial findings of a high neutrophil count, abnormal liver function, a low serum albumin level, and pericardial effusion were at risk for IVIG treatment failure. Multivariate analysis using a logistic regression procedure showed that serum albumin level (≤2.9 g/dL) was an independent predictive factor of IVIG resistance in patients with KD (P=0.006; odds ratio, 40; 95% CI, 52.8-562). In another study, Sano et al. 12) reported that a univariate analysis of pre-IVIG data from 112 patients showed that the neutrophil count and serum levels of CRP, total bilirubin, AST, ALT, and lactate dehydrogenase were significantly higher in IVIG-nonresponders than in IVIG-responsive patients. A multivariate analysis selected CRP (P=0.009), total bilirubin (P=0.001), and AST (P=0.002) as independent predictors of nonresponsiveness to initial IVIG treatment. After defining predictive values, patients with at least 2 of 3 predictors (CRP ≥7.0 mg/dL, total bilirubin ≥0.9 mg/dL, or AST ≥200 IU/L) were considered to be nonresponsive to IVIG for acute KD. Kobayashi et al. 13) reported risk predictors such as day of illness at initial treatment, age in months, percentage of white blood cells representing neutrophils, platelet count, and serum AST, sodium, and CRP. Tremoulet et al.
14) mentioned a high percentage of band cells, a long illness day at treatment, high gamma-glutamyl transferase, and low age-adjusted hemoglobin as factors for predicting the risk of IVIG nonresponse. Park et al. 7) reported that elevated ALT and total bilirubin levels were significant in the IVIG-resistant group. Yoshimura et al. 6) reported that 19 of 80 patients developed CAL despite IVIG administration. These patients had a significantly higher serum N-terminal (NT)-pro-BNP level than patients who did not develop CAL. The NT-pro-BNP cutoff value of 1,300 pg/mL yielded a sensitivity of 95% and a specificity of 85% for predicting CAL. However, 17 of the 80 patients were IVIG nonresponders, who also had significantly higher serum NT-pro-BNP than the IVIG responders. The NT-pro-BNP cutoff value of 800 pg/mL yielded a sensitivity of 71% and a specificity of 62% for predicting IVIG nonresponders. The previous studies had limitations, such as the number of IVIG-nonresponders being too small to constitute a representative sample. In addition, no consensus on risk factors has been reached. Many studies of such risk factors were conducted in Japanese patients, and a scoring system for predicting IVIG-nonresponders using various parameters was proposed. However, when these risk-scoring systems were applied to subjects from other ethnic groups, specificity was high but sensitivity was low, so they had difficulty in screening out high-risk patients11). In this meta-analysis, we included studies from South Korea, China, Taiwan, and the United States, in addition to Japan. Earlier and more effective primary therapy in patients who are predicted to be nonresponsive to IVIG might reduce their risk of coronary artery complications. Combining the usual initial IVIG therapy with another agent may be a feasible choice for patients who are predicted to have IVIG-resistant KD. At present, controversy remains about whether combination therapy is associated with a better outcome in these patients1). It may also be important to determine the effectiveness of IVIG soon after therapy, especially in treatment-resistant cases, to assess the patient's need for additional therapy to prevent further dilatation of the coronary artery. Kim et al. 8) suggested guidelines for retreating KD patients who received IVIG treatment; these authors recommend that, in early-stage disease, additional therapy should be administered to febrile patients who have high values of CRP, NT-pro-BNP, and/or neutrophil counts after IVIG therapy. However, no definitive algorithm exists to determine the effectiveness of IVIG therapy and retreatment. In addition, it is also uncertain which agents are effective alternative treatments after failure to defervesce with initial IVIG treatment1). In conclusion, controversy remains as to which risk factors predict a patient's response to IVIG and what additional treatments should be applied to reduce the development of CAL. After performing our meta-analysis of large-scale studies, we found several laboratory predictive factors for IVIG-resistant KD. Therefore, in the presence of several such factors, clinicians should be alert to an increased likelihood that the patient may not respond adequately to initial IVIG therapy.
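The univariate and multivariate logistic-regression analyses cited in the Discussion (e.g., Kuo et al., Sano et al.) can be illustrated with a short sketch. The feature names, coefficients, and synthetic data below are hypothetical placeholders; this is not the fitting procedure or data of any cited study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 200

# Hypothetical pre-IVIG laboratory values: CRP, total bilirubin, AST, sodium, albumin
X = np.column_stack([
    rng.gamma(shape=3.0, scale=3.0, size=n),    # CRP (mg/dL)
    rng.gamma(shape=2.0, scale=0.4, size=n),    # total bilirubin (mg/dL)
    rng.gamma(shape=2.0, scale=40.0, size=n),   # AST (IU/L)
    rng.normal(137.0, 3.0, size=n),             # sodium (mmol/L)
    rng.normal(3.8, 0.5, size=n),               # albumin (g/dL)
])
# Synthetic outcome: 1 = IVIG-resistant, 0 = IVIG-responsive (arbitrary generating model)
logit = 0.08 * X[:, 0] + 1.2 * X[:, 1] - 0.3 * (X[:, 3] - 137) - 1.0 * (X[:, 4] - 3.8) - 2.0
y = rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-logit))

model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X, y)
coefs = model.named_steps["logisticregression"].coef_.ravel()
for name, c in zip(["CRP", "bilirubin", "AST", "sodium", "albumin"], coefs):
    print(f"{name}: standardized coefficient {c:+.2f}")
```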
2016-05-04T20:20:58.661Z
2016-02-01T00:00:00.000
{ "year": 2016, "sha1": "29e6af6a77d0a06fb9cfb1d6adc3d2abf6029297", "oa_license": "CCBYNC", "oa_url": "http://kjp.or.kr/upload/pdf/kjped-59-80.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "29e6af6a77d0a06fb9cfb1d6adc3d2abf6029297", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
224902325
pes2o/s2orc
v3-fos-license
X-ray Zernike phase contrast tomography: 3D ROI visualization of mm-sized mice organ tissues down to sub-cellular components Thanks to its non-invasive nature, X-ray phase contrast tomography is a very versatile imaging tool for biomedical studies. In contrast, histology is a well-established method, though having its limitations: it requires extensive sample preparation and it is quite time consuming. Therefore, the development of nano-imaging techniques for studying anatomic details at the cellular level is gaining more and more importance. In this article, full field transmission X-ray nanotomography is used in combination with Zernike phase contrast to image millimeter sized unstained tissue samples at high spatial resolution. The regions of interest (ROI) scans of different tissues were obtained from mouse kidney, spleen and mammalian carcinoma. Thanks to the relatively large field of view and effective pixel sizes down to 36 nm, this 3D approach enabled the visualization of the specific morphology of each tissue type without staining or complex sample preparation. As a proof of concept technique, we show that the high-quality images even permitted the 3D segmentation of multiple structures down to a sub-cellular level. Using stitching techniques, volumes larger than the field of view are accessible. This method can lead to a deeper understanding of the organs’ nano-anatomy, filling the resolution gap between histology and transmission electron microscopy. Introduction X-ray tomography is a high-resolution 3D imaging approach suitable for exploring the inner structure of different kind of materials, ranging from engineering to life sciences applications. Biological samples normally suffer from weak absorption contrast in the hard X-ray regime. Hard X-ray tomography is often combined with phase contrast techniques like free-space propagation [1], in-line holography [2], grating interferometers [3], and Zernike phase contrast [4][5][6][7] in order to study structural features in soft tissues [8]. Tomography has the great advantage of collecting 3D information by preserving the native state of the samples. Thanks to its non-destructive nature and the high brilliance offered by synchrotron facilities, X-ray tomography demonstrated to be a powerful tool for assessing the anatomy of human and rodent samples in several pre-clinical studies [9]. However, standard X-ray tomography (i.e. X-ray microtomography) is limited by a spatial resolution of the order of 0 -1 µm. Thus, it is not appropriate for visualizing details at the cellular scale. In order to gain a deeper understanding of the tissues' sub-architecture, X-ray nanotomography has to be applied. A range of studies has been published focusing on holotomography. Kimchenko et al. were one of the first groups performing nano-imaging of unstained human brain tissues with a spatial resolution down to 88 nm [10]. Töpperwien et al. demonstrated that nano-holotomography can complement histological inspections [11] and provide quantitative and statistical analysis with a half-period resolution of 370 nm [12]. Pacureanu et al. disentangled dense neuronal structures in Drosophila melanogaster and mice nervous tissue with measured resolutions from 222 nm down to 87 nm [13]. Massimi et al. explored the amyloid plaques morphology in mouse models of the Alzheimer's disease [14] and Cedola et al. showed the degenerative effects of multiple sclerosis at the level of vascular and neuronal networks with a spatial resolution of 260 nm [15]. 
Though holotomography is a multi-scale approach bridging the gap between X-ray microtomography and nanotomography, it requires time consuming and computing intense reconstruction algorithms. In a typical holotomography configuration, tomographies have to be performed at 3 -4 propagation distances for each sample for retrieving the quantitative phase signal. Moreover, specific algorithms based on the Fresnel diffraction model need to be applied prior to tomographic reconstruction in order to retrieve the phase shift caused by the sample [16]. Other nano-imaging studies were carried out by laboratory benchtop devices [17] or storage ring-based X-ray microscopes [18] following tissue staining protocols. Zernike phase contrast enables fast, single distance nanotomography scans without the need of staining. In contrast to holotomography, Zernike reconstructions are fast and can even be performed on the fly. Even if the phase shift is not acquired in a quantitative way, the contrast enhancement is sufficient for the structural analysis of many biomedical and biological applications. Most of the transmission X-ray microscope (TXM) instruments using Fresnel Zone Plates (FZP) offer fields of view (FoVs) smaller than 50 µm x 50 µm in absorption and Zernike phase contrast mode [19][20][21]. Therefore ultra-small samples (between 8 -100 µm in diameter) are required for imaging, prepared by focused ion beam scanning electron microscopy (FIB-SEM) [21,22]. This article is the first one to demonstrate that high-quality ROI nanotomography scans (in terms of contrast and resolution) can be obtained from millimeter sized tissue samples, without thinning or staining of the specimens. A large FoV was obtained using a beamshaper specifically designed for the presented TXM (see section 2), having 100 µm sub-fields, instead of the typically used 50 µm sub-fields [19,21]. The FoV can even be enlarged using stitching methods [19]. The 3D assessment and characterization of several tissues using this approach is presented here. The technique is compatible with the classical 2D histological analysis but with the advantage of handling 3D information without physical sectioning of the specimens and without using staining agents for enhancing the visualization of the tissue components. Compared to well-established light microscopes techniques enabling the 3D virtual histology, such as confocal microscopy, two-photon-microscopy, light-sheet microscopy [23][24][25][26][27][28] and nonlinear microscopy [29], X-ray Zernike nanotomography ensures a deep tissue penetration without extensive sample preparation (optical clearing solvents, fluorescence dyes, etc.). Zernike phase contrast allows the visualization of multiple anatomical features enabling spatial resolutions down to the nanometer scale and fast volumetric reconstructions. The achieved contrast and resolution permitted the identification of structural details down to sub-cellular components. Unlike soft X-ray tomography [30][31][32], the high photon energy permits imaging at room temperature and atmospheric pressure. Therefore the herewith presented technique can be considered as a powerful tool accessing samples of large volumes at the nanoscale, by complementing histological analysis and filling the resolution gap between histology and TEM. Mice sample preparation The experiment has been approved by the local and the national ethics committees, following the European guidelines for the use of animals (APAFIS #8782-201732813328550 v1). 
Mice were sub-cutaneously injected with 1M mice mammalian carcinoma cells, 2 weeks before euthanasia and after deep isoflurane induction. The kidneys, spleen and tumor were collected, dehydrated in ethanol and embedded in paraffin for long-term storage. Then, small samples of organs were isolated and glued on the top of pins to perform the X-ray imaging. X-ray Zernike nanotomography setup The data were acquired at the P05 imaging beamline [33], operated by the Helmholtz-Zentrum Geesthacht (HZG), at the PETRA III storage ring at DESY in Hamburg, Germany. Different mice tissues were imaged by a full field high-resolution X-ray Zernike microscope, optimized for an energy of 11.1 keV [34]. The experimental setup (see Fig. 1) was composed of a beam-shaping condenser lens [35] for illuminating the sample, a Fresnel Zone Plate (FZP) for magnifying the sample image onto the detector and a set of concentric phase rings for achieving the Zernike phase contrast. The beam-shaping optics of 1.8 mm diameter and with 100 µm sub-fields produced a 100 µm x 100 µm square illumination at the sample plane. The FZP of 280 µm in diameter and 50 nm outermost zone was used as an objective lens. The FZP and the negative phase rings were made of gold and produced by electron-beam lithography and electroplating on silicon nitride membranes. The optimal working distance between sample and beam-shaper was 78 cm, while the phase rings were placed in the back focal plane of the FZP, i. e. 125.34 mm. The adopted configuration allowed achieving a FoV of 74 µm x 74 µm and an effective pixel size of 36 nm (binning 2 used, effective pixel size 72 nm). In order to block the direct beam, a beam stop with 800 µm diameter was installed close to the beam shaper; while four order sorting apertures were used to block higher orders. Illumination optics, FZP and phase rings were designed and fabricated by the Paul Scherrer Institut in Villigen, Switzerland. The raw images were acquired with a Hamamatsu X-ray sCMOS camera having a 10 µm Gadolinium oxysulfide scintillator, 2048 x 2048 number of pixels with a physical pixel size of 6.5 µm x 6.5 µm. 50 flat-field projections were acquired at the beginning and at the end of each tomographic scan and 100 dark images were used for the normalization. Each tomography was acquired in less than 30 minutes. Data processing In order to attenuate the noise and improve the density resolution [36], binning 2 was applied on the raw projections prior to the reconstruction; thus, leading to an effective pixel size of 72 nm. Subsequently, a median filter with a radius of 2 pixels was applied over the normalized projections. Moreover, for minimizing the ring artefacts, potential horizontal stripes were removed from the sinograms by applying the Fourier-wavelet method [37]. Tomographic image reconstructions were performed using the gridrec algorithm from tomopy [38,39]. The reconstructed volumes were analyzed with Fiji [40] image-processing package. The experimental resolution was estimated through the Fourier Shell Correlation with a half bit-threshold method [41,42], while the stitching was performed by the NRStitcher software [43] (Visualization 1). The 3D visualization of the data and the segmentation was achieved by Avizo Lite 9.4. Histological analysis Paraffin sections of 4 to 6 µm were obtained from the samples, de-paraffined using toluene, re-hydrated and staining with hematoxylin and eosin solutions. After staining, the samples were dehydrated, mounted, and imaged using a 10x microscope. 
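The preprocessing and reconstruction chain described in the Data processing subsection can be sketched as follows. The sketch uses tomopy and SciPy and assumes that projections, flat fields, dark fields, and angles have already been loaded as NumPy arrays; file names, array shapes, and the rotation-center value are placeholders, and the actual beamline pipeline may differ in detail.

```python
import numpy as np
import scipy.ndimage as ndi
import tomopy

# proj: (n_angles, n_rows, n_cols) raw Zernike projections; flat, dark: reference images
proj = np.load("projections.npy")
flat = np.load("flats.npy")
dark = np.load("darks.npy")
theta = np.linspace(0, np.pi, proj.shape[0], endpoint=False)

# Flat-/dark-field normalization
proj = tomopy.normalize(proj, flat, dark)

# 2x2 binning of each projection to improve density resolution
# (effective pixel size doubles, e.g. 36 nm -> 72 nm)
n, r, c = proj.shape
proj = proj[:, : r // 2 * 2, : c // 2 * 2].reshape(n, r // 2, 2, c // 2, 2).mean(axis=(2, 4))

# Median filter with a radius of ~2 pixels on the normalized projections
proj = ndi.median_filter(proj, size=(1, 5, 5))

# Fourier-wavelet ring-artifact suppression on the sinograms
proj = tomopy.remove_stripe_fw(proj)

# Tomographic reconstruction with the gridrec algorithm
# (no -log step here: the Zernike signal is not a pure absorption contrast)
center = proj.shape[2] / 2.0        # placeholder; determined experimentally in practice
rec = tomopy.recon(proj, theta, center=center, algorithm="gridrec")
np.save("reconstruction.npy", rec)
```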
Results In this work, full field transmission X-ray nanotomography (TXM) operated in Zernike phase contrast microscopy mode was used for a qualitative investigation of different mice organ tissues of macroscopic size (0.5 -1mm). A representative scheme illustrating the adopted configuration is shown in Fig. 1. Fig. 1. Schematic drawing of the Zernike TXM installed at the P05 imaging beamline, at PETRA III (DESY, Germany). The components "OSA" and "FZP" are the short abbreviation for "order sorting apertures" and "Fresnel Zone Plate", respectively. This microscopy setup is currently available at the P05 imaging beamline (PETRA III, DESY) operated by the Helmholtz-Zentrum Geesthacht. This kind of arrangement was first described in Vartiainen et al. [35]. It consists of the following diffractive elements: a beamshaping optics for condensing the X-ray radiation onto the specimen and a Fresnel Zone Plate (FZP) lens for producing a magnified image of the sample onto the detector. In order to block the direct beam and the high orders, a beam stop is mounted downstream of the beamshaper and order sorting apertures (OSA) are installed downstream of the sample. The comparably large FoV of 74 µm x 74µm is achieved by using a beamshaper with 100 µm sub-fields (see section 2). The key element for Zernike phase contrast is a set of concentric phase rings placed in the back-focal plane of the FZP. The phase rings induce a phase shift of the background wave (undeviated by the sample) by pi/2. As a consequence, it interferes with the wave carrying the information about the sample structure and leads to phase-enhanced contrast imaged by the detector [44]. For the results shown in this work, negative phase rings were used that produced a background shift of 270 degree. By exploiting a long sample-detector distance (20.45 m), a magnification of 161 was achieved. This configuration allowed acquiring images with an effective pixel size of 36 nm and enabled accessing data with an experimental half-period resolution equal to 152.11 nm (see section 2), compared to the theoretical one, i.e. Rayleigh resolution [44], equal to 61 nm (i.e. 1.22 times the FZP zone width, 50 nm). The nano-anatomy of three different types of mice tissues were explored: kidney, spleen and mammalian carcinoma. All the imaged tissue samples had a macroscopic size ranging between 0.5 -1 mm. The samples were extracted from paraffin-embedded organ slices of 1 mm thickness. Figure 2(a) shows a virtual slice of the kidney resulting from tomographic reconstruction. The image resolves the arrangement of several renal tubules and allows the identification of tubular ultra-structures in the range of 0.5 -5 µm in size. The orange rectangle in Fig. 2(a) outlines a single tubule and the corresponding magnified image is displayed in Fig. 2(b). According to the grey-scale map, high-density regions are visualized in black and low-density regions are displayed in white. The tubules are nephron segments and are composed of different cells. The achieved contrast and resolution enabled the discrimination of the different elements composing the tubule cells. The tubule perimeter is marked with a blue dashed line in Fig. 2(b), while the lumen is highlighted by a green dashed line. Structures like the ones indicated by the yellow dashed line are the cellular nuclei. Even black sub-structures are observed inside. These are the nucleoli (typically of 1 µm diameter) and are pointed out by an orange arrow. 
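As a quick cross-check of the quoted optics parameters, the thin-lens relations for a Fresnel zone plate reproduce the numbers given in the text (back focal distance of about 125 mm, Rayleigh resolution of 61 nm, magnification of roughly 161). This is only a consistency estimate under simple imaging assumptions, not part of the published analysis.

```python
# FZP optics cross-check using values quoted in the text
energy_keV = 11.1           # photon energy
D = 280e-6                  # FZP diameter (m)
dr = 50e-9                  # outermost zone width (m)
L = 20.45                   # sample-to-detector distance (m)

wavelength = 1.2398e-9 / energy_keV          # lambda [m] = 1.2398 nm*keV / E
f = D * dr / wavelength                      # FZP focal length
rayleigh = 1.22 * dr                         # diffraction-limited (Rayleigh) resolution
magnification = L / f - 2                    # from 1/p + 1/q = 1/f with p + q = L, large M

print(f"focal length       : {f * 1e3:.2f} mm")      # ~125.3 mm (text: 125.34 mm)
print(f"Rayleigh resolution: {rayleigh * 1e9:.0f} nm")  # 61 nm
print(f"magnification      : {magnification:.0f}")      # ~161 (text: 161)
```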
Outside the tubule, the two red arrows indicate a glomerulus. Figure 2(c) is another nanotomographic insight of the tubule cells distribution in the kidney. The contrast is sufficient for recognizing morphological differences among the tubules. For instance, the central tubule is spherical with a large lumen, while the surrounding tubules have a more elongated shape and a differently structured lumen. Very likely, these structures are the distal and proximal tubule cells in the outer part of the kidney. Both of these types of cell structures are present in the kidney cortex, the anatomical region where this sample was derived. Typically, the proximal tubules have a brush border of microvilli that allows to distinguish them from the distal tubules [45][46][47]. This feature was not discriminated with our instrument, being in the size range of 100 -200 nm in length and 50 nm in diameter [48], while it can be easily determined by histology and TEM. However, the distal tubules are known to be small (typically 30 -40 µm in diameter) and to have a round and large lumen, while the proximal tubules are elongated (typically 60 µm in diameter) with an uneven lumen. As mentioned above, these features are visible in Fig. 2(c) and are even more emphasized in Fig. 2(d), where a distal tubule (blue dashed line) appears to be surrounded by three proximal tubules (purple arrows). Some cell boundaries are visible in the reconstructed slice (Fig. 2(c)) but these very fine structures appear to be obscured in the minimum projection image (Fig. 2(d)). An improvement of the contrast to noise of these structures might be obtained using an optimized denoising algorithm prior to image reconstruction. Figure 2(d) presents the projection of the minimum intensity pixel values of 11 nanotomography slices with similar features. This rendering method virtually increases the thickness of the nanotomography slices and facilitates the comparison with histology [17]. The images obtained by nanotomography can be directly compared with histology. The histological analysis was performed on the same mouse kidney after the nanotomography scan, as it requires sectioning and staining of the sample. A histological image representative of the kidney cortex anatomy is illustrated in Fig. 2(e). A higher magnification view of the region within the black rectangle is displayed in Fig. 2(f) allowing a better comparison with the data obtained by nanotomography. Figure 2(e) shows a mixture of tubule cells (proximal and distal tubule cells) and glomeruli. The tubules are labelled in purple (eosin staining) and the nuclei appear violet in color due to the used staining reagents (hematoxylin staining), while the lumen is visualized in white. In contrast to histology, X-ray nanotomography offers the advantage of handling 3D information. Figure 3(a) presents a 3D rendering of the tomographic images acquired from the kidney tissue. The volume of interest is 73 µm in height and the available field of view allows the visualization of several tubule cells. The clear distinction of the borders of the tubule structures allowed mapping their 3D distribution, organization and packing. Figure 3(b) shows a thinner section (18.6 µm in thickness) and a segmented tubular structure is overlaid in color. Thanks to the differences in contrast, ultra-cellular components can be segmented within the tubule. Figure 3(c) shows its 3D rendering where the tubule is displayed in blue, the lumen in green and the nuclei in yellow. 
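The minimum-intensity rendering used for Fig. 2(d) is straightforward to reproduce from a reconstructed volume. The sketch below assumes the volume is stored as a NumPy array ordered (slice, y, x); the starting slice index and the 11-slice window follow the description above, while the file names are placeholders.

```python
import numpy as np

vol = np.load("reconstruction.npy")   # reconstructed volume, shape (n_slices, ny, nx)

def min_intensity_projection(volume, start, n_slices=11):
    """Project the minimum pixel value over n_slices consecutive slices,
    virtually thickening the virtual section (cf. Fig. 2(d)); dense
    structures appear dark, so the minimum keeps them visible."""
    stack = volume[start : start + n_slices]
    return stack.min(axis=0)

mip = min_intensity_projection(vol, start=200)   # 200 is an arbitrary example index
np.save("kidney_min_projection.npy", mip)
```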
Data segmentation is an important achievement for accessing statistical information, such as average diameter and volume of the labeled structures. Figure 4(a) and 4(b) show two virtual slices from the tomographic reconstruction of a mouse spleen sample. The investigated sample was selected at the interface between the red pulp and the white pulp, which are the main constituent matters of the spleen. The red pulp is characterized by a low cell density, while the white pulp exhibits a higher cell density. As indicated in Fig. 4(a) and 4(b), the X-ray method allowed the visualization of the nuclei belonging to multiple cells presented in the imaged ROIs. Similar to Fig. 2(b), the nuclei are the round tiny structures (typical size from 3 to 10 µm) and the nucleoli (central black spots inside the nuclei, typical size 1 µm) are also resolved. Figure 4(a) shows a lower distribution of cells in the cytoplasm and would correspond to a magnified area of the red pulp region. On the other hand, Fig. 4(b) shows a higher concentration of cells and would correspond to a magnified area of the white pulp region. The histological image in Fig. 4(c) is representative of the cellular distribution at the interface between red pulp and white pulp in the same mouse spleen sample. Figure 4(d) is a magnified region obtained from the histological image illustrated in Fig. 4(c). The staining protocol allowed the nuclei visualization in purple color with a higher concentration of cells detected in the white pulp rather than in the red pulp. Thus, the nanotomography results appear to be in good agreement with histology. Similarly, the cellular distribution in a mouse mammalian carcinoma was studied by nanotomography. This tumor type from mice origin is often selected for its huge capacity to accumulate any drugs after intravenous administration, since it has been reported to have an enhanced permeability and retention effect [49]. Figure 5(a) and 5(b) display two minimum intensity projection images selected from the nanotomography dataset. As it is expected for tumor tissues, Fig. 5 displays a chaotic arrangement of the cells with nuclei showing different structural features. The nucleus, indicated by the yellow arrow in Fig. 5(a), is clearly visualized with several black spots (the nucleoli) typically observed in mice cells. Figure 5(c) visualizes a 3D rendering of a volume of interest with 51 µm in height, selected from the nanotomography dataset. From this 3D representation, it is also possible to distinguish these morphological features of the cells composing the tumor tissue. Finally, Fig. 5(d) is a histological image obtained from this same tumor sample. Cell nuclei are labelled with a purple color and nucleolus are displayed in dark blue. Substantial similarities could be detected comparing the histological images with the tomographic ones, confirming the previous analysis. Conclusion and outlook In this study, we have demonstrated that ROI tomography combined with Zernike phase contrast large field of view microscopy is able to produce high-quality 3D images of different types of mice tissue without staining, labelling, sectioning or FIB sample preparation. The presented nano-imaging approach provided reliable 3D investigations of complex biological specimens, such as mouse organ tissues. As a proof of concept, we acquired images of mouse kidney, spleen, and mammalian carcinoma and we have proved that their anatomy can be fully recovered. 
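In the study, segmentation was carried out with Avizo Lite. As a rough illustration of how dense nuclei can be separated from a reconstructed volume to obtain statistics such as counts and volumes, a simple threshold-plus-connected-components sketch is shown below; the Otsu threshold, the morphological cleanup, and the size cutoff are assumptions that would need tuning per dataset.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import threshold_otsu

vol = np.load("reconstruction.npy")          # grey values: dense structures are dark

# Nuclei are dense (dark), so threshold the inverted volume
inv = vol.max() - vol
thr = threshold_otsu(inv)                    # global Otsu threshold (assumption)
mask = inv > thr

# Clean up and label connected components as candidate nuclei
mask = ndi.binary_opening(mask, iterations=2)
labels, n = ndi.label(mask)
sizes = ndi.sum(mask, labels, index=np.arange(1, n + 1))

voxel = 72e-3                                # voxel edge length in um (after binning 2)
volumes_um3 = sizes * voxel**3
keep = volumes_um3 > 5.0                     # discard tiny specks (placeholder cutoff)
print(f"{keep.sum()} candidate nuclei, median volume "
      f"{np.median(volumes_um3[keep]):.1f} um^3")
```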
Without staining or a specific sample preparation, the presented method enabled a high resolution morphological analysis of the tissues at short acquisition times and non-destructively. The quality of the images, in terms of resolution and contrast, allowed the 3D identification and segmentation of the different tissue components. The obtained 3D information can be considered complementary and supportive to well-known microscopy techniques, such as 2D classical histology where small details are quite often irretrievably lost during the sectioning or invisible if the staining method is not appropriate. X-ray nanotomography offers the advantage of observing the 3D spatial organization of the tissues structures from different orientations and in particular to visualize the inter-cell connections. This is not always easy to recognize by standard 2D imaging methods. The 3D knowledge is of primary interest for detecting cell alterations induced by diseases or the evaluation of the tissue response to possible clinical treatments. Thus, the development of X-ray nanotomography may find a wide range of applications. For instance, its results are very attractive for the field of brain imaging, where the 3D information plays a crucial role for understanding the tissue changes in the case of neurodegenerative diseases. In comparison to histology, however, the presented method enables a high-resolution morphological analysis of the tissues at short acquisition times, without staining and non-destructively. Finally, the relatively fast interior tomographic measurement of below 30 min per scan allows imaging multiple ROIs of the same sample, consecutively. Single volumes can be stitched together resulting in larger volumetric datasets. An example of such a stitched volume is displayed in Fig. 6 (Visualization 1). A 2D nanotomography slice obtained from the stitching of 4 reconstructed nanotomography slices is presented. The scans were performed with an overlap of 40 µm at a scan time of 15 min each. The whole volume of 100 x 100 x 74 µm 3 was performed in 1h. The stitched image shows a final cropped FoV equal to 101.66 µm x 104.40 µm. Even larger FoV can be achieved by collecting other tomographies in a tile sequence. This is an added-value for this microscopy technique that offers the possibility of detecting small structures in a large adjustable volume of interest. Therefore, we believe that this imaging technique, bridging the scale gap between TEM and histology, can support the understanding of the cell arrangement, revealing abnormalities induced by diseases and pathologies and the influence of clinical treatments from a microscopic point of view, covering applications from biology up to tissue engineering.
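Volume stitching in the study was performed with the NRStitcher software. The basic idea of registering two overlapping scans before blending can be illustrated with phase correlation, as in the following sketch; the file names are placeholders, both ROIs are assumed to have the same shape, and real stitching pipelines also handle non-rigid deformation, which this does not.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift
from skimage.registration import phase_cross_correlation

vol_a = np.load("roi_a.npy")    # two reconstructed ROIs with ~40 um overlap, same shape
vol_b = np.load("roi_b.npy")

# Estimate the rigid offset of vol_b relative to vol_a from their overlap
offset, error, _ = phase_cross_correlation(vol_a, vol_b, upsample_factor=4)

# Resample vol_b onto vol_a's grid and average where both volumes contribute
vol_b_aligned = nd_shift(vol_b, offset, order=1)
stitched = np.where(vol_b_aligned != 0, 0.5 * (vol_a + vol_b_aligned), vol_a)
np.save("stitched.npy", stitched)
```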
2020-07-16T09:06:38.806Z
2020-09-10T00:00:00.000
{ "year": 2020, "sha1": "a5cdad1a98ef9480c1140c0af34c0cfa41814528", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1364/boe.396695", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "d17a3d4426ac6df6058a18a090469452b4f36b4f", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Materials Science", "Medicine" ] }
3554869
pes2o/s2orc
v3-fos-license
The on-off network traffic model under intermediate scaling The result provided in this paper helps complete a unified picture of the scaling behavior in heavy-tailed stochastic models for transmission of packet traffic on high-speed communication links. Popular models include infinite source Poisson models, models based on aggregated renewal sequences, and models built from aggregated on-off sources. The versions of these models with finite variance transmission rate share the following pattern: if the sources connect at a fast rate over time the cumulative statistical fluctuations are fractional Brownian motion, if the connection rate is slow the traffic fluctuations are described by a stable L\'evy process, while the limiting fluctuations for the intermediate scaling regime are given by fractional Poisson motion. Introduction It is well-known that packet traffic on high-speed links exhibit data characteristics consistent with long-range dependence and self-similarity. To explain the possible mechanisms behind this behavior, various network traffic models have been developed where these features arise as heavy-tailed phenomena; see Resnick (2007) [15]. A natural basis for modeling such systems, applied early on during these developments, is the view of packet traffic composed of a large number of aggregated streams where each source alternates between an active on-state transmitting data and an inactive off-state. The traffic streams generate on average a given mean-rate traffic, they have stationary increments and they are considered statistically independent. In particular, the transmission channel is able to accommodate peak-rate traffic corresponding to all sources being in the on-state. To capture in this model the strong positive dependence manifest in empirical trace data measurements, it is assumed that the duration of on-periods and/or off-periods are subject to heavy-tailed probability distributions. It is then interesting to analyze the workload of total traffic over time and understand the random fluctuations around its cumulative average. Our continued interest in these questions comes from the finding that several scaling regimes exist with disparate asymptotic limits. The first result of the type we have in mind is Taqqu, Willinger, Sherman (1997) [16], which introduces a double limit technique. In this sequential scheme, if the on-off model is averaged first over the level of aggregation and then over time the resulting limit process is fractional Brownian motion. As the fundamental example of a Gaussian self-similar process with long-range dependence, this limit preserves the inherent long-range dependence of the original workload fluctuations. On the other hand, averaging first over time and then over the number of traffic sources the limit process is a stable Lévy process. This alternative scaling limit is again self-similar but lacks long memory since the increments are independent. Moreover, having infinite variance the limiting workload is itself heavy-tailed. In Mikosh, Resnick, Rootzén, Stegeman (2002), [4], the double limits are replaced by a single scheme where instead the number of sources grows at a rate which is relative to time. Two limit regimes of fast growth and slow growth are identified and two limit results corresponding to these are established, where again fractional Brownian motion and stable Lévy motion appear as scaled limit processes of the centered on-off workload. 
The purpose of this paper is to show that an additional limit process, fractional Poisson motion, arises under an intermediate scheme which can be viewed as a balanced scaling between slow and fast growth. In this case the scale of time grows essentially as a power function of the number of traffic sources. As will be recalled, fractional Poisson motion does indeed provide a bridge between fractional Brownian motion and stable Lévy motion. The intermediate limit scheme discussed here is indicated in Kaj (2002) [8], and introduced in Gaigalas and Kaj (2003) [7], where limit results are given for a different but related class of traffic models under three scaling regimes referred to as slow, intermediate and fast connection rate. The workload process is again the superposition of independent traffic streams with stationary increments, but now each source generates packets according to a finite-mean renewal counting process with heavy-tailed inter-renewal cycle lengths. The link to the class of on-off models is that each pair of an on-period and a successive off-period forms a renewal cycle, and the number of such on-off cycles generates a heavy-tailed renewal counting process. Moreover, if we associate with each renewal cycle a reward given by the length of its on-period and apply a suitable interpretation of partial rewards, then the corresponding renewal-reward process coincides with the on-off workload process. To explain briefly the limit result in [7] under intermediate connection rate, let (N_i(t))_i be i.i.d. copies of a stationary renewal counting process associated with a sequence of inter-renewal times of finite mean µ and a regularly varying tail function F̄(t) ∼ L(t)t^(−γ), characterized by an index γ, 1 < γ < 2, and a slowly varying function L. Let m → ∞ and a → ∞ in such a way that mL(a)/a^(γ−1) → µc^(γ−1) for some constant c > 0. Then a weak convergence result holds in which the limit Y_γ(t) is an almost surely continuous, positively skewed, non-Gaussian and non-stable random process, defined by a particular representation of the characteristic function of its finite-dimensional distributions. Additional properties of the limit process are obtained in Kaj (2005) [9] and Gaigalas (2006) [6], where it is shown with two different methods that Y_γ can be represented as a stochastic integral with respect to a Poisson measure N(dx, du). We call this process fractional Poisson motion with Hurst index H = (3 − γ)/2 ∈ (1/2, 1). With a suitable normalizing constant σ_γ we may put Y_γ(t) = σ_γ P_H(t) and obtain the standard fractional Poisson motion P_H. A calculation reveals Cov(P_H(s), P_H(t)) = (1/2)(|s|^(2H) + |t|^(2H) − |t − s|^(2H)). For comparison, fractional Brownian motion B_H of index H admits an analogous stochastic-integral representation in which M(dx, du) is a Gaussian random measure on R × R_+. The covariance functions of B_H and P_H coincide; these second-order relations are collected in the display below. The fast connection rate limit for the model of aggregated renewal processes applies if mL(a)/a^(γ−1) → ∞, and the slow connection rate limit if mL(a)/a^(γ−1) → 0. For suitable normalizing sequences, the limit processes under these assumptions are fractional Brownian motion with Hurst index H = (3 − γ)/2 in the case of fast growth and a stable Lévy process with self-similarity index 1/γ in the slow growth situation, see [7]. A number of other models have been suggested for the flow of traffic in communication networks.
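For reference, the relations just stated in the text can be collected in display form; nothing here goes beyond what is written above.

```latex
% Hurst index and covariance of standard fractional Poisson motion P_H;
% B_H denotes fractional Brownian motion with the same index H.
\[
  H = \frac{3-\gamma}{2} \in \left(\tfrac12, 1\right), \qquad 1 < \gamma < 2,
\]
\[
  \operatorname{Cov}\bigl(P_H(s), P_H(t)\bigr)
  = \operatorname{Cov}\bigl(B_H(s), B_H(t)\bigr)
  = \tfrac12\left(|s|^{2H} + |t|^{2H} - |t-s|^{2H}\right).
\]
% Intermediate-connection-rate balance between the number of sources m and
% the time scale a (c > 0 a constant, L slowly varying):
\[
  \frac{m\,L(a)}{a^{\gamma-1}} \;\longrightarrow\; \mu\, c^{\gamma-1}.
\]
```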
The superposition of independent renewal-reward processes applies more generally to sources which attain random transmission rates at random times, and not merely switch between on and off. For a model where the length of a transmission cycle as well as the transmission rate during the cycle are allowed to be heavy-tailed, Levy and Taqqu (2000) [11], Pipiras and Taqqu (2000) [13], and Pipiras, Taqqu and Levy (2004) [14], established results for slow and fast growth scaling analogous to those for the on-off model. In addition, they obtained as a fast growth scaling limit a stable, self-similar process with stationary but not independent increments, coined the telecom process. A further category of models for network traffic with long-range dependence over time starts from the assumption that long-lived traffic sessions arrive according to a Poisson process. The sessions carry workload which is transmitted either at fixed rate, at a random rate throughout the session, or at a randomly varying rate over the session length. Such models, called infinite source Poisson models, are widely accepted as realistic workload processes for internet traffic. Indeed, it is natural to assume that web flows on a non-congested backbone link are initiated according to a Poisson process while the duration of sessions and transmission rates are highly variable. The conditions under which slow, intermediate and fast scaling results exist and fractional Brownian motion, fractional Poisson motion, stable Lévy processes and telecom processes arise in the asymptotic limits are known in great detail for variants of the infinite source Poisson model, see Kaj and Taqqu (2008) [10]. In [10], Y γ is called the intermediate telecom process. Mikosh and Samorodnitsky (2007) [5] consider scaling limits for a general class of input processes, which includes as special cases the models already mentioned as well as other cumulative cluster-type processes. It is shown that fractional Brownian motion is a robust limit for a variety of models under fast growth conditions, whereas the slow growth behavior is more variable with a number of different stable processes arising in the limit. Our current result completes the picture for the intermediate scaling regime, where neither of the mechanisms of fast or slow growth are predominant. In this case, where the system workload is under simultaneous influence of Gaussian and stable domains of attraction, we show that the fluctuations which build up in the on-off model are robust and again described by the fractional Poisson motion, parallel to what is known to be valid for infinite source Poisson and renewal-based traffic models. In the next section 2 we introduce properly both the on-off model and the renewal-based model to be used as an approximation and we state the relevant background results for these models. In section 3 we state the main result and give the structure of the proof. Section 4 is devoted to remaining and technical aspects of the proof. The on-off model and background results We begin by introducing the on-off model using similar notations as in [4]. Let X on , X 1 , X 2 , . . . be i.i.d. non-negative random variables with distribution F on representing the lengths of onperiods. Similarly let Y off , Y 1 , Y 2 , . . . be i.i.d. non-negative random variables with distribution F off representing the lengths of off-periods. The X-and Y -sequences are supposed to be independent. For any distribution function F we writeF = 1 − F for the right tail. 
We fix two parameters, α on and α off , such that and assume that with L on , L off arbitrary functions slowly varying at infinity. Hence both distributions F on and F off have finite mean values µ on and µ off but their variances are infinite. Assumption (2) agrees with that of [4]. However, thanks to a simple symmetry argument, we can also cover the case α on > α off . The case α on = α off , for which the on-off process is an alternating renewal process, falls outside of the class of processes we are able to study within the methodology developed here. We consider the renewal sequence generated by alternating on-and off-periods. For the purpose of stationarity we introduce random variables (X 0 , Y 0 ) representing the initial onand off-periods as follows: let B, X eq on , Y eq off be independent random variables, independent of {X on , (X n ), Y off , (Y n )}, and such that B is Bernoulli with P(B = 1) = 1 − P(B = 0) = µ on /µ, and X eq on and Y eq off have distribution functions respectively. Now, let Note that X 0 and Y 0 are conditionally independent given B but not independent. At time t = 0 the system starts in the on-state if B = 1 and in the off-state if B = 0. With this initial distribution, the alternating renewal sequence is stationary and the probability that the system is in the on-state at any time t is µ on /µ. Renewal events occur at the start of each on-period. Inter-renewal times are given by the independent sequence where Z i has distribution F = F on * F off and mean µ = µ on + µ off for i ≥ 1, and Z 0 has distribution function F eq (x) = 1 µ x 0F (s) ds. The renewal sequence (T n ) n≥1 with delay T 0 is defined by and we denote by N (t) the associated counting process Note that N (t) has stationary increments and expectation E[N (t)] = t/µ. Moreover, because of (2), the tail behavior of the inter-renewal times is given bȳ see Asmussen [1], Chapter IX, Corollary 1.11. The on-off input process is the indicator process for the on-state defined by The source is in the on-state if I(t) = 1 and in the off-state if I(t) = 0. The input process I(t) is strictly stationary with mean The associated cumulative workload defined by is a stationary increment process with mean E[W t ] = tµ on /µ. Let (I j , W j , N j ) j≥1 denote i.i.d. copies of the input process I, the accumulative workload process W , and the renewal counting process N for the stationary on-off model. For m ≥ 1, consider a server fed by m independent on-off sources. We define the cumulative workload of the m-server system as the superposition process and the renewal-cycle counting process for m aggregated traffic sources by In this paper, we are mainly concerned with the asymptotic properties of the cumulative workload when the number of sources, m, increases and time t is rescaled by a factor a > 0. Thus, we consider the centered and rescaled process where the renormalization b(a, m) will be precised in the sequel. The asymptotic is considered when both m → ∞ and a → ∞. The relative growth of m and a have a major impact on the limit. Let a = a m be the sequence governing the scaling of time and suppose a m → ∞ as m → ∞ (we will often omit the subscript m). Following the notation in [7], we consider the following three scaling regimes: • fast connection rate mL on (a)/a αon−1 → ∞; (FCR) • slow connection rate mL on (a)/a αon−1 → 0; (SCR) • intermediate connection rate In [4], the asymptotic behavior of the cumulative total workload is investigated under conditions (FCR) and (SCR). 
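For readability, the tail assumptions, the stationary initial delays, and the three scaling regimes described above can be written in display form. The equilibrium distribution functions are written in the same stationary-excess form as the F_eq given for Z_0; the inequality between the tail indices reflects the configuration discussed in the text (the symmetric case is covered by relabeling), and the limit constant in the (ICR) condition is written generically as c ∈ (0, ∞), so the exact normalization used in the paper may differ by a constant factor.

```latex
% Regularly varying tails with finite mean and infinite variance:
\[
  \bar F_{\mathrm{on}}(x) \sim L_{\mathrm{on}}(x)\, x^{-\alpha_{\mathrm{on}}},
  \qquad
  \bar F_{\mathrm{off}}(x) \sim L_{\mathrm{off}}(x)\, x^{-\alpha_{\mathrm{off}}},
  \qquad
  1 < \alpha_{\mathrm{on}} < \alpha_{\mathrm{off}} < 2 .
\]
% Stationary (equilibrium) initial on/off delays and the initial state:
\[
  F^{\mathrm{eq}}_{\mathrm{on}}(x) = \frac{1}{\mu_{\mathrm{on}}}\int_0^x \bar F_{\mathrm{on}}(s)\,ds,
  \qquad
  F^{\mathrm{eq}}_{\mathrm{off}}(x) = \frac{1}{\mu_{\mathrm{off}}}\int_0^x \bar F_{\mathrm{off}}(s)\,ds,
  \qquad
  \mathbb{P}(B=1) = \frac{\mu_{\mathrm{on}}}{\mu}.
\]
% Three scaling regimes for the number of sources m versus the time scale a:
\[
  \frac{m L_{\mathrm{on}}(a)}{a^{\alpha_{\mathrm{on}}-1}} \longrightarrow
  \begin{cases}
    \infty & \text{(FCR, fast connection rate)},\\[2pt]
    0 & \text{(SCR, slow connection rate)},\\[2pt]
    c \in (0,\infty) & \text{(ICR, intermediate connection rate)}.
  \end{cases}
\]
```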
Theorem 2.2 (Gaigalas-Kaj) Under condition (ICR) and with the normalization b(a, m) = a, the following convergence of processes holds: where σ αon is given in (1) and P H (t) is the standard fractional Poisson motion with Hurst index H = (3 − α on )/2. Intermediate limit for the on-off model In this section, we investigate the intermediate scaling limit for the on-off model. The following is our main result. with σ αon in (1) and P H (t) the standard fractional Poisson motion in (5). where P 1 H , P 2 H , . . . are i.i.d. copies of P H . Consider also the sequence c ′ m = m −1/(αon−1) . For any m, Hence, by tracing the limit process in Theorem 3.1 as c m → ∞, we recover in distribution the succession of all aggregates 1≤i≤m P i H , m ≥ 1. Also, by letting c ′ m → 0 we find that the limit process represents successively smaller fractions which sum up to recover fractional Poisson motion. These relations explain the fact that fractional Poisson motion acts as a bridge between the stable Levy process and fractional Brownian motion. First, {c H P H (t/c)} converges weakly 1≤i≤m P i H (t) and the central limit Theorem yields the Gaussian limit as m → ∞. The required tightness property is shown in [6]. Moreover, it is shown in [6] that c 1/αon P H (t/c) converges in distribution as c → 0 to the α on -stable Levy process. To see that the limit must be α on -stable, take d = c · c ′ m for any c > 0. Then and, assuming that the rescaled process (c 1/αon P H (t/c)) t≥0 converge to some non-trivial limit process L, we must have as c → 0 (and hence d → 0) This indicates that the limit L must be α on -stable. Heuristics of the proof of Theorem 3.1 To motivate that the limit process in the intermediate connection rate limit appears naturally, we discuss a decomposition of the centered on-off process based on its representation as a renewal-reward model. We first note that the single source cumulative workload has the form Similarly, focusing on off-periods rather than on-periods, we have The centered single source workload is therefore Thus, for the workload of m sources, using obvious notations. The balancing of terms under the scaling relation (ICR), makes it plausible that both terms R j (at) vanish in the scaling limit. This suggests asymptotically, and so Theorem 2.2 would imply Theorem 3.1. In the next final section, we will compare rigorously the two processes in (7). Proof of Theorem 3.1 The proof of Theorem 3.1 relies on the following three lemmas: Proof of Lemma 4.1. We construct an alternative representation of the random variable in the left hand side of (8). Define For m ≥ 1, let N m (at) = m j=1 N j (at). The random variables N j (at), j ≥ 1 are i.i.d and for have the same distribution (note that the uni-dimensional marginal distributions are equal but not the multidimensional distributions). This representation will enable us to prove that under assumption (ICR), in the space of cád-lág functions on R + endowed with the Skorokhod topology, we have the convergence Moreover, Equations (9) and (10) together imply and this proves the lemma. Thus, it remains to prove (9) and (10). To this aim, recall that the random variables Y 1 i , i ≥ 1, are i.i.d. with distribution such that the tail functionF off is regularly varying with index −α off . Hence there exists a regularly varying function L such that the centered and rescaled sum converges in the space of cád-lág functions to some α off -stable Lévy process (see [12], the exact form of L or of the limit process are not needed here). 
This implies the convergence property (9), since a >> (am) 1/α off L(am) under the scaling assumption (ICR) with α on < α off . We now prove equation (10). The stationary renewal process N (t) has mean t/µ and variance given asymptotically by [7], Equation (30), and references therein. Hence, 1 am N m (at) has mean t/µ and variance under scaling (ICR), such that This shows that 1 am N m (at) converges in distribution to t/µ, which is (10). This ends the proof of Lemma 4.1 By stationarity, the random variables Y j 0 and (T j N j t − t) ∧ Y j N j (t) have the same distribution; they represent the remaining time after 0 and t, respectively, of the first off-period after 0, and after t. Since both sums have the same distribution, we only consider the first one. Using Karamata's Theorem (see [3]), the tail functionF eq off satisfies as x → ∞. This implies that the random variable Y 0 has a regularly varying tail with index −(α off − 1) and hence belongs to the domain of attraction of an (α off − 1)-stable distribution. Therefore there exists a slowly varying function L, such that Proof of Lemma 4.3. The proof given in [4] for fast scaling (FCR) can be adapted to our settings. We recall only the main lines. According to Billingsley [2], Theorem 12.3, it is enough to prove that for any t 1 , t 2 with |t 1 − t 2 | ≤ 1 and for some ε > 0, there exists a constant C > 0 and an a 0 > 0, such that for all a ≥ a 0 E 1 a |(W m (at 2 ) − mat 2 µ on /µ) − (W m (at 1 ) − mat 1 µ on /µ)| 2 ≤ C|t 2 − t 1 | 1+ε . Using the definition of W m , centering and stationarity, it is enough to prove that for all t ∈ [0, 1] and a ≥ a 0 , m a 2 Var[W at ] ≤ Ct 1+ε (11) (the constant C may change from one appearance to another). However, according to [4], Equation (7.1), This relation and the scaling (ICR) together imply, as a → ∞, By (12), the function a → Var[W a ] is regularly varying with index 3−α on . Then, using Potter bounds (see [3]), we conclude that there exist a 0 > 0 and ε < 1 − α on /2, such that for all t ∈ (0, 1) and a ≥ a 0 /t, (see the proof of Lemma 13 in [4] for details). This implies that for all t ∈ (0, 1) and all a such that at ≥ a 0 , m a 2 Var[W at ] ≤ C 1 − ε t 3−αon−ε ≤ Ct 1+ε . On the other hand, if t ≤ a 0 /a, then, for a large enough, In the last inequality, we use the fact that 2 − α on − ε > 0 and so a 2−αon−ε L on (a) → ∞ as a → ∞; taking a 0 large enough, we can suppose that for a ≥ a 0 , a 2−αon−ε L on (a) remains bounded away from zero. By combining the estimates for the cases at ≥ a 0 and at ≤ a 0 we obtain (11), which completes the proof.
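As a purely numerical illustration of the aggregated workload studied in this paper, the following sketch simulates m independent on-off sources with Pareto on- and off-periods and evaluates the centered, rescaled workload (W_m(at) − m·at·µ_on/µ)/a at a few time points under an intermediate-type coupling a ≈ m^(1/(α_on−1)). Each source is started in the on-state, so the stationary initial delay of the model is omitted, and the parameter values are illustrative; this is a sketch of the scaling setup, not a verification of Theorem 3.1.

```python
import numpy as np

rng = np.random.default_rng(1)

def pareto(alpha):
    """Pareto(alpha) duration with scale 1: P(X > x) = x^{-alpha}, x >= 1."""
    return rng.uniform() ** (-1.0 / alpha)

def cumulative_on_time(T, alpha_on, alpha_off):
    """Cumulative on-time of one on-off source at integer times 0,...,T.
    The source starts an on-period at t = 0 (stationary delay omitted)."""
    w = np.zeros(T + 1)
    t, acc, on, idx = 0.0, 0.0, True, 0
    while idx <= T:
        dur = pareto(alpha_on if on else alpha_off)
        end = t + dur
        while idx <= T and idx <= end:
            w[idx] = acc + (idx - t) * (1.0 if on else 0.0)
            idx += 1
        acc += dur * (1.0 if on else 0.0)
        t, on = end, not on
    return w

alpha_on, alpha_off = 1.5, 1.8
mu_on = alpha_on / (alpha_on - 1.0)       # mean on-period of Pareto(alpha_on), scale 1
mu_off = alpha_off / (alpha_off - 1.0)
mu = mu_on + mu_off

m = 50
a = int(m ** (1.0 / (alpha_on - 1.0)))    # intermediate-type coupling a ~ m^{1/(alpha_on-1)}
t_points = np.array([0.25, 0.5, 0.75, 1.0])
T = int(a * t_points.max())

W = np.zeros(T + 1)
for _ in range(m):
    W += cumulative_on_time(T, alpha_on, alpha_off)

at = (a * t_points).astype(int)
centered = (W[at] - m * at * mu_on / mu) / a
for t, x in zip(t_points, centered):
    print(f"t = {t:.2f}: (W_m(at) - m*a*t*mu_on/mu)/a = {x:+.3f}")
```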
Automated Classification of Seizures against Nonseizures: A Deep Learning Approach In current clinical practice, electroencephalograms (EEG) are reviewed and analyzed by well-trained neurologists to provide supports for therapeutic decisions. The way of manual reviewing is labor-intensive and error prone. Automatic and accurate seizure/nonseizure classification methods are needed. One major problem is that the EEG signals for seizure state and nonseizure state exhibit considerable variations. In order to capture essential seizure features, this paper integrates an emerging deep learning model, the independently recurrent neural network (IndRNN), with a dense structure and an attention mechanism to exploit temporal and spatial discriminating features and overcome seizure variabilities. The dense structure is to ensure maximum information flow between layers. The attention mechanism is to capture spatial features. Evaluations are performed in cross-validation experiments over the noisy CHB-MIT data set. The obtained average sensitivity, specificity and precision of 88.80%, 88.60% and 88.69% are better than using the current state-of-the-art methods. In addition, we explore how the segment length affects the classification performance. Thirteen different segment lengths are assessed, showing that the classification performance varies over the segment lengths, and the maximal fluctuating margin is more than 4%. Thus, the segment length is an important factor influencing the classification performance. I. INTRODUCTION E PILEPSY is a chronic neurological disorder, in which brain activity becomes abnormal, leading to sensations and sometimes impaired consciousness. It is characterized by recurrent, unprovoked seizures. A seizure is a transient symptom of synchronous neuronal activity in the brain [1]. In the world, more than 50 million people suffer from epilepsy [2]. The patients with epilepsy are subjected to lifestyle limitations, such as acquiring and using a driving licence, and the social stigmatisation that often accompanies epilepsy [3]. In current clinical practices, electroencephalography (EEG), which records the electrical activities generated by neurons in the brain, is commonly used for the epilepsy diagnosis. Long-term EEG records are reviewed and analyzed by welltrained neurologists in order to identify the occurrence of seizures and localize their zones in the brain. The seizure information is critical for physicians making therapeutical decisions, especially before any surgical operation in the cortex. However, this manual way of reviewing and analyzing is labor-intensive and time-consuming, because it usually takes several hours for a well-trained neurologist to analyze one-day recordings from one patient [4]- [9]. This limitation motivates research of automatic seizure detection. In this paper, we will focus on developing an automatic approach to classify seizure segments and nonseizure segments from off-line EEG data records. The automatically classified seizure segments are provided to neurologists to make further analyses. One major problem in the seizure/nonseizure classification is the significant variations of EEG signals for seizure states and nonseizure states across individuals. Also, it is a problem that the signal properties of seizures may resemble the characteristics of normal EEG signals [10]. Furthermore, EEG signals usually contain physiologic artifacts from involuntary body or organ movements and non-physiologic noise [10]. 
Machine learning techniques and signal processing technologies have been applied to address the problems. Patientspecific detectors are developed to detect seizure onsets [7], [8], [11]- [14]. These studies convert the problem of seizure detection into the seizure/nonseizure classification problem but more of a real-time flavor. Using traditional machine learning methods, hand-crafted features are usually needed to capture characteristics of seizure manifestations in EEG. Recently, epilepsy researchers have focused on developing seizure detection approaches based on deep learning techniques [5], [10], [14]- [17]. Most of these deep learning-based approaches are developed based on classical neural network models, like convolutional neural network (CNN), recurrent neural network (RNN), long short-term memory (LSTM) and gated recurrent unit (GRU). Their architectures are shallow. And they process data on different channels in the same way. A shallow neural network usually has limited capability of extracting seizure features. Different brain regions have different contributions to seizures. EEG data from different brain regions need be differentiated. Thus, we center on developing a deep neural network, which can directly process raw EEG data, differentiate data on different channels, and automatically classify seizures and nonseizures. EEG data are one-dimensional, dynamic and non-linear [18]. RNN is better at processing one dimensional sequence data than CNN. However, RNN usually suffers from the gradient vanishing or exploding problem, and its two variants, i.e., LSTM and GRU, do not effectively support stacking multiple layers because of the gradient decay over layers [19]. An emerging variant of RNN, independently recurrent neural network (IndRNN), addresses the above limitations. By taking the Hadamard product (i.e., element-wise product) over the recurrent inputs [19], it overcomes the gradient vanishing or arXiv:1906.02745v1 [eess.SP] 5 Jun 2019 exploding problems, and supports computations over multiple layers efficiently. Multiple layers help to address considerable variations of seizure morphologies. IndRNN is also able to process longer sequence data than LSTM. On the other hand, as information passes through many layers in a deep neural network architecture, it may possibly decay. A densely connected structure, in brief, dense structure, in which one layer connects with all preceding layers, can help preserve maximum information flow between layers [20]. Additionally, EEG signals from different brain regions have different strengths of signifying seizures. Based on the above observations and analyses, we integrate IndRNN with the dense structure and an attention mechanism to design a deep learning approach with multiple layers for the seizure/nonseizure classification. Firstly, an attention layer is designed to differentiate data from different brain regions. It adaptively generates weights on channels and outputs weighted data. A group of IndRNN layers and batch normalization layers are organized according to the dense structure, and they constitute a dense block. Batch normalization layers are used to help reduce the over-fitting risk in deep neural network. The dense block extracts temporal spatial features from the weighted data. Predominant features at a specific temporal scale are extracted from the outputs of a dense block by deploying a max-pooling layer. 
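As a concrete illustration of the element-wise (Hadamard) recurrence that distinguishes IndRNN from a standard RNN, the following minimal NumPy sketch runs one IndRNN layer forward over a segment. It is not the authors' implementation; the ReLU activation and the 17-channel, 23 s at 256 Hz input shape with 80 hidden units merely mirror numbers mentioned later in the paper.

```python
import numpy as np

def indrnn_layer_forward(x, W, u, b, activation=lambda z: np.maximum(z, 0.0)):
    """Forward pass of a single IndRNN layer.

    x : (T, n_in)     input sequence (one sample)
    W : (n_in, n_hid) input weights
    u : (n_hid,)      per-unit recurrent weights (Hadamard recurrence, no full recurrent matrix)
    b : (n_hid,)      bias
    Returns the hidden-state sequence of shape (T, n_hid).
    """
    T, _ = x.shape
    n_hid = u.shape[0]
    h = np.zeros(n_hid)
    out = np.empty((T, n_hid))
    for t in range(T):
        # Each hidden unit sees only its own previous state (u * h, element-wise),
        # which is what allows IndRNN layers to be stacked deeply.
        h = activation(x[t] @ W + u * h + b)
        out[t] = h
    return out

# toy usage: a 23 s segment at 256 Hz on 17 channels gives x of shape (5888, 17)
rng = np.random.default_rng(1)
x = rng.standard_normal((5888, 17))
W = rng.standard_normal((17, 80)) * 0.05
u = rng.uniform(0.0, 1.0, 80)
b = np.zeros(80)
h_seq = indrnn_layer_forward(x, W, u, b)
print(h_seq.shape)  # (5888, 80)
```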
After several runs of feature computations in dense blocks and max-pooling layers, the overall features within the whole temporal duration are calculated by an average pooling layer. Lastly, two fully connected (FC) layers are deployed for further integrating features and for final classification. We perform cross-validation (CV) experiments over the noisy EEG data set of CHB-MIT to evaluate our proposed approach. In our experiments, we obtain the average sensitivity, specificity and precision of 88.80%, 88.60% and 88.69%, respectively, and the corresponding standard deviations of 0.0252, 0.0250, and 0.0215, respectively. The results exceed the performance of current state-of-art approaches [17] [21]. Besides, we explore how the segment length affects the performance of seizure/nonseizure classification. Our experimental results with different segment lengths show that the performance of seizure classification fluctuates over the segment lengths, and the maximal fluctuating margin is more than 4%. The main contributions of our paper include the following: i) An emerging deep learning model, IndRNN, is applied to seizure/nonseizure classification for the first time, and temporal spatial features are extracted with a deep architecture; ii) Dense structure and attention mechanism are integrated with IndRNN for a deep neural network, and they help improve the capability of discriminating seizures from nonseizures; iii) The relationship between the segment length and the performance of seizure/nonseizure classification is investigated. The rest of this paper is organized as follows. Section II describes related research about automatic seizure/nonseizure classification. Section III illustrates our proposed approach of dense IndRNN with attention. The proposed approach is evaluated in CV experiments in Section IV. Section V explains the attention mechanism and validates main modules in our designed approach. In Section VI, how the segment lengths affect the performance of classifying seizure/nonseizure is explored. Section VII discussed the approach of dense IndRNN with attention. And conclusions and future work are described in Section VIII. II. RELATED WORK Seizure/nonseizure classification distinguishes seizure segments from nonseizure segments, which can be used to detect whether a data segment contains a seizure or not. For this task, extensive studies have been performed. Because seizure detection, which is often of a real-time flavor, is often treated as the seizure/nonseizure classification problem, many machine learning methods have been developed [8], [11]- [14], [22]- [25]. Recently, deep learning techniques have been applied to the seizure detection problem [5], [10], [15]- [17], [21], [26]. Esbroeck et al. developed a multi-task learning framework to detect patient-specific seizure onset [13]. In the framework, distinguishing the windows of each seizure from nonseizure data was treated as a separate task, and discriminating individual-seizures as another task. Evaluation results over the CHB-MIT data set indicated that the framework performed better in most cases compared to the standard SVM. Kiranyaz et al. presented a systematic approach for patientspecific classification of long-term EEG [10]. The approach can be divided into three main steps. The first step is to process data through band-pass filtering, feature extraction, epileptic seizures aggregation and morphologic filtering. 
The second step is to classify signal from each channel of the processed data by using a collective network. The third step is to integrate the initial classification results over each channel and make final classification decision for each EEG data segment. Over the data set of CHB-MIT, the obtained average sensitivity and specificity were 89.01% and 94.71%, respectively. In the approach, the many classifiers increased the computational complexity. Based on signal processing techniques, Zandi et al. decomposed EEGs from each channel by wavelet packet transform, and separated the seizure and nonseizure states by developing a patient-specific measure [7]. Using the measure, a combined seizure index was derived for each epoch of every EEG channel. The combined seizure index was inspected and it triggered alarms. Acharya et al. proposed a method to automatically detect the normal, pre-ictal, and ictal conditions from EEG signals [27]. The method firstly extracted four entropy features, including approximate entropy, sample entropy and two phase entropies, then fed the four features to classifier to do classification. Zhou et al. designed a seizure detection algorithm based on lacunarity and Bayesian linear discriminant analysis (BLDA) [28]. The critical step in the algorithm was feature extraction. Firstly, EEGs were performed wavelet decomposition with five scales, and the wavelet coefficients at scales 3, 4, and 5 were selected. At the three scales, features including lacunarity and fluctuation index were extracted. Then they were passed on to the BLDA for training and classification. Over intracranial EEG data from the Epilepsy Center of the University Hospital of Freiburg, the obtained average sensitivity was 96.25%, with an average false detection rate of 0.13 per hour and a mean delay time of 13.8s. The obtained precision results for eleven patients were less than 50%. Fan and Chou leveraged a complex network model to represent EEG signals, and integrated it with spectral graph theory to extract spectral graph theoretic features for detecting seizure onsets in real-time [29]. The method was evaluated over the CHB-MIT data set. The resulting patient-specific average sensitivity was 98%, and the latency was 6s. The methods developed in [7] [10] [13] [27] [28] [29] are mostly based on signal processing methods and traditional machine learning methods. They often need crafted features, which may not be optimal. Shoeb and Guttag leveraged the support vector machine (SVM) to develop a patient-specific detection method [8]. Filters were applied to extract spectral features over channels. The feature vectors were concatenated according to a fixed time length and then taken as inputs to train the SVM model. The method achieved a sensitivity of 96%, a median detection delay of 3 seconds and a median false detection rate of 2 per 24 hour. The results are often used as a benchmark for patient-specific seizure detection on the data set CHB-MIT. The authors observed that the identity of channels could help differentiate between the seizure and the nonseizure activity. Amin and Kamboh designed an algorithm RUSBoost to process imbalanced seizure/nonseizure data, and combined RUSBoost with the decision tree classifier to do patientspecific seizure detection [11]. The algorithm extracts spectral, spatial and temporal features from each channel. Over the CHB-MIT data set, the obtained average accuracy, sensitivity and false detection rate were 97%, 88%, and 0.08 per hour, respectively. 
Using the method, the training was fast. Hunyadi et al. presented a nuclear norm regularization to convey multichannel information of ictal patterns [12]. The regularization integrating with extracted features helped reach a median sensitivity of 100%, false detection rate of 0.11 per hour and alarm delay of 7.8s over the CHB-MIT data set. The proposed method processed spatial information in the same way. In [22], Fergus et al. developed a seizure detection method based on selected features from special brain regions. Over the data set of CHB-MIT, an average sensitivity of 88% and specificity of 88% were achieved. Truong et al. designed an automatic seizure detection method, in which one important step is to select channels that contribute the most to seizures [30]. Features, such as spectral power and correlations between channel pairs, were extracted. The classifier of Random Forest was used for classification. Over an intracranial electroencephalography (iEEG) data set, the method reached the state-of-the-art computational efficiency while maintaining the accuracy. In the method, the step of selecting channels was to reduce the number of channels, thereby improving the computational efficiency. The approaches developed in [ Thodoroff et al. presented a recurrent convolutional neural network to capture spectral, spatial and temporal features of seizures [5]. EEG signals were firstly transformed into images. Created images were fed to CNN. Output vectors of the CNN were organized to be sequences in the chronological order. The sequences were passed on to the bidirectional RNN to make classification. Both patient-specific experiments and cross-patient experiments were performed. In the cross-patient testing, the obtained average sensitivity was 85% with the false positive rate of 0.8 per hour. Transfer learning technique was utilized to overcome the problem of small amount of data in the patient-specific experiments. Vidyaratne et al. proposed a deep recurrent architecture by combining cellular neural network with bidirectional RNN [14]. The bidirectional RNN was deployed into each cell of the cellular neural network to extract temporal features in the forward and the backward directions. Each cell interacts with its neighboring cells to extract local spatial-temporal features. The method was evaluated in patient-specific experiments over five patients from CHB-MIT. The obtained sensitivities are 100%. The experiments were limited, because that only five patients were tested. Golmohammadi et al. explored two kinds of neural networks over TUH EEG Corpus [16]. Their experimental results showed that convolutional LSTM network outperformed convolutional GRU network. Different initialization and regularization methods were also tested. Hussein et al. designed a deep neural network for seizure/nonseizure classification by using LSTM [21]. The neural network extracted temporal features by using LSTM. Acharya et al. presented a 13-layers deep neural network for seizure/nonseizure classification by using CNN [17]. The two approaches were evaluated over the same EEG data set provided by University of Bonn [18]. The LSTM approach achieved performances of 100%. The CNN approach obtained a sensitivity of 95% and specificity of 90%. Each record in the Bonn EEG data set contains only one channel and has no artifacts. And thus the Bonn data set is regarded as an easy data set. Ansari et al. aimed to automatically optimize feature selection for seizure detection [26]. 
They utilized deep CNN to extract optimal features, and then fed the features to random forest to do classification. In evaluation experiments, EEG recordings of 26 and 22 neonates were taken as training data and testing data, respectively. A false alarm rate of 0.9 per hour and a sensitivity of 77% were achieved. The proposed method needed no predefined features, and surpassed three classic feature-based approaches. The deep learning approaches developed in [ [26] are based on classic neural network models, including CNN, RNN, LSTM, and GRU. The four models have limitations for processing EEG data. CNN is good at processing two or more dimensional data but not suited to one dimensional sequence data. RNN suffers from the gradient vanishing or exploding problem. Even though LSTM and GRU improved on RNN, the two variants still have the gradient decay problem when multiple layers are deployed. We aim to build a deep neural network architecture to overcome the variability of seizure morphologies well. A. Model Design EEG signals are dynamic one-dimensional data. The data at one time point in an EEG signal correlate with the past data. RNN and its variants are good at processing such kind of one-dimensional data. Additionally, EEG signals for seizure and nonseizure states across individuals manifest considerable variations, and the signal properties of one patient's seizure may closely resemble the characteristics of a normal EEG signal from the same patient or other patients [10]. In deep learning methods for classification, deep architecture is generally helpful to extract discriminative features. A variant of RNN, IndRNN, supports stacking multiple layers [19]. And it can address the gradient vanishing and exploding problems, which restrict the effectiveness of RNN. The two variants, LSTM and GRU, suffer from the gradient decaying problem over layers. IndRNN can also process longer sequences than LSTM [19]. Therefore, we choose to use IndRNN as the main module to construct a deep architecture for classifying seizures against nonseizures. In a deep neural network, as information about inputs or extracted features passes through many layers, it may vanish by the time it reaches the end of the network [20]. A densely connected structure can maximize information flow between layers in a network [20]. We will leverage a densely connected structure to inter-connect IndRNN layers to preserve information. A brain activity state is jointly described by EEG signals from different brain regions. Different brain regions have different contributions to a seizure state. The EEG signals at different brain regions can be differentiated when identifying seizures and nonseizures. We will introduce an attention mechanism to generate weights for channels. The weight describes the importance of EEG signal on a channel in discriminating seizures against nonseizures. Based on the above observations, we will integrate IndRNN with a dense structure and attention mechanism to design a new approach for the seizure/nonseizure classification. The inputs are firstly passed on to an attention layer, in which attention weights are generated and multiplied with signal data. The outputs of the attention layer are weighted signals. The weighted signals are fed to the first dense block. A dense block consists of multiple IndRNN layers and is to extract temporalspatial features in signals. 
In a dense block, each IndRNN layer passes its outputs to all subsequent layers; the inputs of the first IndRNN layer are also fed to all the other layers; and the outputs of the last IndRNN layer is taken as the outputs of the dense block. Each dense block is followed by a max-pooling layer. The max-pooling layer extracts predominant features at a specific time scale, and its processed results are passed on to the next dense block. The outputs of the last max-pooling layer, which follows the last dense block, are fed to an average pooling layer. In the average pooling layer, the overall features over time steps are extracted. Two FC layers follow the average layer in order to extract further features and make predictions. B. Model Architecture The proposed integrated IndRNN with a dense structure and attention mechanism is called dense IndRNN with attention (ADIndRNN). It consists of an attention layer, dense blocks, max-pooling layers, an average pooling layer, and two FC layers. Each dense block further comprises of IndRNN layers and bach normalization layers. The architecture of the ADIndRNN is presented in Fig. 1. 1) Attention Layer: The attention layer adaptively generates weights for channels and executes element-wise multiplication between data segments and weights. It outputs weighted signals. Its work flow is given in Fig. 2, and its computation is described in (1)−(6). The attention weights are generated according to an attention mechanism. Each data segment executes one linear transformation based on a kernel matrix and a bias matrix. After the linear transformation, the sof tmax(·) function is applied to each time step separately. The activation function outputs weights on channels at each time step. In order to diminish differences of channel weights among time steps, the weights over time steps are averaged. And the averages are taken as weights on channels at these time steps. The attention mechanism is illustrated in (1)−(5). Here, X 0 denotes an input tensor of size (n sm , n sp , n ch ). Symbols n sm , n sp , n ch represent the numbers of samples, time steps, and signal channels, respectively. Y 1 is a matrix of size (n ss , n ch ), n ss = n sm * n sp , W al a weight matrix of size (n ch , n ch ), a bias matrix B al of size (n ss , n ch ), and Y 2 with size (n ss , n ch ). sof tmax(·) is a normalized exponential function. Y 3 is a matrix of size (n sm , n sp , n ch ), Y 4 of size (n sm , n ch ), Y 5 of size (n sm , n sp , n ch ), and Y al an output matrix of attention layer with shape (n sm , n sp , n ch ). Functions f re1 (·) and f re2 (·) are to reshape a matrix, f av (·) is a function of computing averages along with the second axis of matrix, and f cy (·) is an copying operation to share the averages over all the time steps. The symbol means an element-wise multiplication between matrices. 2) Dense Block: A dense block organizes its components in a densely connected fashion to ensure maximum information flow between layers. Each component consists of one IndRNN layer and one batch normalization (BN) layer, and the IndRNN layer is followed by the BN layer. Outputs of one component are passed on to all subsequent components. And the inputs of the first component are fed to other each component. The structure of a dense block is shown in Fig. 3. An IndRNN layer processes input sequences in forward order, and extracts time-dependent features. Its computation Here, X IR t is a matrix of size (n sm , n f e ), which represents an input of IndRNN layer at the time step t. 
n f e is the number of features. H t is a matrix of size (n sm , n hid ), and it means hidden outputs at the time step of t in IndRNN layer. n hid represents the number of hidden states. W hid is a input weight matrix with shape (n f e , n n hid ), U hid for a recurrent weight matrix of size (n sm , n hid ). B hid and B out are two bias matrices of size (n sm , n hid ). W out is an output weight matrix of size (n hid , n hid ). Y IR t is a matrix of size (n sm , n hid ), which means an output at the time step t in the IndRNN layer. σ hid and σ out are activation functions such as ReLU. For the first IndRNN layer in the first dense block, its input at the time step t is the output of the attention layer at the time step t, i.e., X IR t consists of all the elements at the entry of t along the axis of 1 in the matrix Y al . BN layer is inserted after each IndRNN layer. It is used to speed up training and reduce overfitting [31]. 3) Max-pooling Layer: Each dense block is followed by a max-pooling layer. The max-pooling layer extracts predom-inant features from output sequences of dense block at a specific temporal scale. 4) Average Pooling Layer: Average pooling layer is designed to extract overall features across time scales for the final classification. Its output is a two-dimensional matrix, in which each row corresponds to one segment sample and elements in columns are features. It is inserted after the last max-pooling layer. 5) FC Layers: Two FC layers are deployed behind the average pooling layer. The first FC layer aims to integrate features from the outputs of the average pooling layer and make further extractions. The second is to perform final classification of seizure/nonseizure. IV. EVALUATION In order to evaluate our proposed approach of ADIndRNN, we conduct CV experiments over the noisy EEG data set of CHB-MIT, and measure its performance in five metrics. The five metrics are sensitivity, specificity, F1 score, precision and accuracy, respectively. The proposed approach is compared with two current state-of-the-art approaches, including one LSTM-based approach [21] and one CNN-based approach [17]. The CV is that, data segments from all the patients are put into a pool, then the segments in the pool are randomly split into three disjoint sets according to a ratio, including training dada set, validation data set and testing data set. The training data are to train a model, the validation data are to tune parameters in the model based on validation performance, and the testing data are to test the generalization capability of trained model. To reduce the variability of testing results, ten rounds of CVs are performed for each approach, then averages and standard deviations for the ten CV results are calculated as the performance of seizure/nonseizure classification approach. A. Data Set The data set of CHB-MIT [9] contains 686 EEG recordings from 23 subjects of different ages ranging from 1.5 years to 22 years. The recordings include 198 seizures. The sampling frequency is 256 Hz. Most recordings are one hour long, and others are two-hour long or four-hour long. The EEG recordings are grouped into 24 cases. In each case, the data recordings are from a single subject. Case Chb21 was obtained 1.5 years after Case Chb01 from the same subject. Each data file contains data on 23 or more channels. There exist channels on which data are missing. Thus, we only consider those channels containing all the original data. 
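Since the displayed equations (1)–(6) are not reproduced in this extraction, the channel-attention computation described in Section III can be sketched from its prose description: a linear transform of each time step, a softmax over channels, an average of the weights over time steps, and an element-wise re-weighting of the input. The NumPy code below is a reconstruction under those descriptions, with randomly initialized (untrained) kernel and bias matrices.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def attention_layer(X, W_al, B_al):
    """Channel-attention layer as described in the text.

    X    : (n_sm, n_sp, n_ch) batch of segments (samples, time steps, channels)
    W_al : (n_ch, n_ch)       kernel matrix (learned during training; random here)
    B_al : (n_sm*n_sp, n_ch)  bias matrix
    Returns (weighted segments, per-sample channel weights).
    """
    n_sm, n_sp, n_ch = X.shape
    Y1 = X.reshape(n_sm * n_sp, n_ch)              # flatten time steps into rows
    Y2 = softmax(Y1 @ W_al + B_al, axis=1)         # per-time-step weights over channels
    Y3 = Y2.reshape(n_sm, n_sp, n_ch)
    Y4 = Y3.mean(axis=1)                           # average the weights over time steps
    Y5 = np.repeat(Y4[:, None, :], n_sp, axis=1)   # share the averaged weights across time
    return X * Y5, Y4                              # element-wise weighting of the signals

# toy usage with 2 segments, 5888 time steps (23 s at 256 Hz), 17 channels
rng = np.random.default_rng(2)
X = rng.standard_normal((2, 5888, 17))
W_al = rng.normal(0.0, 0.1, (17, 17))
B_al = np.zeros((2 * 5888, 17))
Xw, weights = attention_layer(X, W_al, B_al)
print(Xw.shape, weights.shape)   # (2, 5888, 17) (2, 17)
```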
Three data files, including Chb12 27.edf, Chb12 28.edf, and Chb12 29.edf, have different channel montages from other data files. In our experiments, we remove these three data files. B. Data Segmentation In order to extract effective seizure features, 17 common channels are chosen. According to a segment length of 23 seconds, each data record in each case is split into data segments from the beginning to the end. Data segments except for the last one of a data record have no overlap. If the duration of a data record is not divided by the segment length and there is a seizure happening in the remaining part, we will ensure that the last segment will have the same length but will overlap with its prior segment. If the remaindering part contains no seizure, then it is dropped. Using annotation files for this data set, we determine whether a data segment contains a seizure or not. In our experiments, if a segment contains any seizure data, it is considered as a seizure segment; otherwise, it is a nonseizure segment. Using the above segmentation method, 665 seizure segments are obtained. In these seizure segments, the lengths of seizures vary from 1s to 23s, with the average length being 16.9s. Among all seizure segments, segments containing seizure data of less than 7s comprise 14.7%, those containing more than 17s comprise 59.8%, and those containing more than 10s comprise 76.1%. All the seizure segments are taken as a part of our experiment data. We randomly choose 665 nonseizure segments in each experiment. The 1330 seizure/nonseizure segments are randomly split into training set, validation set and testing set with a ratio of 70:15:15 in each experiment, and we adopt the repeated random sub-sampling validation as a strategy for CV. C. CV Results for the Proposed Approach Based on the architecture in Fig. 1, we build a model by stacking three dense blocks, each dense block consisting of three IndRNN layers and three BN layers. The constructed model is denoted by ADIndRNN-(3,3). In our CV experiments using the model ADIndRNN-(3,3), the total loss contains two parts. One part is losses produced by predicted labels of the model and ground truth labels. The other part is L2 losses of all the trainable variables. For the second loss part, we set a coefficient of weight decay to take its specific percentage into the total loss. The main parameters in the model are set as follows: A kernel initializer in the attention layer is a truncated normal initializer with mean value of 0 and standard deviation of 0.1, a bias initializer in the attention layer initializes tensor to 0; the state size of each IndRNN layer in the first dense block is 80, that in the second dense block is 120, that in the third dense block is 160; each max-pooling layer has a window size of 2 and stride of 2; weight initializers in the two FC layers are Xavier initializers, two bias initializers in the two FC layers are constant initializers with value of 0.001, the number of output units in the two FC layers are 100 and 2, respectively; the weight decay for the trainable variables loss is 0.01; the optimizer is Adam, the learning rate is 0.0004; the batch size for training is 30, the epochs is 60. Overall, ten rounds of CVs are performed. The obtained results of these ten experiments over 23s segments using ADIndRNN-(3,3) are given in Table I D. Comparison with the LSTM and CNN Approaches LSTM as a main module has been used to detect seizures [21]. 
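Before turning to the baseline comparison, the segmentation and labeling rule of Section IV-B can be summarized in code. The sketch below assumes seizure annotations are available as a list of (start, end) times in seconds; the function names and this annotation format are illustrative, not taken from the authors' pipeline.

```python
import numpy as np

FS = 256          # sampling rate (Hz) of the CHB-MIT recordings
SEG_LEN_S = 23    # segment length in seconds

def overlaps_seizure(start_s, end_s, seizure_intervals):
    """True if [start_s, end_s) contains any annotated seizure data."""
    return any(s < end_s and e > start_s for s, e in seizure_intervals)

def segment_record(data, seizure_intervals, seg_len_s=SEG_LEN_S, fs=FS):
    """Split one record (n_channels, n_samples) into labeled segments.

    Non-overlapping segments are cut from the start of the record; if the
    leftover tail contains seizure data, one extra full-length segment aligned
    to the record end (overlapping its predecessor) is kept, otherwise the
    tail is dropped -- the rule described in the text above.
    """
    seg_len = seg_len_s * fs
    n_samples = data.shape[1]
    segments, labels = [], []
    for start in range(0, n_samples - seg_len + 1, seg_len):
        segments.append(data[:, start:start + seg_len])
        labels.append(int(overlaps_seizure(start / fs, (start + seg_len) / fs,
                                           seizure_intervals)))
    remainder_start = (n_samples // seg_len) * seg_len
    if remainder_start < n_samples and overlaps_seizure(remainder_start / fs,
                                                        n_samples / fs,
                                                        seizure_intervals):
        segments.append(data[:, n_samples - seg_len:])
        labels.append(1)
    return np.stack(segments), np.array(labels)

# toy usage: a 10-minute record on 17 channels with one annotated seizure
rng = np.random.default_rng(3)
record = rng.standard_normal((17, 10 * 60 * FS))
segs, labs = segment_record(record, seizure_intervals=[(300.0, 345.0)])
print(segs.shape, labs.sum(), "seizure segments")
```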
The LSTM approach is evaluated through CV experiments over an EEG data set from University of Bonn [18], showing state-of-the-art performance. A CNN-based approach has been proposed for seizure/nonseizure classification [17], which also demonstrates state-of-the-art performance over the Bonn data set. Because the Bonn data set is heavily processed and contains no artifacts, and its size is small, we compare the proposed approach with the LSTM approach and the CNN approach over the noisy EEG data set of CHB-MIT. For the LSTM approach and the CNN approach, we implement them according to the descriptions in the related literature. And the implementations are tested. Our obtained testing results reach the reported performances. Using the two implementations, CV experiments are performed for the LSTM approach and the CNN approach separately. The LSTM approach consists of one LSTM layer, one time-distributed computing layer, one average pooling layer and one FC layer. In the experiments using the LSTM approach, our parameter setting is as follows: The number of hidden states is 120 in the LSTM layer, that in the time-distributed computing layer is 60, the optimizer is RMSprop, the learning rate is 0.0007, the batch size is 30, and the number of epochs is 30. For the CNN approach, it contains five convolutional layers, five max pooling layers, and three FC layers. The parameters are set as follows: the number of hidden states in the first two convolutional layers is 100, that in each of the second two convolutional layers is 200, that in the fifth convolutional layer is 260, that in the first FC layer is 100, that in the second FC layer is 50, the parameter alpha is 0.01 in the LeakyReLU activation function, the optimizer is Adam, the learning rate is 0.001, the batch size is 30, and the number of epochs is 50. Using the LSTM approach, ten rounds of CVs over 23s segments from CHB-MIT are performed, and their obtained results are shown in Table II. The obtained average sensitivity is 84.40%, the average specificity is 84.30%, and the average precision is 84.70%. For the CNN approach, ten rounds of CV experiments over 23s segments are also conducted, and the results are given in Table III. The achieved average sensitivity, the average specificity and the average precision are 84.80%, 81.00%, and 82.56%, respectively. Comparing CV results in Tables I-III, we can conclude that the average performance, in either one of the metrics of sensitivity, specificity, F1 score, precision, and accuracy, using the proposed approach is at least 4% greater than that using the LSTM approach or the CNN approach, and the obtained standard deviation over each metric using our approach is smaller. The above comparisons show that the proposed approach outperforms the LSTM approach and the CNN approach in seizure/nonseizure classification. V. MODEL ANALYSES Our designed architecture in Fig. 1 contains the attention layer, the dense structure, and the IndRNN layer. In this A. Interpretations of Attention Mechanism Our attention mechanism differentiates information on different channels, and assigns different weights to them. The attention mechanism contains a kernel matrix and a bias matrix, which are trained together with the whole model of ADIndRNN. A data segment is combined with the two trained matrices to do multiplying and adding operations. Then, the sof tmax(·) function and an averaging operation are applied to calculate weights on channels. 
The obtained attention weights depend on the trained kernel matrix, the trained bias matrix, and data segment. Based on a well-trained model ADIndRNN- (3,3), Fig. 4 shows attention weights over channels for a 23s seizure segment from Chb04, and Fig. 5 presents weights over channels for a 23s seizure segment from Chb10. The weights in Fig. 4 and Fig. 5 are based on the same welltrained kernel matrix and bias matrix, but on different data segments. The two weights distributions in the two figures are different, and their differences indicate that the attention mechanism can adaptively calculate weights over channels. The magnitude of an attention weight manifests the size of differences between seizure signals and nonseizure signals on a channel. The larger an attention weight is, the better the signals on the corresponding channel can distinguish seizures from nonseizures. Fig. 6(a) is a visualization of nonseizure signals on five channels from the patient Chb04, and Fig. 6(b) for seizure signals on the same five channels from the same patient. The five channels in Fig. 6 are FP1-F3, P8-O2, P7-O1, P3-O1, and F4-C4, respectively. The visualized seizure segment in Fig. 6(b) is the one whose channel weights are shown in Fig. 4. In Fig. 4, weights on channels P7-O1 and FP1-F3 are the first two largest, and weights on channels P3-O1 and P8-O2 are the two smallest. Comparing seizure signals in Fig. 6(b) with nonseizure signals in Fig. 6(a), signals on the channel P7-O1 (i.e., signals with the magenta color) change the most. The magenta seizure signal in Fig. 6(b) oscillates with the largest magnitude range and the largest frequency. Its magnitude ranges in five zones. The seizure signal on Channel Fig. 6(b) (i.e., the blue seizure signal) fluctuates more frequently than the seizure signals on channels F4-C4 and P8-O2 (i.e., the black seizure signal and the green seizure signal). The seizure signal on Channel P3-O1 (i.e., the olive green seizure signal) oscillates in a range of two zones, which is smaller than the blue seizure signal and the black seizure signal. So, the weight values in Fig. 4 mostly characterize the sizes of differences between seizure signals and nonseizure signals in Fig. 6. Fig. 7(d) visualizes the seizure segment that the channel weights are presented in Fig. 5. Fig. 7 shows nonseizure signals and seizure signals on five channels from the patient Chb10. The five channels are CZ-PZ, F8-T8, P8-O2, P7-O1, and P3-O1, respectively. The seizure signal on Channel P7-O1 (i.e., the magenta seizure signal) in Fig. 7(d) oscillates in six zones, and fluctuates faster than the magenta nonseizure signal in Fig. 7(c). Magnitudes of the other four seizure signals range in at the most five zones. The magenta signals change the most from a nonseizure state to a seizure state. It is just why the weight on Channel P7-O1 in Fig. 5 is the largest. The seizure signals on channel F8-T8 and CZ-PZ (i.e., the blue seizure signal and the brown seizure signal, respectively) in Fig. 7(d) oscillate mostly in four zones, and the blue seizure signal fluctuates a little more frequently than the brown seizure signal. The magnitude in the seizure signal on Channel P8-O2 (i.e., the green signal) in Fig. 7(d) changes more slowly than the magnitude in the blue seizure signal, although the green signal oscillates a little more frequently than the blue seizure signal. The seizure signal on Channel P3-O1 (i.e., the olive seizure signal) in Fig. 
7(d) oscillates mostly in three zones and less frequently than the blue seizure signal. So, the weight on Channel P3-O1 is smaller than weights on channels CZ-PZ and F8-T8 in Fig. 5. B. Validation of Modules In order to validate modules in our architecture of Fig. 1, we take the model ADIndRNN-(3,3) as a representative and conduct CV experiments over 23s segments. The model ADIndRNN- (3,3) contains an attention layer and three dense blocks, each block including three IndRNN layers. So, we evaluate a model IndRNN-9. In this model, nine IndRNN layers are used; each IndRNN layer is followed by one BN layer and one max-pooling layer orderly; the last max-pooling layer is followed by one average pooling layer and two FC layers. A model AIndRNN-9 is constructed by inserting one attention layer before IndRNN-9 in order to test the contributions of attention mechanism. And also a model DIndRNN- (3,3), which is obtained by removing the attention layer in the model ADIndRNN- (3,3), is evaluated. The three models, including IndRNN-9, AIndRNN-9, and DIndRNN- (3,3), are separately performed ten rounds of CVs based on their tuned parameters. For each model, the averages and standard deviations over its achieved ten testing results are presented in Table IV. Comparing results in Table IV with results in Table I, we can see that, using IndRNN only, or the combination of IndRNN and attention mechanism, or the combination of IndRNN and dense structure does not reach the performance of the model ADIndRNN-(3,3). After adding the attention layer, the model AIndRNN-9 is a little better than IndRNN-9, and ADIndRNN-(3,3) outperforms DIndRNN-(3,3) by at least 1%. By using dense structure, DIndRNN-(3,3) is better than IndRNN-9, and the model ADIndRNN- (3,3) improves 4% in the sensitivity compared with AIndRNN-9. Thus, the attention layer and the dense structure in our architecture of Fig. 1 provide positive contributions for seizure/nonseizure classification. C. Analyses for deeper IndRNN-related models The model ADIndRNN-(3,3), as a representative of the architecture in Fig. 1, shows good performance in Table I, which is better than the LSTM approach and the CNN approach. It uses nine IndRNN layers. A natural question is what are the performances for models containing more In-dRNN layers. We evaluate six models, including IndRNN , which contains four dense blocks with three IndRNN layers per block, is obtained by removing an attention layer from the model ADIndRNN- (4,3). Using the data segmentation method in Section IV-B, each one of the above six models is performed for ten rounds of CV tests based on their tuning-parameters feed backs. And then the average and standard deviation of ten testing results for each model are calculated and shown in Table V. Compared to the results in Table IV, the accuracy for IndRNN-12 in Table V is a little better than that for IndRNN-9, and the accuracy for IndRNN-15 better than that for IndRNN-12; the performance of AIndRNN-12 is better than that of AIndRNN-9, and AIndRNN-15 better than AIndRNN-12. The accuracy of DIndRNN-(4,3) is close to that of DIndRNN- (3,3). The accuracy for ADIndRNN-(4,3) is a little better than that for AIndRNN-12, and equals to that of DIndRNN-(4,3). The comparisons indicate that, using more IndRNN layers can help improve the performance. However, the performances for the six models are not as good as the performance of the model ADIndRNN- (3,3). 
The reasons may be possibly two folds: One is that the data size of 1330 segments is not big enough to train a deeper neural network well; The other is that, the dense structure and the attention mechanism are more effective than using more layers for the seizure/nonseizure classification. VI. EFFECTS OF SEGMENT LENGTHS ON SEIZURE/NONSEIZURE CLASSIFICATION In this section, we will explore relationship between the segment length and the performance of classifying seizures/nonseizures. Generally, seizures last for less than two minutes. We select 13 temporal lengths less than 2 min and separately use each length to segment EEG signals in CHB-MIT. Besides the length of 23s, other 12 lengths are 30s, 35s, 40s, 45s, 50s, 55s, 60s, 70s, 80s, 90s, 100s, and 110s, respectively. Their segmentation methods are similar to the case of 23s in Section IV-B. For each length, ten CV experiments are performed based on a group of tuned optimal parameters by using the model IndRNN-12. In the experiments for these 12 lengths, the tuned parameters mainly include the learning rate and the number of epochs, and the numbers of hidden states are set the same as in the case of 23s. The obtained CV results over the 12 different segment lengths are listed in Table VI. Visualizations of these experiment results are shown in Fig. 8. The abbreviation of Seg. Len. means segment length. Fig. 8. Performance over data segments with different lengths Fig. 8 shows that the relationship between the data segment lengths and the performances of seizure/nonseizure classification is not clear-cut. The classification performance does not always go up or go down as the segment length increases, and it fluctuates over the segment lengths. With six lengths, including 60s, 70s, 80s, 90s, 100s, and 110s, the performances manifest wide fluctuating margins. With six lengths of 23s, 30s, 35s, 40s, 45s, and 50s, the differences of performances are relatively small. The best performance is obtained with the segment length of 90s. For the three metrics of Sensitivity, Specificity and Precision, their maximal gaps are all more than 4%. For the F1 Score and Accuracy, the maximal differences are more than 3%. It can be seen that the influence of the segment length can not be overlooked for the seizue/nonseizure classification. Different segment lengths result in different numbers of seizure segments and different amounts of seizure data in seizure segments. The amount of seizure data can be characterized by statistics, such as the average temporal length of seizure data per seizure segment and the percentage of segments with special seizure duration in all the seizure segments. The abbreviation of Num. Sei. Seg is for the number of seizure segments, Avg. Sei. Len. is for an average seizure duration per seizure segment, and Perc. Seg. Type-k is for a percentage of Type-k segments in all the seizure segments, k = 1, 2, 3. A Type-1 segment is a seizure segment in which the seizure duration is less than or equal to one quarter of segment length. A Type-2 segment is a seizure segment that its seizure duration is more than or equal to one half of segment length. A Type-3 segment is a seizure segment such that the seizure duration is more than or equal to three quarters of segment length. Generally, the four statistic values, including the number of seizure segments (abbrev., Num. Sei. Seg.), the average of seizure duration per seizure segment (abbrev., Avg. Sei. Len.), the percentage of Type-2 segments (abbrev., Perc. Seg. 
Type-2) and the percentage of Type-3 segments (abbrev., Perc. Seg. Type-3), are the greater, the performance of seizure/nonseizure classification is the better. And the percentage of Type-1 segments (abbrev., Perc. Seg. Type-1) is the smaller, the performance is the better. Actually, as the segment length increases, seizure segments decrease and the average of seizure duration per seizure segment increases. For our selected 13 segment lengths, the statistics of seizure data are presented in Table VII. Besides the changes in the number of seizure segments and the average of seizure length in Table VII, the percentage of Type-2 segments and that of Type-3 segments mostly decrease, and that of Type-1 segments increases in most cases. According to Fig. 8, the best trade-off is achieved with the segment length of 90s for the seizure/nonseizure classification. For the segment length of 90s, the average seizure duration is the largest, and the percentage of Type-1 segments is not large relatively, around one third of seizure segments. VII. DISCUSSION Keeping in mind a labor-intensive clinical practice that neurologists manually review off-line EEG data records for diagnosing epilepsy, we have developed an automatic approach of ADIndRNN to identify seizure segments and nonseizure segments. Each EEG data record is segmented into segments, and then the obtained segments are classified into two categories, seizure versus nonseizure. The classified seizure segments are provided to neurologists to make further analyses. For the task of classifying seizures/nonseizures to aid neurologists with offline annotations, the metrics such as sensitivity, specificity and precision are more relevant than the metrics of false alarm rate and latency. So, we evaluate our proposed approach with five metrics, including sensitivity, specificity, F1 score, precision, and accuracy. By integrating IndRNN with a dense structure and an attention mechanism, we propose the deep learning approach of ADIndRNN for the seizure/nonseizure classification. Our proposed approach outperforms the LSTM approach and the CNN approach with the improvements of at least 4%. IndRNN is a variant of RNN. It supports stacking multiple layers and can handle longer sequences than LSTM and RNN [19]. The IndRNN layer extracts features from the forward direction. The dense structure is to ensure maximum information flow between layers [20], and the attention mechanism is used to extract spatial features. The validation results in Section V-B demonstrate that the dense structure and the attention mechanism have positive effects on the performance of our approach. In the development of the approach, a bidirectional structure of IndRNN layers is constructed in order to extract features in two directions. Our experiments show that the bidirectional structure of IndRNN is time-consuming in computation while the performance gain is marginal. We utilize the attention mechanism to generate attention weights over channels instead of adopting direct training method. For the directly training way, parameters representing weights on channels are learned. After training, the learned weights are the same for all the data segments. In fact, one patient possibly experiences different seizure types. Seizures with different types may originate in different brain regions. And different patients may have different seizure patterns. The weights on channels, which describe the strengths of signifying seizures, need to change with seizure types and seizure patterns. 
So, without dwelling on the directly training method we choose to use the attention mechanism. In the attention mechanism, a kernel matrix and a bias matrix are obtained by training. The two trained matrices are combined with data segments to adaptively generate weights through transformations. When segmenting EEG records, we attempt to ensure that the obtained data segments are close to a real-world scenario. A seizure segment could contain seizure data and nonseizure data. It is unrealistic in the real world that all the seizure segments only contain seizure data. As the LSTM approach and the CNN approach were evaluated over signals with duration of 23.6 seconds from Bonn EEG data set [17], [21], a segment length of 23s was selected for evaluating the proposed approach against the LSTM and CNN approaches. In the exploration of relations between segment lengths and performances of classifying seizure/nonseizure, we adopt the model IndRNN-12. The choice is based on the following three considerations: (1) For models containing more IndRNN layers or attention layer or dense structure, more parameters need to be trained, while the number of seizure segments decreases as the segment length increases, which increases the over-fitting risk. (2) The model IndRNN-12 has relatively good results over the 23s segments, as shown in Table V. (3) The used model in the exploration needs to be kept the same for the 13 segment lengths. VIII. CONCLUSIONS This paper considers automatical classification of seizures/nonseizures for assisting neurologists in making epilepsy diagnosis. We propose a new approach of dense IndRNN with attention by integrating an emerging neural network model, IndRNN, with a densely connected structure and an attention mechanism. The IndRNN supports stacking multiple layers to capture seizure patterns, the dense structure ensures maximum information flow between layers, and the attention mechanism helps extract spatial features. Evaluations of our proposed approach are performed in CV experiments over the noisy data set of CHB-MIT. The obtained average sensitivity, specificity, F1 score, precision and accuracy are 88.80%, 88.60%, 88.71%, 88.69%, and 88.70%, respectively. These results exceed the LSTM approach [21] and the CNN approach [17] with an improvement of at least 4%. Additionally, we explore how the segment length affects the performance of seizure/nonseizure classification. Our CV experiments over 13 segment lengths indicate that the classification performance fluctuates over the segment lengths, with the maximal fluctuating margin being more than 4%. The segment length is thus an important factor influencing the seizure/nonseizure classification performance. As a future research line, we will further investigate how to use the dense IndRNN with attention for real-time seizure detection.
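For reference, the five metrics reported throughout (sensitivity, specificity, F1 score, precision, and accuracy) follow the standard confusion-matrix definitions; a small helper such as the one below reproduces them from binary predictions. This is a generic sketch, not code from the paper.

```python
import numpy as np

def seizure_metrics(y_true, y_pred):
    """Standard binary-classification metrics (seizure = 1, nonseizure = 0)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    sensitivity = tp / (tp + fn)          # also called recall
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    accuracy = (tp + tn) / len(y_true)
    return dict(sensitivity=sensitivity, specificity=specificity,
                precision=precision, f1=f1, accuracy=accuracy)

# toy usage
print(seizure_metrics([1, 1, 0, 0, 1, 0], [1, 0, 0, 1, 1, 0]))
```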
Calcium imaging of adult-born neurons in freely moving mice Summary Adult-born neurons (ABNs) in the dentate gyrus bestow unique cellular plasticity to the mammalian brain. We recently found that the activity of ABNs during sleep is necessary for memory consolidation. Here, we describe our method for Ca2+ imaging of ABN activity using a miniaturized fluorescent microscope and sleep recordings. As preparatory surgery and post-recording data processing can be major obstacles, we provide detailed descriptions and problem-solving tips. For complete details on the use and execution of this protocol, please refer to Kumar et al. (2020). Step-by-step method details Induction of Ca 2+ sensor expression in ABNs Timing: 1-8 h Express GCaMP3 via tamoxifen-inducible Cre recombinase in adult neural stem/progenitor cells. 1. Prepare 20 mg/mL tamoxifen solution. a. Heat corn oil to 42 C for 30 min. b. Dissolve tamoxifen in corn oil at a concentration of 20 mg/mL in an ultrasonic bath (generic consumer glassware is sufficient) or with continuous shaking at 37 C. Store the prepared solution at 4 C. Ghosh et al., (2011). For a more generalized protocol to image other brain regions, please refer to Resendez et al. (2016) for a recent overview. In vivo Ca 2+ imaging of ABNs in the DG was first described by Danielson et al. (2016) in awake head-fixed mice. Note: We implanted EEG/EMG electrodes to identify sleep stages in combination with ABNs activity. Mammalian sleep contains two major orthogonal states: rapid eye movement (REM) sleep and non-REM (NREM) sleep. Each sleep stage is characterized by prominent neural oscillatory patterns. For example, slow waves in the cortex are abundant during deep NREM sleep, whereas theta oscillations in the hippocampus occur during REM sleep, both of which play critical roles in memory consolidation (Boyce et al., 2016;Marshall et al., 2006). Details of sleep recording and analysis are available in Oishi et al. (2016). 5. Open a cranial window sufficiently large for lens and electrode implantation. a. Shave hair from the surgical site. b. Secure the head of the mouse in a stereotaxic frame. c. Cover the eyes of the mouse with white petrolatum cream to prevent dryness. d. Disinfect the skin at the surgical site. e. Make a small incision in the skin using a spring scissor along the sagittal line of the skull. f. Clean the exposed skull with cotton swabs. g. Scratch the skull surface using a drill bit to make it rough, which will facilitate later adhesion of glue and dental cement. h. Obtain a flat skull position by matching the height of two parallel points along the sagittal axis (G3 mm lateral from bregma) as well as the height of bregma and lambda points. i. Mark the desired stereotaxic coordinates for the GRIN lens and EEG electrodes on the skull surface using the stereotaxic manipulator. j. Drill five holes in the skull: one for the GRIN lens, two over one hemisphere for EEG electrodes, and two over the contralateral hemisphere for anchoring electrodes (i.e., not attached to EEG wires; Figure 2, Methods Video S1, 00:00-00:30). Note: The electrode holes should be just large enough to accommodate the tip of the electrodes. In Figure 2, the holes were drilled above the frontal and parietal cortices on both sides of skull as an example: anteroposterior (AP) +1.5 mm and À3 mm and mediolateral Optical section of the dentate gyrus (DG) with confocal microscopy showing ABNs expressing the Ca 2+ sensor GCaMP3 induced by tamoxifen injection. 
The letters g, h, and ml represent the granule cell layer, hilus, and molecular layer, respectively. The arrowhead indicates the subgranular zone, and the dotted line indicates the border between the DG and cornu ammonis 1 (CA1). Scale bar, 100 mm. (ML) G1.5 mm and G1.7 mm, respectively. The anchoring electrodes prevent the implant from becoming detached at later points in the experimental protocol. k. Make a cranial window for GRIN lens implantation using a drill (Methods Video S1, 00:30-00:41). Note: The cranial window should be just large enough to accommodate the GRIN lens implant. We recommend using an Inscopix GRIN lens 1 mm in diameter and 4 mm in length to record ABNs in the dorsal DG. In this case, a 1.2-mm 2 cranial window centered at AP À2 mm and ML +0.7 mm works well. 6. Implant the EEG electrodes, anchoring screws, and GRIN lens. a. Insert the EEG electrodes and anchoring screws epidurally (i.e., not completely penetrating the skull; Figure 2, Methods Video S1, 00:42-01:31). b. For the lens hole, aspirate cortical tissue above the region of interest using a glass pipette connected to a vacuum pump in circular movements from the center to the periphery of the cranial window (Methods Video S2, 00:00-00:36). While aspirating the cortex, the brain tissue will first have a homogeneous appearance. Stop aspiration when white matter tracts of the corpus callosum and alveus hippocampus are visible as identified by striations ( Figure 3). Note: From this point onward, some bleeding is normal during the procedure. Continuously irrigate the exposed brain tissue with sterile saline while aspirating the cortex (Methods Video S2, 00:13-00:23). CRITICAL: A surgical microscope is needed to observe distinctions in brain structure during surgery. In our experience, damaging the CA1 by aspiration usually destroys the structural organization of the DG. c. After reaching the corpus callosum fibers, keep the exposed brain tissue irrigated with sterile saline while aspirating any blood present in the region to maintain visibility (Methods Video S2, 00:37-00:53). d. Attach a GRIN lens to a stereotaxic manipulator using a clip holder ( Figure 4) and align it above the center of the cranial window (Methods Video S3, 00:00-00:15). OPEN ACCESS Note: We made a lens holder by gluing a generic spring clip holder to the base of a manipulator bar. We recommend holding the GRIN lens at the tip ($0.5-mm length) of the clip holder and keeping the lens straight (Methods Video S3, 00:00-00:09). Methods Video S3. Implanting the GRIN lens, refer to step 6d-h e. Carefully lower the lens using 0.1-mm dorso-ventral steps into the hippocampus. The final coordinate of the objective surface of the GRIN lens is 1.3 mm lower than the top of the skull (Methods Video S3, 00:16-00:34). Note: We lower the lens at a rate of $500 mm/min. Tissue swelling and relaxation affect the quality of subsequent recordings. We do not recommend using a smaller diameter lens because the resulting reduction in field of view makes it substantially more difficult to observe a population of ABNs that is sparse in both number and activity. f. Fix the GRIN lens to the skull and all four screws with a thin layer of cyanoacrylate adhesive (Loctite 454, referred to henceforth as Loctite glue) and allow it to cure for $5-10 min (Methods Video S3, 00:35-00:53). Note: We find that adding dental cement liquid on top of Loctite glue rapidly solidifies the adhesive and reduces this step to a few minutes. g. 
Release the lens from the stereotaxic manipulator (Methods Video S3, 00:54-01:00). h. Cover the skull using a layer of Loctite glue and dental cement liquid around the insertion point (Methods Video S3, 01:00-01:23) and allow it to completely cure. i. Attach the EEG/EMG socket to the skull using Loctite glue, add a drop of dental cement liquid, and allow it to completely cure (Methods Video S4, 00:00-00:11). CRITICAL: The EEG/EMG socket should be fixed far enough from the lens to avoid later collision between the socket and microscope. . White matter tracts with white striations exposed during aspiration of cortical tissue The area of white matter tracts is surrounded by the light blue dotted line. The cranial window (blue dotted line) is larger than the exposed area. Scale bar, 1 mm. j. Insert two wires into the cervical portion of the trapezoid muscles for EMG recording (Methods Video S4, 00:12-00:33). k. Apply carbon black powder-mixed dental cement (liquid + powder) to any exposed area of the skull as well as to the area where the Loctite glue meets the GRIN lens and EEG/EMG wires and let it solidify for $10 min (Methods Video S4, 00:34-00:49). CRITICAL: Completely cover the EEG/EMG wires with Loctite glue to prevent damage when the mouse scratches its head ( Figure 5). l. Apply Loctite glue to the outer border of the dental cement to connect it to the surrounding skin and wait until it completely cures (Methods Video S4, 00:50-01:00). 7. Cover the GRIN lens and surrounding area with silicone to protect it from damage resulting from mouse activity in the home cage ( Figure 6, Methods Video S4, 01:01-01:12). Optional: Attach a rectangular metal frame (12 3 19 3 1 mm) to the skull using dental cement and let it solidify for 5 min. This frame will prevent the mouse from scratching the lens and aid in manipulating the awake mouse when attaching the miniaturized fluorescent microscope (miniscope) and EEG/EMG cables before recording. 8. Finish the surgical procedure. a. Stop anesthesia. b. Remove the mouse from the stereotaxic holder. c. Administer postoperative 5% glucose solution and ibuprofen (30 mg/kg). d. Place the mouse in a clean cage partially covering a heating pad to prevent hypothermia and allow it to recover. 9. House the operated mouse individually to avoid damage to the implant. Note: To prepare ibuprofen (30 mg/kg) solution, dissolve 30 mg ibuprofen powder in 100 mL of 100% ethanol, add 1 mL sunflower oil, and evaporate the ethanol by centrifuging. Postoperative recovery Timing: at least 1 week Allow operated mice to rest for at least 1 week with minimal manipulation. OPEN ACCESS Note: We recommend keeping mice in the sleeping chamber/room during recovery to habituate them to the environment. Note: Manipulation of mice during this period may cause detachment of the lens and electrodes, as some degree of inflammation might still be present, making the implant more vulnerable to detachment from the skull. Miniscope baseplate attachment Timing: 30 min per mouse Attach a magnetic baseplate for the miniscope to lens-implanted mice. 10. Anesthetize the mouse with an anesthetic of choice (e.g., isoflurane). 11. Secure the head of the mouse in the stereotaxic frame. 12. Prepare the miniscope. a. Attach the magnetic baseplate to the miniscope. b. Secure the miniscope to its gripper tool and attach it to a micromanipulator. c. Connect the recording hardware to the computer. d. Position the miniscope above the head of the mouse. e. Start the image acquisition software. 13. 
Attach the magnetic baseplate. a. Remove the silicone protection from the skull of the mouse. b. Adjust the position of the miniscope using the micromanipulator, centering it above the implanted lens. Figure 5. Electrodes and GRIN lens fixed with dental cement The skull and electroencephalogram (EEG)/electromyogram (EMG) wires are completely covered with black dental cement. Note that the EEG/EMG socket (green arrow) and GRIN lens (blue arrow) are not covered. c. Turn on the LED light of the miniscope and lower it until the top surface of the lens becomes visible through the camera. d. Carefully continue lowering the miniscope until the DG becomes visible. Note: The DG will be recognizable by its abundant vasculature. Due to GCaMP3 expression in ABNs and our implantation method, this will be the first focusable structure. e. From this point, to adjust the imaging focus to the granular/subgranular layer, lower the miniscope $50 mm. The vasculature should become slightly out of focus at this point ( Figure 7A). Optional: When the outlines of GCaMP3-expressing ABNs can be visualized, we recommend switching to the nVista software DF/F function to confirm the presence of dynamic changes in fluorescence corresponding to Ca 2+ transients ( Figure 7B). Note: Due to the low signal-to-noise ratio of GCaMP3 and the small number of ABNs labeled using our protocol, the fluorescence signal from ABNs may be undetectable at this point. It may still be possible to observe ABN activity with post-recording processing of the images. The success rate is $50%. f. After choosing the optimal focal plane, secure the baseplate to the skull using dental cement (liquid + power) and allow it to completely cure ( Figure 8, Methods Video S5). CRITICAL: When fixing the baseplate, there are three points at which dental cement should not be directly applied: the surface of the implanted lens, the screw for the magnetic baseplate, and the miniscope. Note: Fixing the baseplate without gaps will prevent dust and ambient illumination from interfering with the field of view during imaging. Note: Generic dental cement may shrink to a small degree when it becomes solid. If the position of the baseplate changes significantly, setting the baseplate 5-10 mm above the optimal focal plane before applying dental cement or using C&B Metabond as recommended by Inscopix may be helpful. In addition, the focal plane can be adjusted at the time of recording by turning the microscope as instructed by Inscopix. 14. Finish the imaging preparation. a. After the dental cement has solidified completely, release the miniscope from the gripper tool. b. Detach the miniscope from the magnetic baseplate, which should remain attached to the skull. c. Attach the baseplate cover to prevent dust on the lens. d. Release the mouse and return it to its home cage. Note: At this point, mice are in principle ready for recording. Habituation for EEG/EMG recording and Ca 2+ imaging during sleep Timing: $6 h per day for at least 3 days per mouse Habituate mice to the miniscope and EEG/EMG cables. 15. Attach the dummy miniscope and dummy EEG/EMG cables to the mouse. CRITICAL: The mouse will stop moving if its eyes are covered (Methods Video S6). Avoid applying too much force to the implant, which could cause its detachment later. b. Remove the baseplate cover and attach the dummy miniscope and dummy EEG/EMG cables ( Figure 9, Methods Video S6, 00:24-01:39). Note: Mice might initially show difficulties moving around with the miniscope and EEG/EMG cables. 
We find that a minimum of 9 days of habituation is necessary for mice to freely move. As the goal is only to habituate mice, it is not necessary to use the real recording system at this point. Note: Periodically visually confirm whether the mouse sleeping. Mice should be able to fall asleep more quickly after each habituation session. 17. Finish the habituation protocol. a. Remove the mouse from the sleeping chamber. b. Detach the dummy materials and attach the baseplate cover. c. Return the mouse to its home cage. 18. Repeat the habituation protocol on the following day until the mouse is able to sleep and freely move around inside the sleeping chamber. Note: For illustrative purposes, we present a protocol for cued fear conditioning as previously described (Kumar et al., 2020). Record EEG/EMG and Ca 2+ transients during sleep. 19. Prepare the mouse for EEG/EMG recording and Ca 2+ imaging. a. Start the acquisition software and set the imaging parameters. Note: For consistency, use the same imaging settings/parameters for all mice. For lengthy imaging sessions, we recommend setting the acquisition rate at $5 Hz to reduce the file size. For 5 Hz recordings, we find that an LED intensity of 10%-30% of its maximum power for 4 h does not oversaturate any detected pixel. We usually do not observe cell death resulting from phototoxicity during lengthy imaging sessions, but a small degree of photobleaching is inevitable. As the signal-to-noise ratio of GCaMP3 is not as good as that for newer-generation Ca 2+ sensors, we recommend leaving gain at 1 to reduce imaging noise. b. Secure the miniscope and EEG/EMG cables above the recording chamber. CRITICAL: To avoid damage to the cables, ensure that they do not hang close to the mouse, which can bite and damage them. At the same time, there should be sufficient slack on the cables to avoid movement-induced cable twisting and strain. Note: We placed the EEG/EMG and miniscope cable close to each other so that their rotation axes are similar. During our recording period (maximum $4 h), we did not experience substantial tangling of the EEG/EMG and miniscope cables, presumably because we typically perform experiments during the early half of daytime, when mice are often sleeping. If tangling occurs, we recommend using a slip ring for the miniscope with a hole to accommodate the EEG/EMG cable passing through it (or vice versa). c. Remove the baseplate cover and attach the miniscope and EEG/EMG cables. CRITICAL: Periodically monitor the state of the mouse and cables to ensure that that mouse is behaving naturally and there is no twisting of the cables. Note: If necessary, it is possible to record the activity of neurons during the foot shocks of fear conditioning (Grewe et al. 2017). In this case, we recommend covering the interior walls of the conditioning chamber with a shock-absorbing material to prevent damage to the miniscope if the mouse hits the walls when reacting to the foot shocks. Note: Avoid detaching the miniscope and cables from the head of the mouse when transferring it to the sleeping chamber to avoid a change in the field of view. If necessary, disconnect the cables from the computer port and reconnect them after transferring the mouse. Note: If the mouse is not properly habituated to the experimental manipulation, it is possible that some procedures (e.g., plugging and unplugging the cables/miniscope) could stress the mouse and affect its freezing response. 
Therefore, we suggest confirming whether freezing is specifically observed in the shocked context. For a no-learning control group, a no shock or immediate shock protocol can be considered. Note: Attaching the miniscope during the test tends to reduce the overall amount of freezing (Kumar et al., 2020). To avoid confounding effects, we habituated mice to the weight of the miniscope using a dummy miniscope and cables. A similar approach is widely adopted in tetrode experiments. We found that although mice tend to show low freezing behavior when the miniscope is attached, they freeze only in the learning context (Kumar et al., 2020). If the freezing duration is still too short, longer habituation with the dummy microscope or counterbalancing the microscope should be considered. Alternatively, a slip ring could be used. 28. Finish the recording as done on the previous day. Anatomical confirmation of the recorded signals Timing: dependent on the desired histological method Verify the position of the implanted GRIN lens post-mortem. 29. Obtain a brain sample a. Euthanize the mouse according to institutional guidelines. b. Transcardially perfuse the mouse with cold 4% paraformaldehyde (PFA). c. Remove the brain and place it inside a tube with 4% PFA at 4 C for 24 h. Pause point: The brain can be stored at 4 C for several days. In this case, exchange the PFA for phosphate-buffered saline (PBS) after 24 h and protect the tube from light, but expect a natural decay of the fluorescence signal over time. Long exposure to PFA may interfere with immunohistochemical detection of antigens, some of which are critical for adult neurogenesis research, such as doublecortin (Moreno-Jimenez et al., 2019). Optional: A peristaltic pump can be used for perfusion. Replace the 4% PFA with 20%-30% sucrose solution if a cryostat will be used to slice the brain. After the brain sinks, remove it from the tube and cryopreserve it in an OCT mounting medium block at À80 C. a. Slice the brain using a cryostat or vibratome at 50 mm thickness and collect the slices in plastic wells or dishes with desired buffer. Note: We use 60% glycerol/PBS solution for long-term storage of brain slices in a freezer. In case slices must be kept at 4 C, we recommend adding 0.1% sodium azide to the buffer to prevent bacterial contamination. 31. Confirm the location of the implanted lens and GCaMP3 fluorescence signal. a. Mount the brain slices on microscope slides using fluorescence-preserving medium. b. Image the slides with a fluorescence microscope ( Figure 10). Note: GCaMP3 fluorescence is usually sufficiently bright to be detected without immunohistochemical staining. If it is not possible to observe the fluorescence signal due to a particular reason (e.g., acetone pretreatment), a simple antibody stain targeting the GFP molecule is sufficient to confirm GCaMP3 expression. We found that using Malinol mounting medium is an economical option for rapid confirmation of the signal. Expected outcomes Successful completion of this method allows the detection of ABN Ca 2+ transients in freely moving mice. A recording session configured with the parameters described here will produce a raw recording file with a size of $1,327 kb/frame. A representative recording video of Ca 2+ transients is provided in our previous article (Kumar et al., 2020). Quantification and statistical analysis Here, we present steps for processing raw video files into Ca 2+ fluorescence time series. 
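Note: Before preprocessing, it can help to estimate how much raw data a session will generate. The MATLAB sketch below is illustrative only; the acquisition rate, session length, and per-frame size are example inputs (per-frame size depends on sensor settings and should be read from your own files), not fixed values of this protocol.

```matlab
% Rough estimate of raw data volume for one recording session (sketch).
frameRate  = 5;      % Hz, acquisition rate recommended above for long sessions
durationH  = 4;      % hours of recording
kBPerFrame = 1327;   % approximate raw size per frame, kB (check your own files)

nFrames = frameRate * durationH * 3600;
sizeGB  = nFrames * kBPerFrame / 1e6;    % 1 GB ~ 1e6 kB (decimal)
fprintf('%d frames, roughly %.0f GB of raw data\n', nFrames, sizeGB);
```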
We use Mosaic software for preprocessing and motion correction and the MATLAB implementation of the constrained non-negative matrix factorization (CNMF)-E algorithm (Zhou et al., 2018) and custom scripts to optimize extraction of ABN Ca2+ transients (available through the GitHub link below). Note: We recommend the following system requirements: ≥128 GB RAM; MATLAB 2020 or above; most recent Intel Core i7 processor; 1 TB SSD. 1. Decompress the raw files using the Inscopix image decompressor. Note: We recommend downsampling the video by a factor of 4 during decompression to save disc space and reduce loading times. Decompressing video files into .hdf5 format substantially improves loading times in Mosaic and MATLAB. 3. Motion correct the video in Mosaic or other software while considering the specific characteristics of ABN activity as described below. Note: Ideally, perform optimal motion correction by choosing a reference point in the field of view where most ABNs are located with clear spatial markers (i.e., blood vessels). This is important because movements of the brain occur in three dimensions and might be non-rigid. Thus, perfect motion correction in the entire field of view is not possible in most cases. Instead, choose reference regions that minimize motion in the region where ABNs are located and consider cropping out unnecessary portions of the video, which will reduce the number of false positives and decrease processing time. Moreover, some regions in the field of view may move asynchronously. This can occur when tissue or blood clots get stuck between the lens and focal plane, which creates different planes of movement in the image. Try to avoid analyzing regions that contain desynchronized movement. Note: Detecting ABN activity by the naked eye can be challenging. This is because ABNs are low in number (~5-25 ABNs per field of view) and very sparsely active (~1 transient/min). Moreover, the GCaMP3 signal is weaker than that of newer-generation GCaMPs. To improve the identification of regions with active ABNs, adjust the contrast to between 40%-95% of the maximum observed signal. An estimation of neuronal location can be obtained in the motion correction panel when you check ''subtract spatial mean,'' ''invert image,'' and ''apply spatial mean,'' which makes ABNs visible as black circular shapes in the field of view. CRITICAL: After performing motion correction, check that no major signs of motion are visually detected. A single run of the motion correction algorithm is usually not sufficient to solve all problems. Instead, try using different combinations of reference region and motion correction type (e.g., translation, rotation). We recommend first using ''translation only'' as the motion correction type. If there is still motion that was not corrected, check whether this is due to rotations or expansions in the field of view and change the motion correction type accordingly. Note: For information on how to perform motion correction when tracking the same ABNs through different recording sessions, see the ''tracking the same neurons'' section in our previous article (Kumar et al., 2020). Note: Delete extracted components with temporal traces or spatial shapes that do not correspond to real neurons. If you must delete too many neurons, consider repeating the analysis with higher peak-to-noise ratio (PNR) or local correlation (CORR) thresholds. b. Run ''neuron.show_contours(0.6, [], neuron.PNR.*neuron.Cn, 1)'' to see the contours of the extracted components over the PNR image multiplied by the CORR image.
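Note: A quick way to judge whether an extracted contour plausibly corresponds to a neuron is its circularity (isoperimetric quotient), the same metric used in the comparisons later in this section. The sketch below is illustrative: neuron.A is assumed to follow the CNMF-E convention for spatial footprints, and the binarization threshold and image size d1 x d2 are placeholder values.

```matlab
% Sketch: isoperimetric quotient 4*pi*Area/Perimeter^2 of one footprint
% (1 = perfect circle). Assumes neuron.A holds spatial components
% (pixels x neurons) and d1, d2 give the field-of-view size.
k  = 1;                                      % component to inspect
A  = reshape(full(neuron.A(:, k)), d1, d2);  % footprint as an image
bw = A > 0.3 * max(A(:));                    % binarize (0.3 is illustrative)
st = regionprops(bw, 'Area', 'Perimeter');   % requires Image Processing Toolbox
[~, idx] = max([st.Area]);                   % keep the largest blob
circ = 4*pi*st(idx).Area / st(idx).Perimeter^2;
fprintf('component %d: isoperimetric quotient = %.2f\n', k, circ);
```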
Note: Enlarged non-circular regions usually reflect motion artifacts. If contours are not drawn over several neuron-like regions, this suggests that the PNR or CORR threshold were set too high. If contours are drawn over several non-neuron-like regions, this suggests that the PNR or CORR threshold were set too low. c. Run ''implay(cat(2,mat2gray(neuron.PNR_all),mat2gray(neuron.Cn_all)),5);'' to see the PNR and CORR images from consecutive temporal segments of 1,000 frames. d. Check for signs of motion artifacts and the presence of a visual representation of the mean activity of ABNs across different times. e. Run stackedplot(neuron.C_raw); to see several Ca 2+ transients at the same time. Note: Correlated ensemble activity is common in ABNs; however, perfectly overlapping rising dynamics with square-like shapes may indicate motion artifacts. Check the video in the specific frames to corroborate whether this is real ensemble activity (example in Methods Video S8). Pause point: To extract Ca 2+ transients, we use CNMF-E (Zhou et al., 2018) with some modifications. We recommend first reading the original CNMF-E article to understand the main concepts behind the algorithm. We choose CNMF-E because it is particularly good at extracting noisy signals, and the code is easily adapted to a wide range of experimental needs. We made some minor modifications to the original code that are necessary for lengthy recordings (>10 min), which is available in GitHub link below. For shorter recordings, we recommend using the original code. The parameters that mainly affect CNMF-E results are the minimum local correlation of a pixel and its neighbors ( Figure 11A, left; noted as ''min_corr'') and the minimum PNR of a pixel ( Figure 11A, right; noted as ''min_pnr''). These parameters are thresholds used to obtain the first estimation of the spatial and temporal components that will be used to initialize the CNMF algorithm. For sparse data, such as that from ABNs, total video length may affect the estimation of CORR. This happens because in one-photon imaging, pixels associated with a neuron are highly correlated only when that neuron is active. Given that ABNs are usually inactive, increasing the video length may be translated into a poor CORR, especially for files that are several hours long. To solve this problem, we segment the video in short overlapping batches of 1,000 frames, which allows us to estimate the spatial component of ABNs from discrete temporal windows in which they are highly active. This is depicted in Figure 11B, in which the CORR and Ca 2+ transients of three close ABNs are shown for three consecutive batches. Note that ABNs display prominent Ca 2+ transients that translate into a higher CORR, particularly in the third batch. Because the spatial component of each neuron is shared across batches, only this batch is necessary to initialize those ABNs, even if other batches have CORR comparable to the background level. Having obtained a good initial estimation of spatial components, the CNMF algorithm can extract temporal traces from other batches, even if its CORR or PNR is below the defined threshold. An overlapping batch approach produces a cleaner estimation of CORR, in contrast to the analysis of the entire video sequence ( Figure 11C). The multi-batch algorithm is implemented in the original CNMF-E article (Zhou et al., 2018); however, we find that in some cases, artifacts are introduced in the concatenation point between batches. 
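Note: One way to avoid such concatenation artifacts is to let consecutive batches overlap. A minimal sketch of the index bookkeeping is shown below; the 1,000-frame batch length matches the windows described above, whereas the overlap and total frame count are illustrative, and the full implementation is in the GitHub scripts mentioned earlier.

```matlab
% Overlapping-batch bookkeeping (sketch only).
nFrames  = 30000;   % total frames in the recording (example)
batchLen = 1000;    % frames per batch
overlap  = 200;     % frames shared by consecutive batches (example)

step    = batchLen - overlap;
starts  = 1:step:(nFrames - batchLen + 1);
batches = [starts(:), starts(:) + batchLen - 1];  % [first, last] frame per batch
batches(end, 2) = nFrames;                        % absorb any trailing frames
fprintf('%d overlapping batches of ~%d frames\n', size(batches, 1), batchLen);
```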
Thus, we implement a multi-batch algorithm using overlapping batches. We also include some utility functions to correct for variations in baseline noise between batches and some minor bugs producing empty matrices given the sparse activity of ABNs. Our code contains optimized parameters that can extract Ca 2+ transients from ABNs in most cases. To illustrate the differences between the overlapping batches (OB) method and the conventional CNMF-E (CC) method (i.e., running CNMF-E on the whole video sequence), we compared the spatial and temporal components extracted by both methods ( Figure 11D). True-positive neurons are expected to display circular shapes and Ca 2+ transients distinguishable from noise. Neurons extracted by the OB method display higher circularity ll OPEN ACCESS and PNR than those extracted by the CC method ( Figure 11E). Indeed, several neurons extracted by the CC method display spatial components that largely differ from a circular shape. Considering those spatial components with circularities five standard deviations below those extracted by the Figure 11. Analysis of Ca 2+ imaging data Overlapping batch analysis. (A) Local correlation (CORR) and peak-to-noise ratio (PNR) image of three neurons in a recording file. (B) Ca 2+ transients detected and CORR for three neurons in three consecutive batches. Black trace shows the final estimated signal for the entire recording. (C) Normalized CORR for a 3-h video obtained by analyzing the entire video sequence versus the maximum projection of the CORR images obtained by overlapping batch implementation. Note that CORR of the entire video sequence includes many highly correlated non-circular shapes that are unlikely to represent real ABNs. Analysis of the entire video will introduce several false positives and noisier estimation of Ca 2+ transients in this case. (D) Spatial and temporal features of Ca 2+ transients extracted by conventional CNMF-E (CC) and overlapping batches (OB) methods. The two transients were randomly chosen from those extracted by CC or OB methods. (E) Circularities (isoperimetric quotient) of spatial components and PNR of temporal traces. (F) Example of spatial components extracted from an individual batch (i), the OB method (ii), or the CC method (ii) in the same time period. Because there are more neurons in the entire recording period than in an individual batch, spatial components were weighted by average activity during the period. (G) Left: Cosine similarity of spatial components for the individual batch versus CC method and individual batch versus OB method. The star symbol in the figure corresponds to the spatial components shown in (F). Right: Temporal correlation of Ca 2+ transients of common neurons for the individual batch versus CC method and individual batch versus OB method. A neuron extracted by the CC or OB method was considered to be the same as that in an individual batch if its spatial component had a cosine similarity >0.8. If more than two pairs of neurons satisfied this condition, the pair with the higher temporal correlation was used for calculations. OB method, we estimated that 34% of components extracted by the CC method are false positives ( Figure 11E, left, red dots). Next, we examined how the final temporal and spatial components of the transients extracted by the OB or CC method (analysis of >30,000 frames) differ from those extracted from the analysis of an individual batch (analysis of $1,000 frames) ( Figure 11F). 
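Note: In these comparisons, a neuron extracted by one method is matched to a neuron from another analysis through the cosine similarity of their spatial footprints (threshold 0.8, as stated above). A minimal sketch; the example footprints are random placeholders.

```matlab
% Cosine similarity between two spatial footprints a1 and a2 (sketch).
cosSim = @(a1, a2) (a1(:)' * a2(:)) / (norm(a1(:)) * norm(a2(:)) + eps);

a1 = rand(100, 1);             % placeholder footprint
a2 = a1 + 0.1 * rand(100, 1);  % slightly perturbed copy
fprintf('cosine similarity = %.2f (same neuron if > 0.8)\n', cosSim(a1, a2));
```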
Components extracted by the OB method were spatially and temporally more similar to those extracted from an individual batch ( Figure 11G). We estimate that, on average, 61% of neurons remain undetected in individual batch analysis compared with the OB method ( Figure 11H, left) because they display PNR significantly below the detection threshold. These results indicate that the OB method minimizes the number of false positives without compromising true-positive detection. 6. To track ABNs across different sessions, motion correct each session as previously described (Ghandour et al., 2019). In Mosaic software, this is done as follows: a. Motion correct each session independently. b. Extract one frame from one session to use as a reference for other sessions. c. Motion correct each session relative to the extracted frame. Note: Sometimes using different reference frames from different sessions provides better results. Consider the points mentioned in ''Quantification and statistical analysis: step 3.'' d. Concatenate the recording sessions, ensuring that sessions are correctly aligned. Pause point: Tracing ABNs across several days using tracking algorithms (Sheintuch et al., 2017) may not be possible given the small number of ABNs. However, we were able to track ABNs across different recording sessions within the same day by concatenating different sessions into one file. Spatial markers, such as blood vessels or ABNs with persistent activity, can be used to align different sessions. Analyzing sessions independently and then identifying the same ABNs based on their footprints assumes that all ABNs can be detected in each recording session. However, this is not always the case, especially considering the sparse activity of ABNs and the low signal-to-noise ratio of GCaMP3. This issue is illustrated in Figure 11B; note that in the first batch, three ABNs are active with CORR comparable to the background local correlation. If lower CORR and PNR thresholds are used to detect these ABNs, many false-positive neurons would be included in the analysis, further hindering the tracking of ABNs. Instead, we initialize these ABNs in a specific period in which they show high activity (e.g., third batch in Figure 11B). The CNMF algorithm is then used to extract Ca 2+ transients from other batches with a lower signal-to-noise ratio. This is particularly important when tracking the same ABNs because some may be more active in some recording sessions than in others and, hence, be wrongly labeled as inactive if these sessions are analyzed independently. 7. After extracting the raster plot of ABN activity, we further analyzed data for each sleep stage in 10-s bins using Sleep Sign software (example raw data are available in Kumar et al., 2020). Alternatively, several other types of software are available for sleep analysis (e.g., Sleep, Combrisson et al., 2017). More details of sleep recording and analysis are available in Oishi et al. (2016). Limitations Although GCaMP imaging and miniscope technologies fuel new discoveries of how neuron activity generates specific behaviors in freely moving animals, their limitations are well known. One limitation is related to the resolution of the z-axis. The miniscope used in our study (Kumar et al., 2020) Another limitation is that the experimental preparation for DG imaging in freely moving mice results in a partial lesion of the ipsilateral CA1 area when implanting the GRIN lens. 
Less invasive techniques could be used in the future to prevent possible circuit reorganization induced by surgery, as previously recommended (Hainmueller and Bartos, 2018;Kirschen et al., 2017;Pilz et al., 2016). Finally, the developing nature of ABNs imposes a time constraint on the experimental schedule. As the activity and behavioral significance of ABNs change as they transition from immature to mature granule cells (e.g., Kumar et al., 2020), it can be challenging to use the same mice in multiple behavioral tasks and ensure that the recorded ABNs have the same physiological properties across the task periods. A final limitation is the overall cost of the necessary equipment. Using the described hardware makes the experiment faster and easier to perform, but it costs more than $150,000 USD in total. Fortunately, more budget-friendly options are available. A similar miniscope can be built by a researcher using the open-source miniscope platform (UCLA Miniscope). In addition, simpler over-the-counter computers can be used for the experiments, but this would likely increase the data processing time. Potential solution The presence of a small number of GCaMP3-labeled neurons can be due to inefficient Cre induction/ recombination by tamoxifen administration. The dose-dependency of GCaMP3 expression should be confirmed in each experimental setting. Always prepare fresh tamoxifen solution, protecting it from light. Ensure that tamoxifen is completely dissolved to achieve a sufficient dose. Also, when injecting, leave the needle tip in the intraperitoneal space for a few seconds to avoid reflux of tamoxifen. The injected dose can be systematically increased to determine whether this solves the problem. When using different versions of GCaMP, confirm immunohistologically that Cre recombinase is expressed and that there is no cell death induced by cytotoxicity of Ca 2+ sensors in ABNs. Potential solution This problem can occur due to variations in different stereotaxic equipment or unintended changes in lens position during implantation. Confirm that the lens is oriented precisely within the hole and re-evaluate the coordinates for each experimental setting. Implanting the lens into the CA1 generates resistance, which can force the lens out of the brain. Only release the lens from the stereotaxic micromanipulator after the glue/dental cement cures; otherwise, the position of the lens may shift when released from the grip. Problem 3 The implant detaches from the mouse (steps 6-28). Potential solution Implant detachment can occur when it is not well secured to the skull. When applying several layers of different adhesives, ensure that each layer completely cures before applying the next one, as mixing them could compromise their stability. In some cases, standard black cement may compromise the longevity of the implant. One possibility for this case is to use regular light dental cement and cover it with black paint or nail polish to prevent ambient light-derived artifacts. Infection and tissue inflammation can also decrease the adhesive strength of the glue/dental cement; ensure adequate prophylactic practices during surgery and postoperative recovery to prevent this problem. In addition, do not immobilize mice by holding the baseplate, as physical stress at the implant can cause its detachment. Potential solution Ensure that the baseplate is fixed properly on the skull and that the miniscope is correctly attached to the baseplate and tightly secured in place by its screw. 
Also, low frame rates may increase the amount of motion blur, which can impair performance of the motion correction algorithm. Potential solution ''False'' fluorescence signals are usually derived from autofluorescent materials (e.g., damaged tissue or blood cells below the lens) and not from GCaMP3. To prevent these, ensure that the cortex above the target coordinates is completely aspirated and that there is no residual bleeding before implanting the lens. As lowering the lens too quickly into the brain can damage the tissue, ensure that the lens is implanted slowly and allow the tissue to settle after each 150-µm step. If necessary, retract the lens ~25 µm before each step to further alleviate pressure and prevent unnecessary damage. Problem 6 Data processing is extremely slow or impossible (Quantification and statistical analysis steps). Potential solution As sleep recordings can be >3 h long, the final compressed file can be >200 GB. To be processed, the file will need to be decompressed and spatially down-sampled (4×), resulting in a ~216 kB/frame file. However, because some MATLAB scripts require double precision, we recommend that the PC has at least four times more RAM than the down-sampled file size. When it is not possible to reduce the size of the data file by temporal and/or spatial binning, the full-length recording can be split into two shorter files. The initial processing steps can be performed separately for each file, which can later be concatenated before detecting active ABNs using batch implementation and disabling parallel computing (although this can take several hours). Problem 7 Few or no ABNs are present in the imaging file (Quantification and statistical analysis step 4). Potential solution If there are no problems with GCaMP3 expression or lens implantation coordinates, this problem can be solved by changing parameters in the CNMF-E script. Start by modifying the minimum CORR and PNR thresholds. In this case, visual inspection of cell shape and temporal dynamics of Ca2+ transients becomes more important, as the rate of false cell detection increases with less stringent parameters.
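Note: To guide this threshold adjustment, it can help to look directly at a peak-to-noise ratio image of the motion-corrected movie. The MATLAB sketch below is illustrative: the noise estimator and candidate thresholds approximate, but are not identical to, the quantities computed inside CNMF-E.

```matlab
% Sketch: estimate a PNR image from a down-sampled, motion-corrected movie
% Y (height x width x frames) and count candidate pixels at several
% min_pnr thresholds.
Y     = double(Y);
dY    = Y - mean(Y, 3);                   % fluctuation around the temporal mean
noise = median(abs(dY), 3) / 0.6745;      % robust per-pixel noise estimate (MAD)
pnr   = max(dY, [], 3) ./ (noise + eps);  % peak-to-noise ratio per pixel

for min_pnr = [4 6 8 10]
    fprintf('min_pnr = %4.1f -> %d candidate pixels\n', min_pnr, nnz(pnr > min_pnr));
end
figure; imagesc(pnr); axis image; colorbar; title('PNR image');
```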
2021-01-07T09:07:39.492Z
2020-12-31T00:00:00.000
{ "year": 2020, "sha1": "43af5dfecd26e56863069eeb04e955ba8ed9952b", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1016/j.xpro.2020.100238", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "901cf2c92fb61e458cc0df3600020854aad37233", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
253408617
pes2o/s2orc
v3-fos-license
Utilization of Scrap Metals as Reductants for Improved Ni and Cu Recoveries in Copper Smelting This study investigated a novel approach of using Al and Al–Mg scrap as heat providers and reductants that do not cause direct carbon-containing emissions in pyrometallurgical copper processing. Aluminum and magnesium are typical elements in metal wastes, such as WEEE, and they oxidize easily under copper smelting conditions. In the reduction experiments, a copper- and nickel-rich industrial slag was equilibrated under Ar gas atmosphere at 1300 °C, after which a reductant metal piece was dropped on top of the slag. The slag-reductant samples were drop quenched in brine after 2–128 min of reduction. Thermodynamic calculations were executed with MTDATA to evaluate the phase equilibria and thermochemistry of the copper slag in metallothermic reduction. All the results proved that Al and Al-5wt% Mg alloys can be used as reductants in copper processes to enhance the recoveries of nickel and copper in metal/matte. Cu concentration in slag decreased from 2 to 1.2 wt% and Ni from 1.7 to 1.2 wt% in 30 min in aluminothermic reduction experiments, despite an immediate formation of a solid alumina layer on the surface of the reductant, hindering the reduction kinetics. The heat produced was calculated as 31 kWh/ton slag or 2.1 kWh/kg added Al or Al–Mg. Introduction Copper and many other valuable metals contained in electronic scrap can be recycled efficiently in pyrometallurgical copper processes. However, the composition of the feed materials will influence the process chemistry, energy balance, and total metal recovery efficiency. If metals that burn, i.e., form oxides easily, such as Al and Mg, are present in the feed, they can behave as reductants for more noble metals and produce additional heat for the process. The Ellingham diagram in Fig. 1a includes typical copper and nickel smelting slag compounds (in liquid state) and partial pressures of oxygen between 10⁻¹⁰ and 10⁻⁶ atm (standard Gibbs energies of formation of the compounds, ΔG°f = RT ln pO2). Figure 1 is drawn based on HSC 9.4.1 software and its database [1]. Figure 1a shows that all the metals in the system, except Ni and Cu, have a tendency to form oxides in the copper smelting conditions (1300 °C, pO2 ≈ 10⁻⁸ atm). Especially MgO, Al2O3, and SiO2 are very stable oxides, and thus, the oxidation of these metals (Mg, Al, Si) will occur quickly after feeding into the furnace. Simultaneously, more noble metals will be reduced through the oxidation-reduction interaction reactions. On the other hand, an addition of Al and Mg can also reduce SiO2 to metallic silicon unintentionally. When metals are purposefully used as reductants in a reduction process, the process can be called metallothermic reduction. The most typical reductant metals are Al, Si, FeSi, Mg, Ca, and Na, which have low standard Gibbs energies of formation for their oxidic compounds. Additionally, for instance, iron is used as a reductant for NiO and MoO3 in arc furnaces [2]. Obviously, metallothermic reactions and interactions occur 'naturally' in all pyrometallurgical processes. The main metallothermic reaction can be presented as a simple anion-exchange reaction, MO + Me → MeO + M, where Me is the reductant metal and M the desired recoverable metal. When using aluminum or magnesium as reductants, the processes are called aluminothermic or magnesiothermic reduction, respectively.
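The Ellingham-type reasoning above reduces to a single comparison: at the smelting temperature, an oxide is stable, and the corresponding metal therefore oxidizes and can act as a reductant, if its standard Gibbs energy of formation per mole of O2 lies below the oxygen potential RT ln pO2. A minimal sketch of this check is given below; the ΔG°f value is a user-supplied placeholder to be taken from a thermodynamic database such as HSC, not a constant quoted from this work.

```matlab
% Oxide stability check at copper smelting conditions (sketch).
R   = 8.314;                    % J/(mol K)
T   = 1300 + 273.15;            % smelting temperature, K
pO2 = 1e-8;                     % typical smelting oxygen partial pressure, atm

muO2 = R * T * log(pO2) / 1e3;  % oxygen potential RT*ln(pO2), kJ/mol O2 (about -241)
dGf  = -700;                    % PLACEHOLDER: dG_f of a candidate oxide, kJ/mol O2

if dGf < muO2
    fprintf('dG_f (%.0f) < RT ln pO2 (%.0f): oxide forms, metal acts as reductant\n', dGf, muO2);
else
    fprintf('dG_f (%.0f) > RT ln pO2 (%.0f): metal remains metallic\n', dGf, muO2);
end
```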
The Ellingham diagram for the main aluminothermic reductions of Cu 2 O, NiO, FeO, and SiO 2 is presented in Fig. 1b and shows that all oxides can be reduced by aluminum. In metallothermic reduction, Al is added in metal form, but also Al 2 O 3 , CaO, and MgO additions in feed will influence the slag chemistry and metals behavior as shown in multiple previous studies [3][4][5]. When scrap metals are used as reductants, no direct carbon dioxide emissions are produced during the reduction process, contrary to the more typical carbon-based reductants, including metallurgical coke, used in several metallurgical processes. Additionally, Al-containing non-ferrous scrap used as a raw material and reductant would promote a sustainable use of waste materials and carbon-free processing with lowered external energy need in smelting. Moreover, energy and electricity savings on mechanical recycling plants can be achieved if less pre-treated material feed can be used in the smelting process. This less pre-treated secondary feed can, furthermore, improve recoveries of valuable metals that would otherwise mistakenly end in the wrong scrap fractions or 'disappear' in the mechanical recycling circuit [6]. The mechanically treated copper-rich scrap fraction contains multiple elements, especially non-ferrous precious metals, Al, Zn, Sn, and Pb, but also for example iron that will be fed in the copper smelter as secondary raw material. The elements influence and behave differently in the prevailing process conditions, depending on their chemical, physical, and thermodynamic properties. The concept of employing certain metals, such as base metals Al and Fe, for pre-reduction and heat production in copper smelting is rather new. Study by Heo et al. [7] focused on the aluminothermic process for copper smelting slag to remove hazardous elements (As, Bi, Pb, and Sb) and to recover iron. The experiments were conducted employing a rod-dipping technique from iron metal-copper smelting slag system with an addition of different ratios of Al/FeO shots under an inert atmosphere at 1500 °C. The presented reaction mechanisms showed how the oxides in the slag, especially FeO, firstly reacted with Al and produced an irregular Al 2 O 3 oxide layer on the interface of an Al particle, simultaneously reducing Fe droplets. Spinel phase was precipitated due to crucible corrosion and interaction with alumina layer, whereas further from the Al particle, olivine was shown to be decomposed, at least, at the crucible wall. This can simultaneously re-oxidize some of the Fe particles back into FeO. The time interval for all this to occur was extremely short, within 5 min according to their observations. Rinne et al. [8] studied aluminum-rich battery scrap as a reducing agent for industrial copper smelting slag at 1300 °C under Ar. They found that Cu concentration in slag decreased 0.3 wt% after 60 min reduction period and that many hazardous metals were removed efficiently from the slag. Other studies on the aluminothermic reduction related to steel/iron production can be found quite extensively [e.g., [9][10][11], as well as studies on the recovery of different metals from wastes, such as red mud and chromite concentrate, by Al and various ferroalloys (FeAlSi, FeMo etc.) [e.g., 11,12]. Heo et al. 
have quite extensively investigated the metallothermic reduction in the sense of Fe recovery from electric arc furnace slag in steelmaking by Al [9], Al dross [13] or Al-C composite pellets [14] at 1500-1550 °C in 3/2FeO(l) + Al(l) = 3/2Fe(l) + 1/2Al2O3(l) 3/2NiO(l) + Al(l) = 3/2Ni(l) + 1/2Al2O3(l) 3/2Cu2O(l) + Al(l) = 3Cu(l) + 1/2Al2O3(l) 1.5/2SiO2(l) + Al(l) = 1.5/2Si(l) + 1/2Al2O3(l) Fig. 1 a The Ellingham diagram of the main compounds in copper smelting conditions. The black-dotted trend lines represent the partial pressures of oxygen from 10 −10 atm (lowest line) to 10 −6 atm (highest line). b The Ellingham diagram for aluminothermic reduction reactions for Ni, Cu, Si, and Fe oxides as a function of temperature magnesia crucibles and the influence of Al/FeO ratio and fluxes (CaO, MnO) on Fe recovery. Sun and Mori [15] investigated the oxidation rate of aluminum in molten Fe(-Al)-CaO-SiO 2 -Al 2 O 3 -FeO-MnO system in alumina crucibles at 1600 °C under argon atmosphere. This study employed experimental and computational techniques to investigate the influence of typical non-ferrous metals present in WEEE, Al, and Mg, on the thermochemistry and valuable metal behavior in copper flash smelting/direct-to-blister/converting process conditions. Kinetic experiments and thermodynamic calculations using MTDATA software [16] were employed to investigate the metallothermic reduction produced by pure Al and Al-Mg 5056 alloy in copper smelting process at 1300 °C. Materials The slag investigated was industrial 'non-cleaned' copper smelting slag with an addition of copper and nickel oxides. The initial industrial slag composition is shown in Table 1a. Copper and nickel were noticed to exist as mechanical segregations (matte droplets in slag) instead of being chemically dissolved in the industrial slag. Thus, additional Cu 2 O and NiO were mixed with the slag in order to follow the reduction kinetics and efficiency. The added amounts corresponded to typical Cu and Ni concentrations in slag before slag cleaning. The cuprite (Cu 2 O) employed was prepared by oxidizing pure A-grade cathode copper (Boliden Harjavalta Oy) pellets in air at 1025 °C for 120 h in a chamber furnace. An estimated purity of 99.99% was achieved [28]. The nickel monoxide (NiO) employed was commercial grade with purity of 99.99% supplied by Sigma-Aldrich (Merck KGaA, Darmstadt, Germany). The industrial slag was grinded, further mixed, and homogenized with the NiO and Cu 2 O reagents in an agate mortar. NiO and Cu 2 O additions in the slag were 2.6 wt% and 2.3 wt% (2 wt% Cu and Ni), respectively. The details of the reductant metals employed in the experiments are presented in Table 1b. The inert gas used in the experiments was argon supplied by Linde plc (Germany, 99.999 vol% purity). Cone-shaped silica crucibles employed in the experiments were produced from Heraeus HSQ®300 glass (fused quartz with purity of > 99.998%) supplied by Finnish Special Glass (Espoo, Finland). The crucibles were 25 mm wide and 15 mm high. The slag weight in the experiments was 2.0 g and the added reductant material 0.03-0.033 g (~ 1.5 wt% of sample weight) per experiment. This was calculated with MTDATA to be adequate Al-reductant addition to reduce all solid magnetite in the industrial slag. 
Experimental Equipment The experimental furnace type was LTF 16/50/450 supplied by Lenton (Hope Valley, UK) using Eurotherm controller Experimental Procedure and Analytical Techniques Metallothermic reduction experiments were executed by dropping an Al or Al-Mg piece on top of equilibrated slag. This was considered to simulate flash smelting furnace conditions when Al-containing scrap material is fed via concentrate burner or alternatively directly dropped to the furnace and it hits on top of the slag surface. Table 2. The samples were prepared for analyses employing wet metallographic techniques. The samples were imaged, and preliminary phase compositions were analyzed by employing SEM-EDS (Tescan MIRA 3; Brno, Czech Republic) equipped with an UltraDry Silicon Drift Energy Dispersive X-ray Spectrometer (EDS) supplied by Thermo Fisher Scientific (Waltham, MA, USA) at Aalto University. The imaging was executed by BSE detector employing a systematic approach, meaning that the same imaging settings were used for all the samples. These settings were WD: 20.00 mm, SEM HV: 15 kV, SEM Mag: × 113. Also, the brightness and contrast were kept same for all the samples. Different phases in the specimen display different brightness in the black and white BSE images based on the atomic number of the elements present. This means that higher brightness indicates a higher average atomic number, i.e., heavier elements/phases and lower brightness lighter elements/phases. This provides an opportunity to follow the progress of the reduction when systematically imaging the samples. SEM-EDS phase composition analyses were executed as the preliminary analysis technique for the aluminum and aluminum-magnesium reduction experiments, and primary analyses were executed with EPMA at the Geological Survey of Finland (GTK). The EPMA analyses were conducted from the polished cross sections by an SX100 (Cameca SAS, France) microprobe equipped with five wavelength dispersive spectrometers (WDS). Systematical approach also with EPMA was necessary in order to achieve comparable results and to understand the reduction kinetics and progress. The acceleration voltage used for the EPMA analysis was 20 kV and the emission current was 60 nA. The slag was analyzed mainly as area analyses with a 100 µm defocused beam, except for the slag composition profile measurements, where focused beam settings were used. For the matte, area analyses with 50-100 µm beam were employed and for the reductant material, the beam size was varied from 1 to 100 µm with the aim to analyze all different phases present. 6 analyses were taken from the slag phase, 3-5 analyses were taken from the matte and speiss phases. The standard materials used were natural minerals and synthetic metals: O K α (hematite), Na K α (tugtupite), Mg K α and Ca K α (diopside), Al K α (Al 2 O 3 ), Si K α (quartz), S K α (pentlandite), K K α (sanidine), Cr K α (chromite), Fe K α (hematite), Co K α ja As K β (cobaltite), Ni K α (Ni), Cu K α (Cu), Zn K α (ZnS), Sb L α (SbTe), Pb L β (PbS), Bi L α (BiSe), and Mo L α (Mo) ja [29] and normalized to 100%. The detection limits achieved for each element in each phase are presented in the supplementary material (Table S1). Heat Balance Calculations The energy balance calculations were executed with MTDATA and its TCFE, MTOX, and MTOXGAS databases [16] to evaluate the heat produced and the adiabatic temperature by metallothermic reduction of Al and Al-Mg in copper smelting conditions. 
The pure Al-Mg alloy included FCC Al-rich phase with intermetallic Al-Mg solid phase at 25 °C. The system was defined to correspond to the experimental compositions, despite simplifying it to a pure iron-silicate slag containing copper and nickel oxides. The 'start' composition for slag was Al/Al-Mg-free slag and 'end' composition contained 1.45 wt% Al/Al-Mg. The oxygen partial pressure in the system without the addition of Al or Al-Mg was 10 -7.1 atm (start composition), whereas the addition of Al or Al-Mg decreased the pO 2 down to 10 -9.1 atm (end composition). No differences in the phase compositions existed between different reductants, except small MgO concentration increase in slag (max 0.15 wt% MgO) with Al-Mg reductant. The liquid metal phase started to form when 0.25 wt% of reductant was added in the slag system. Although alumina concentration in slag was maximum 3 wt% and MgO 0.15 wt%, nickel and copper oxide concentrations decreased significantly from their original values (2 wt% Cu and Ni each in slag). With the highest Al and Al-Mg additions (1.45 wt% in the system), Ni and Cu concentrations in slag decreased to 1 and 0.6 wt%, respectively. The heat produced at constant total pressure and temperature by the addition of Al or Al-Mg in the copper smelting slag was calculated as follows: where the enthalpy change for slag at 1300 °C is and the heating enthalpy of the reductant metal is Table 3 collects the numerical data used for the heat calculations and provides the heat results. The results show that the addition of Al or Al-Mg in the copper smelting slag is an exothermic process, and it produces heat of 31 kWh/ton slag or 2.1 kWh/kg added Al or Al-Mg. The adiabatic temperature was calculated and for 1.45 wt% Al concentration in slag, the temperature increased to 1385 °C (see Fig. 2). Heo et al. [7] computationally calculated (Factsage) temperature increase of 350-800 °C produced by addition of Al/FeO shots in iron-copper smelting slag system. Results The sample imaging and phase composition analyses were carried out and examined systematically in order to investigate the reduction progress. Normalized results were employed, and uncertainties were calculated as standard Figs. 3 and 4, respectively. The visual evolution of the metallothermic reduction can be seen in the figures, and dramatical changes clearly occurred between 30 and 60 min. As aluminum and Al-magnesium alloy are less dense (densities for liquid Al 2.21 g/cm 3 and Al-5wt% Mg 2.15 g/ cm 3 at 1300 °C calculated by Factsage© FTLite database) than slag (3.3-4.1 g/cm 3 [30]) and matte (3.9-5.2 g/cm 3 [30]), the reductant metal pieces dropped on top of the slag remained floating on top of the slag layer. Even after 120 min, the highly reacted Al and Al-Mg pieces showed a tendency to stay close to the upper edge of the slag, which might indicate high viscosity and/or influence of surface and interfacial tensions of the system. Instead, the more dense and heavier matte phase was found at the bottom of the crucible. The following chapters examine more profoundly the compositional results of matte/speiss ("Matte and Speiss Phases"), slag ("Slag"), as well as the reductant metal and its surroundings ("Reductants"). Matte and Speiss Phases The microstructures image series for matte, taken by SEM-BSE with × 113 magnification, are shown in Figs. 5 and 6. 
One main matte droplet was settled to the bottom of the crucible in all experiments, except OT19 (4 min reduction by Al) which had two separate matte droplets (the bigger one included in the figure). The formed matte clearly included two phase compositions and structures, as shown in Figs. 5 and 6. In the cross section of sample OT20 (8 min reduction by Al), only one phase structure was visible (speiss). Both phase compositions in matte were measured with EPMA (3-5 area analysis on both phases), and the results showed that the brighter phase is indicated to speiss type of composition and the gray phase to white metal (Cu 2 S) composition. The relatively high arsenic and antimony concentrations revealed that the heavier matte phase was a speiss type of intermetallic phase. Speiss in non-ferrous metal production is a complex mixture of copper, nickel, iron, and/or silver as arsenides and antimonides that can also contain some sulfur and lead [31][32][33][34][35]. According to Figs. 5 and 6, it seemed that the overall matte droplet size increased, and the amount of brighter phase (speiss) increased as the reduction time increased. Note that the cross sections presented were chosen randomly and will most likely vary to some extent throughout the samples, indicating that especially matte droplet size and the white metal/speiss ratio will in reality vary in the samples. Figure 7 presents the main element results (Cu, S, and Ni) for the matte phase with both reductants. The uncertainties (± 1σ) are included for the Al-reduction results, but not for the Al-Mg-reduction results as they were smaller than the used symbol for every sample. The zero-time interval results (OT17 sample) were achieved by equilibrating the slag for 120 min in inert gas atmosphere without metal reductant addition. The analytical results for the speiss phase are presented in Fig. 8. Only uncertainties for nickel and copper were included in the figure, because the uncertainties of other elements (S, As and Sb) were smaller than their symbols. Traces of Bi (0.03-0.3 wt%) and Pb (˜0.1 wt%) were measured in the samples. Cobalt and zinc concentrations were close to their detection limits, around 100-300 ppmw. Although the white metal composition stayed constant as a function of time, the composition of the speiss phase changed considerably as a function of time. Figure 8 shows that nickel concentration in speiss first increased close to 60 wt% and after 5 min started to decrease and stabilized at around 30 wt% in 60 min. Copper instead first decreased down to 25 wt% followed by an increase to 50 wt%. Arsenic showed small increase for the first 5 min, after which it started to decrease as a function of time and stabilized at 5 wt%. Sulfur and antimony concentrations remained relatively constant throughout the investigated time period. The speiss phase did not have homogeneous phase structure, but more as a net-type pattern with different mixtures of phases. Thus, as the analyses were conducted with a defocused beam (50-100 µm), the speiss composition results are more suggestive and contain more variation than the white metal results (Fig. 7), probably due to the low melting point of the As-Fe system [36]. In addition to the literature dealing with speiss in non-ferrous metal processing [31][32][33][34][35], studies on phase equilibrium and phase diagrams on Cu-Ni-As/Sb systems exist. Uhland et al. 
[37] calculated phase diagrams for Cu-As-Ni alloys and showed that these three elements have great tendencies to form multiple different phase assemblies depending on temperature and their concentrations in the system. Itakagi et al. [36] studied ternaries of the As-Cu/Fe-S system at 1150 °C including phase boundaries with different Cu/Fe ratios. Their results show how a miscibility gap appears into the system producing copper matte and low-sulfur speiss in the presence of As. The diagram also presents how the Cu/ Fe ratio influences the miscibility gap and the compositions of the matte and speiss. In the present study, the experiments were conducted for white metal smelting slag where iron concentration in matte was very low (around 1 wt%), see Fig. 7. In general, arsenic has a high tendency to volatilize in the copper smelting and converting conditions, and its distribution behavior and activities have been broadly studied the in matte-metal-slag-gas(-dust) systems, e.g., [38][39][40][41][42][43]. High arsenic concentration is not permitted in the blister copper and it has maximum limit determined; thus, its removal is critical and its accumulation in the flue dust that is recycled into smelting circuit should be avoided. Additionally, speiss formation in, e.g., flash smelting furnace can have detrimental effects on furnace lining and bottom integrity due to its infiltration tendency in the refractories [44,45]. Slag Slag was homogeneous, i.e., well quenched in all the samples, as can be seen from the micrographs in Figs. 3, 4, 5, and 6. Overall, only some small brighter dispersions were visible locally in the samples, and in some samples close to the reductant metal, slag seemed visually somewhat different (darker in BSE figures). The slag was systematically analyzed from different parts of the sample (3 separate locations) in order to investigate the overall reduction of the slag and to determine the interaction area of the reductant metal. The slag results in Figs. 9 and 10 are the calculated averages and standard deviations from the analyses of two different locations: close to the matte (3 analysis) and opposite corner from the Al piece and matte (3 analyses). Only alumina concentration was varying between these locations and had greater standard deviation when compared to other components and elements, see Fig. 9. Third analysis location for slags was close to the reductant and its interaction volume, but these results were not included in the result Figs. 9 and 10. Iron and the stable oxide results for slag are presented in Fig. 9. The uncertainties (± 1σ) were added only for iron, silica, and alumina as the standard deviations for the other components were smaller than their symbols. In the Alreductant experiments, most of the elements/oxide concentrations remained constant as a function of time, only alumina and silica increased as time increased, whereas Na 2 O seemed to decrease slightly. For the Al-Mg experimental series, all the components in Fig. 9 stayed constant as a function of time. In general, the reductant metal did not influence the slag composition, even the MgO concentration was equal for both series. Alumina concentration in slag was dependent on the location at 64 and 128 min experiments, and thus, presents larger uncertainties in Fig. 9. Copper, nickel, zinc, and other minor metal results are presented in Fig. 10. The uncertainties (± 1σ) are shown for Ni, Cu, and Zn. The presented 'other metals' include sum of As, Sb, Pb, and Bi concentrations. 
Sulfur and chromium were also detected in the slags at levels of 100-300 ppmw each, and traces of molybdenum were measured in some samples (0.1-0.3 wt%). Up to 30 min, the slag composition remained relatively constant in the Al-reductant experiments, after which the Cu concentration dropped from 2 to 1.2 wt%, Ni from 1.7 to 1.2 wt%, and Zn from 1 to 0.5 wt%. In the Al-Mg-reduction experiments, the zinc concentration seemed to stay constant throughout the entire time range investigated, and the decrease for Ni and Cu was (possibly) not as pronounced as in the Al-reductant experiments. Clear reaction zones, where the slag was visually and compositionally different, existed in the slag close to the reductant piece. This zone enlarged with increasing reduction time. Slag composition profiles were measured, and they showed that the aluminum concentration was highest close to the reductant and decreased with increasing distance, whereas the nickel and copper concentrations were almost zero close to the reductant, indicating that they had been effectively reduced from the reaction zone. The distribution coefficients of copper and nickel between the different phases (speiss, matte, and slag) as a function of reduction time are presented in the supplementary material (Fig. S1). The distribution coefficients between matte and slag (L m/s) increased as a function of time from 35 to 60 and from 3.5 to 5-7 for copper and nickel, respectively. Nickel was enriched in the speiss, with distribution coefficient L speiss/m values between 4 and 8, whereas copper was enriched in the matte phase, with L speiss/m values between 0.2 and 0.7. The distribution coefficient of As between speiss and matte was between 10 and 40, with no clear dependence on reduction time. Compared with the reduction studies of DON slag by a gas reductant (methane), battery scrap rich in graphite, biochar, and coke [24,25,27], the metallothermic reduction used in this study seems to have slower reduction kinetics and to achieve lower distribution coefficients between matte and slag. The composition (Al, Fe, Si, Cu, and Ni concentrations) of the reductants and of the darker crust formed around the reductant are shown in Fig. 13 (and Tables S2 and S3) up to 32 min, after which the reductant metal pieces had decomposed into entirely different compositions and phases. As seen in the micrograph series and composition results (Fig. 13 and supplementary material), several solid and liquid phases formed, were reduced, and decomposed during the reduction. The reaction interface appeared very irregular, and reduced metal alloys (brighter, i.e., heavier phases) started to appear immediately after the reductant metal addition; their amounts increased as the reduction proceeded. The reductant compositions stayed rather constant up to 32 min, see Fig. 13a and b. Already at 2 min, 4-5 wt% Fe and Si had dissolved in the Al reductant, as well as 0.6 wt% Ni and 2 wt% Cu. The Ni and Cu concentrations in the Al reductant doubled in four minutes. The Al-Mg reductant dissolved two or three times more Fe, Cu, and Ni than the pure Al reductant (Table S3), as shown in Fig. 13b. In the 64 min Al-reductant experiment, the reductant material had approximately the composition Ni(Al,Fe)4(Cu,O)4.5 (22.5 mol% O, 22.5 mol% Cu, 20 mol% Fe, 20 mol% Al, and 10 mol% Ni). A similar phase composition was measured at one analysis spot in the 60 min Al-Mg-reduction experiment.
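As a small illustration of how the phase-composition averages and the distribution coefficients quoted above are derived from repeated EPMA analyses, the sketch below computes a mean concentration per phase (with its ±1σ spread) and the matte/slag distribution coefficient. The numerical values and the helper function name are hypothetical placeholders, not the measured data of this work.

```python
import statistics

def distribution_coefficient(phase_a_wtpct, phase_b_wtpct):
    """Ratio of the mean concentration of an element in phase A to that in phase B."""
    return statistics.mean(phase_a_wtpct) / statistics.mean(phase_b_wtpct)

# Hypothetical EPMA area analyses (wt% Cu) from one quenched sample.
cu_matte = [68.2, 67.5, 69.0]   # three analyses of the matte phase
cu_slag = [1.9, 2.1, 2.0]       # three analyses of the slag phase

L_ms_cu = distribution_coefficient(cu_matte, cu_slag)
print(f"Cu in matte: {statistics.mean(cu_matte):.1f} +/- {statistics.stdev(cu_matte):.1f} wt%")
print(f"L(matte/slag) for Cu: {L_ms_cu:.0f}")
```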
Nevertheless, in the 128 min Al-reduction and 60 min Al-Mg-reduction experiments, the reductant had reacted to a composition close to the mullite structure. Magnesium dissolved from the reductant quickly and was not detected in any of the analyses conducted on the Al-Mg reductant pieces. If the Al and Al-Mg reductants are compared visually, it seems that the Mg-containing reductant reacted faster with the slag than the pure Al reductant. Nevertheless, both reductants were still present in the experiments up to 30 min and were severely degraded in the 60 min experiments. The crust formed around the reductant, typically between 20 and 60 µm thick, consisted of relatively pure Al2O3 with traces of other metals, as shown in Fig. 13c. In the first 4 min, the concentrations of Si, Fe, Cu, and Ni increased in the crust, after which they decreased, and the crust composition stayed relatively unchanged between 8 and 32 min.

Phase Assembly

Phase equilibrium of an Al-SiO2-FeO-Cu2O-NiO-O2 system was computationally examined to confirm and clarify all the possible phases present during the aluminothermic reduction at 1300 °C and pO2 = 10^-8 atm. Figure 14 presents the ternary diagram of the Al-SiO2-FeO system with constant Cu2O = 2.3 wt% and NiO = 2.6 wt%, calculated employing the MTDATA software and its MTOX and SGTE_SOL databases [16]. The oxygen amount in the system was defined to show the phase equilibrium with the gas phase from the SP + COR + MUL + LIQ phase assembly onwards towards the SiO2-FeO quasibinary, but without the gas phase towards the Al corner. This enabled prediction of all the possible phases, from metallic aluminum to the slag, during the reduction. The phase diagram shows that a liquid metal alloy and metal solids with FCC and BCC structures (Fe(s)) can form during the aluminothermic reduction. Additionally, Fig. 14 indicates the typical solid oxides present (Tables S2 and S3).

Kinetic Analysis

Various rate-controlling steps in an aluminothermic reduction have been considered: a chemical reaction, a penetration of molten aluminum into the slag phase, and diffusional transfer in the slag phase [7,8]. The reductant-slag interface was irregular and unstable, and spontaneous exothermic reactions at the interface caused dynamic condensed-phase flow and temperature imbalance (Marangoni flow). As shown, multiple different phases were visible and measured at the reaction interface, which made it impossible to determine the reaction area and volume. Additionally, the slag analyses focused on investigating the overall aluminothermic reduction of the slag further away from the interaction area. Moreover, the combination of matte and speiss as the metallic phase(s), as well as the various phases in the aluminum reductant, made it difficult to analyze the composition change of the metallic phase(s) as a function of time. Due to these characteristics, the apparent reaction rate equation based on mass transfer was employed for the kinetic analysis of the slag phase, as in a previous study [7]:

-ln[(C_MO)_t / (C_MO)_0] = k_MO^App · t, with f(t) = C_MO, (5)

where the function f(t) was determined by finding the best-fitting lines for the experimental results of C_MO (wt% of NiO and Cu2O calculated from the EPMA results) as a function of time, (C_MO)_0 and (C_MO)_t were the metal oxide concentrations in the slag initially and at time t, respectively, and k_MO^App was the apparent mass transfer coefficient of the metal oxide. The fitted lines and their details are presented in Fig. 15. These fitted lines were employed to evaluate the apparent mass transfer coefficients of copper and nickel oxides.
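To make the fitting step concrete, the sketch below estimates an apparent mass transfer coefficient by linear regression of -ln(C_t/C_0) against time, i.e., under the first-order mass-transfer assumption written above. The concentration values are hypothetical placeholders, not the EPMA results of this study.

```python
import numpy as np

# Hypothetical slag concentrations of one metal oxide (wt%) at the sampled reduction times (min).
t_min = np.array([0.0, 32.0, 48.0, 64.0])
c_mo = np.array([2.0, 2.0, 1.6, 1.2])

# First-order mass-transfer assumption: -ln(C_t / C_0) = k_app * t
y = -np.log(c_mo / c_mo[0])

# The least-squares slope of y versus t gives the apparent mass transfer coefficient.
k_app, intercept = np.polyfit(t_min, y, 1)
print(f"apparent mass transfer coefficient k_app = {k_app:.3f} 1/min")
```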
The natural logarithms of the NiO and Cu2O concentration changes as a function of time are presented in Fig. 16, from which the apparent mass transfer coefficients can be determined from the slopes of the curves. The blue lines are calculated purely from the values of the fitted lines in Fig. 15, the black lines employ the measured C0 values with the fitted Ct values, and the orange symbols present the results based on the experimental EPMA measurements. The concentration values of Ni and Cu in the slag based on the EPMA analyses stayed constant until 32 min, indicating that no mass transfer occurred into these measured slag areas (further from the slag-reductant interface) at these stages. In the first stage(s), the reduction of ferric to ferrous iron (Fe3+ → Fe2+) is expected to occur in the industrial copper and nickel smelting slags. Thus, the apparent mass transfer coefficients could be evaluated from the experimental results only between 32 and 64 min, being 0.10 and 0.06 min−1 for Cu2O and NiO, respectively. The apparent mass transfer coefficients of nickel oxide, determined from the slopes of the curves for −ln(C0_NiO − Ct_NiO) (fitted) and −ln(C0_NiO measured − Ct_NiO fitted) between 16-40 min and 25-45 min, were 0.10 min−1. For copper oxide, the apparent mass transfer coefficients determined from the slopes of the curves for −ln(C0_Cu2O − Ct_Cu2O) (fitted) and −ln(C0_Cu2O measured − Ct_Cu2O fitted) between 16-40 min and 30-40 min were 0.22 and 0.26 min−1, respectively. More data from different times (especially between 30 and 60 min) and from the interfacial area are needed to determine the mass transfer coefficients and other kinetic parameters properly. Additionally, longer-time experiments, where equilibrium with the reductant has been reached, with and without stirring, would bring valuable information for aluminothermic reduction in copper smelting processes.

Conclusions

WEEE is the fastest-growing waste stream in the world, with an annual growth rate of 3-6%, and the estimated amount is already over 50 Mtons/year. In addition to being a valuable secondary metal resource, WEEE fractions can be used for other functional purposes, such as fuels or metallothermic reductants. This work investigated experimentally and computationally the pre-reduction potential of Al and Al-Mg alloy in pyrometallurgical copper processing. Metallothermic reduction experiments were executed as a function of time (2-128 min) under an inert Ar gas atmosphere at 1300 °C, employing a drop-quenching technique followed by SEM-BSE imaging and EPMA. Additionally, the MTDATA software with the MTOX, TCFE and SGTE_SOL databases was employed to calculate the phase equilibrium and heat content data of the investigated system and to forecast the phases present during the aluminothermic reduction. Based on the computational and experimental results, a 1.5 wt% addition (or even less) of Al or Al-Mg is enough to improve the recoveries of Cu and Ni into the matte and metal phases, i.e., to serve for metallothermic reduction of Ni, Cu, and other more noble metals in copper smelting processes. For an efficient process, the reductant metals will need proper stirring and turbulent conditions in the furnace, as a solid corundum crust rapidly forms around the reductant pieces on the slag surface, as well as process monitoring to control possible accretions or other solid formations.
Hypocoercive estimates on foliations and velocity spherical Brownian motion

By further developing the generalized $\Gamma$-calculus for hypoelliptic operators, we prove hypocoercive estimates for a large class of Kolmogorov type operators which are defined on not necessarily totally geodesic Riemannian foliations. We then study in detail the example of the velocity spherical Brownian motion, whose generator is a step-3 generating hypoelliptic Hörmander's type operator. To prove hypocoercivity in that case, the key point is to show the existence of a convenient Riemannian foliation associated to the diffusion. We will then deduce, under suitable geometric conditions, the convergence to equilibrium of the diffusion in $H^1$ and in $L^2$.

1. Introduction. Let L be a hypoelliptic Kolmogorov type diffusion operator on a smooth and connected manifold M, which admits an invariant probability measure µ. We are interested in the problem of exponential convergence to equilibrium for the semigroup e tL . To address this problem, several tools have been developed in the last few years. A functional analytic approach, based on previous ideas by Kohn and Hörmander, relies on spectral localization tools to prove exponential convergence to equilibrium with explicit bounds on the rate. For this approach, we refer to Eckmann and Hairer [11], Hérau and Nier [18], and Helffer and Nier [20]. On the other side, L. Wu in [30], Mattingly, Stuart and Higham in [23], Bakry, Cattiaux and Guillin in [2], Talay [25] and F.Y. Wang [29] use Lyapunov functions and probabilistic tools to prove exponential convergence to equilibrium in several cases. A fundamental contribution, closer to the approach of the present paper, is due to Villani [27], who introduced in his memoir the important notion of hypocoercivity (see also Dolbeault, Mouhot and Schmeiser [10]). Villani's theory has since been revisited; in what follows, ∇ denotes the Riemannian gradient for the metric g. Let us denote by Γ the carré du champ operator of L. It is proved in [4] that if there exist positive constants K 1 , K 2 such that (1) holds for every f ∈ C ∞ 0 (M), and if µ satisfies a Poincaré inequality, then e tL converges exponentially fast to equilibrium in H 1 (µ). The rate of convergence may moreover be estimated explicitly in terms of K 1 , K 2 and λ (see [4]). Finding general intrinsic conditions on L so that there exists a metric g satisfying (1) seems to be a very difficult open problem. Theorem 18 in Villani [27] (see also Monmarché [24]) may actually be interpreted as providing such sufficient conditions and gives a way to explicitly construct the metric g in some cases. Another, more geometric, approach is taken in [5] to construct g. In [5], one considers totally geodesic Riemannian foliations whose leaves are determined by the principal symbol of L. The inequality (1) is then equivalent to bounds on simple Ricci-like tensors associated to the foliation.

Non-totally geodesic foliations

It turns out that for several interesting diffusion operators L, including the generator of the velocity spherical Brownian motion which is thoroughly studied in the present paper, it does not seem possible to find a totally geodesic foliation satisfying (1). However, we will exhibit a non-totally geodesic foliation for which this works. For this reason, it is interesting to generalize the results of [5] to the case where the foliation is not totally geodesic. This is what we do in Section 2 of the present paper.
In our main result, we give a tensorial expression of T 2 , from which one can easily deduce sufficient conditions ensuring that (1) is satisfied. In the non-totally geodesic case, the Bochner type identities are much more involved and an important tool to prove the tensorial expression of T 2 is to work with a connection introduced by Hladky in [21].

Velocity spherical Brownian motion

In Section 3 we study the convergence to equilibrium, and prove hypocoercivity, for a class of diffusions called Velocity Spherical Brownian Motions. The results of Section 2 cannot be applied directly, because the main problem here is precisely to find a metric for which a generalized Bakry-Émery estimate is satisfied. The velocity spherical Brownian motion is a diffusion process valued in T 1 M, the unit tangent bundle of an n-dimensional Riemannian manifold M of finite volume, and was originally introduced in Angst-Bailleul-Tardif [1] under the name kinetic Brownian motion. It is a velocity/position process (T 1 M is seen as a phase space) where the velocities live on the tangent sphere and have Brownian dynamics. The generator takes the form L = (σ 2 /2) ∆ v + κ ξ, where σ and κ are parameters, ∆ v is the vertical Laplacian on the spherical fibers and ξ is the vector field of the unit tangent bundle generating the geodesic flow. The diffusion L is a hypoelliptic Kolmogorov type diffusion operator on the manifold T 1 M. For example, when M = R n and σ = κ = 1, it may be identified with the process (x 0 + ∫ 0 t θ s ds, θ t ) t≥0 , where (θ t ) t≥0 is a Brownian motion on the (n − 1)-dimensional sphere seen as a submanifold of R n . The velocity spherical Brownian motion may be seen as the Riemannian counterpart of the relativistic diffusion introduced by Franchi and Le Jan in [13]. An interesting property of the motion is that it interpolates between the geodesics (when σ = 0, κ = 1) and the Brownian motion on M (when κ = σ goes to infinity), see [1] but also [22] for this homogenization result. It is also close to Bismut's hypoelliptic Laplacian [9], which lives on T M and where the velocities have the dynamics of an Ornstein-Uhlenbeck process on the fiber instead of a spherical Brownian motion. The velocity spherical Brownian motion is also a natural Riemannian generalization, with zero potential, of the spherical Langevin process which is studied for example in [17], [16] and [15] (see also references therein). The latter process arises in industrial applications as the so-called fiber lay-down process in modeling virtual nonwoven webs. The hypocoercivity of the spherical Langevin process in R n has been proved in [15], extending the abstract Hilbert space strategy developed by Dolbeault, Mouhot and Schmeiser in [10]. Nevertheless, quoting [15], "it is an open problem to apply the methods from Villani's memoir [27] to the spherical velocity Langevin equation". Indeed, as remarked in [15], Villani's approach seems to be, a priori, not adapted due to the geometry of the Lie bracket relations between the vector fields occurring in the decomposition of L. More precisely, we look at a generator L which is locally of the form L = A 1 2 + · · · + A n−1 2 + B, where, to get all the directions in the tangent space (that is, Hörmander's condition of hypoellipticity), we need to consider A i , [A i , B] for i = 1, . . . , n − 1, and the last direction B is obtained via a third-order bracket [A i , [A i , B]] (for some i). It is not possible to get all the directions considering only brackets of the form [ · · · [C 2 , [C 1 , B]B] · · · B], as is needed for Villani's method (see his memoir for the notations).
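To give a concrete picture of the process described above in the flat case M = R^n, the following minimal sketch simulates one sample path with a crude Euler scheme: the velocity is approximated by a Brownian motion on the unit sphere (tangential noise plus renormalization) and the position integrates the velocity. All parameter values and the function name are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def simulate_velocity_spherical_bm(n=3, sigma=1.0, kappa=1.0, dt=1e-3, steps=10_000, seed=0):
    """Crude Euler scheme: velocity ~ Brownian motion on S^{n-1}, position integrates the velocity."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n)              # position in R^n
    v = np.zeros(n)
    v[0] = 1.0                   # initial unit velocity
    path = [x.copy()]
    for _ in range(steps):
        dw = sigma * np.sqrt(dt) * rng.standard_normal(n)
        dw_tan = dw - np.dot(dw, v) * v        # project the noise onto the tangent space at v
        v = v + dw_tan
        v /= np.linalg.norm(v)                 # renormalize to stay on the unit sphere
        x = x + kappa * v * dt                 # position driven by the velocity
        path.append(x.copy())
    return np.array(path)

print(simulate_velocity_spherical_bm()[-1])    # terminal position of one sample path
```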
Nevertheless, we will be able to extend Villani's methodology to construct a metric g on T 1 M for which a generalized Bakry-Émery estimate is satisfied. Note that T 1 M is endowed with a natural Riemannian metric, the so-called Sasaki metric, which is obtained from the metric given on M (denoted by ⟨·, ·⟩). Recall that each tangent space T (x,u) (T 1 M) splits into a direct sum V (x,u) ⊕ H (x,u) of vertical and horizontal tangent vectors (the vertical space is the kernel of dπ : T (T 1 M) → T M and the horizontal space is defined via the Levi-Civita connection of M). Any X ∈ T (x,u) (T 1 M) can be written as the sum V V + U H , where U, V ∈ T x M (but depending on (x, u)), V V is the vertical lift of V and U H is the horizontal lift of U. Then, by definition, the Sasaki metric is given by ‖X‖ 2 := ‖U‖ 2 + ‖V‖ 2 and makes the vertical and horizontal spaces orthogonal. The metric g on T 1 M that we consider to find a generalized Bakry-Émery estimate is of the form stated below, with b 2 < ac. The vector field ξ, which appears in L and generates the geodesic flow in T 1 M, is horizontal; let H̃ be the orthogonal complement (for the Sasaki metric) of Vect(ξ) in H. Then H = Vect(ξ) ⊕ H̃ and the metric g also takes the form where V ξ := ⟨V, u⟩ u and Ṽ := V − ⟨V, u⟩ u. The coefficients a, b, c, d are positive and satisfy b 2 < ac. It is worth noting that the metric g, though equivalent to the Sasaki metric on T 1 M, is associated to a non-totally geodesic Riemannian foliation on T 1 M, as studied in Section 2. We then deduce the following main theorem of the paper.

Theorem. Let L = (σ 2 /2) ∆ v + κ ξ be the generator of the velocity spherical Brownian motion on T 1 M. Let us assume that the Riemannian curvature tensor of M is bounded and that M is complete. Let us moreover assume that the normalized Sasaki-Riemannian measure µ on T 1 M satisfies a Poincaré inequality. Then, there exist C 1 , C 2 > 0 such that for every t ≥ 0 and f ∈ H 1 (µ) the stated decay estimate holds, where ∇ is the Riemannian gradient for the standard Sasaki metric on T 1 M. Moreover, the convergence also holds in L 2 (µ), i.e. there exists C 3 > 0 such that for every t ≥ 0 and f ∈ L 2 (µ) the corresponding L 2 decay holds.

To find the metric g which gives Bakry-Émery estimates, one computes explicitly T 2 (f) := ½ (L g(∇f, ∇f) − 2 g(∇f, ∇Lf)), where ∇f is the gradient relative to the Sasaki metric on T 1 M. At (x, u) ∈ T 1 M, ∇f decomposes as a sum whose components ∇ v f, ∇hf and ∇ ξ f are in T x M but depend on (x, u). Then, by definition of the metric g, one has a corresponding decomposition of T 2 . Each Γ 2 is explicitly computed (see Lemma 3.1), involving terms of the Hessian of f and the Riemannian curvature tensor of M. Using basic tools such as Young's inequality, one shows that it is possible to find explicit constants a, b, c, d > 0 with b 2 < ac such that the generalized Bakry-Émery estimate is satisfied. Finally, to get the convergence in the L 2 norm we prove an interesting regularization result (see Lemma 3.3 below) by adapting Hérau's method (see [19]) to our case. Once again, the computations are more tedious due to the particular geometry of the Lie brackets, and our regularization result is certainly not optimal.

2.1. Generalized Γ-calculus for Kolmogorov type operators. Let M be a smooth, connected manifold with dimension n + m. We assume that M is equipped with a Riemannian foliation with m-dimensional leaves. We denote by ∆ V the vertical Laplacian of the foliation, ∇ V the vertical gradient and ∇ H the horizontal gradient.

Definition 2.1. We call Kolmogorov type operator a hypoelliptic diffusion operator L on M that can be written as L = ∆ V + Y, where Y is a smooth vector field on M.
The geometry of a Riemannian foliation can locally be described in local orthonormal frames. Let {X 1 , · · · , X n , Z 1 , · · · , Z m } be a local orthonormal frame of smooth vector fields; it is adapted if the X i 's are horizontal and the Z l 's vertical, that is, tangent to the leaves. Since the leaves are integral sub-manifolds, one can write the structure constants as follows, which defines the frame (2). We will always stick to the convention that the letter Z is reserved for vertical fields and the letter X for horizontal fields. The Greek indices will be used for summations over vertical directions and the Latin indices for summations over horizontal directions. We observe that the Riemannian foliation is bundle-like (see [26] page 56) if and only if ω k βi = −ω i βk , and moreover totally geodesic (see [26] page 58) if and only if a further relation among the structure constants holds. In such a frame the vertical Laplacian is expressed in terms of the Z α 's. In the sequel of the section, we consider a Kolmogorov type operator as in Definition 2.1. In this general framework, to study L, it will be more convenient not to work with the Levi-Civita connection of the Riemannian metric, but with a metric connection for which the horizontal and vertical bundles are parallel. Such a connection was introduced by Hladky in [21]. The Hladky connection of the foliation is a metric connection ∇ with torsion tensor T, characterized in terms of the Levi-Civita connection D of the Riemannian metric, where the subscript H (resp. V) denotes the projection on H (resp. V). In the local frame (2), one has explicit formulas for ∇ (see Example 2.16 in [21]). Associated to L, we consider Bakry's Γ 2 operator, which is defined for f, g ∈ C ∞ 0 (M) in the usual way. Our first result is a Bochner type identity for L. In order to state it, we introduce some tensors associated to the connection ∇. If f is a smooth function, the vertical Hessian of f will be denoted by ∇ 2 V f and is defined on vertical vectors. The Ricci curvature of the connection ∇ will be denoted by Ric and, as usual, Ric(U, V) is defined as the trace of the endomorphism W → R(W, U)V, where R is the Riemann curvature tensor of ∇. ‖∇ 2 V f‖ denotes the Hilbert-Schmidt norm of the vertical Hessian.

Proof. We split Γ 2 into two parts. Let us first observe that, from the usual Bochner formula in Riemannian geometry, we obtain the first part by introducing a local vertical orthonormal frame Z 1 , · · · , Z m . In this frame, since the covariant derivative of vertical fields is vertical and the connection ∇ is metric, we obtain the second part. The proof is then completed by putting the two pieces together.

We now define Γ H 2 (f, g) for f, g ∈ C ∞ 0 (M). As before, as a shorthand notation, we will denote Γ H 2 (f) := Γ H 2 (f, f). Before we proceed to the Bochner identity for Γ H 2 , let us introduce the relevant tensors. We define the following tensors for f ∈ C ∞ (M), U ∈ Γ ∞ (M), in the frame (2), where ⟨∇ 2 V f, Θ(∇ H f)⟩ denotes the Hilbert-Schmidt inner product.

Proof. We shall prove this identity at the center of the local frame (2). It is easy to see, working with normal coordinates on the leaves, that we can assume that at the center of the frame ω γ αβ = 0.
Using the chain rule, and repeating the argument of the proof of Proposition 1, we compute the relevant quantities at the center of the frame. Expanding the previous expression by using the structure constants and then completing the squares yields the stated identity; it is then a direct, but tedious, exercise to check the claimed formula. We observe that the computations simplify considerably if the foliation is bundle like and totally geodesic (closely related computations in that case have already been done in [5], Theorem 7.2).

Corollary 1. Assume that the foliation is bundle like and totally geodesic; then the identity simplifies accordingly. When the foliation is bundle like and totally geodesic, we can assume that the foliation comes from a totally geodesic submersion. The horizontal X i can then be chosen to be basic vector fields. As we have seen above, the extra terms vanish and the conclusion easily follows.

2.2. Gradient bounds for the semigroup. With the computations of Γ 2 and Γ H 2 in hand, we can now argue as in [4,5] to deduce criteria for hypocoercivity, that is, a quantitative convergence to equilibrium for the semigroup generated by L. For the sake of completeness, we recall the main results. An analytic difficulty that arises when studying Kolmogorov type operators is that, in general, they are not symmetric with respect to any measure. As a consequence, we cannot use functional analysis and the spectral theory of self-adjoint operators to define the semigroup generated by L. A typical assumption to ensure that L generates a well-behaved semigroup is the existence of a nice Lyapunov function. So, in the sequel, we will assume that there exists a function W such that W ≥ 1, ‖∇W‖ ≤ CW, LW ≤ CW for some constant C > 0, and {W ≤ m} is compact for every m. The assumption about the existence of this function W such that LW ≤ CW implies that L is the generator of a Markov semigroup (P t ) t≥0 that uniquely solves the heat equation in L ∞ . If f ∈ C ∞ (M), we use the standard notation, where ∇ is the whole Riemannian gradient, that is, ∇ = ∇ H + ∇ V . From Propositions 1 and 2, we deduce the following.

Proposition 3. Let us assume that, for some K ∈ R, the corresponding lower bound holds; then, for every bounded and Lipschitz function f ∈ C ∞ (M), we have a gradient bound for P t f for t ≥ 0.

Proof. This is a consequence of [28] (see also [5]).

2.3. Convergence to equilibrium in H 1 . As in Theorem 7.6 in [5], which only concerned the bundle like and totally geodesic case, we therefore obtain:

Corollary 2. Assume that there exist two constants ρ 1 ≥ 0, ρ 2 > 0 such that for every f ∈ C ∞ 0 (M) the corresponding estimate holds. Assume moreover that the operator L admits an invariant probability measure µ that satisfies the Poincaré inequality. Then the semigroup converges exponentially fast to equilibrium in H 1 (µ).

3. Hypocoercivity of the velocity spherical Brownian motion.

Introduction. The so-called velocity spherical Brownian motion on the unit tangent bundle T 1 M of a Riemannian manifold M is introduced in [1] (where it is called the kinetic Brownian motion). It is a two-parameter family of hypoelliptic diffusions on T 1 M which is a perturbation of the geodesic flow by a vertical Laplacian on the fibers. The velocity spherical Brownian motion is a Kolmogorov type process which is similar to the Langevin process, the difference being that the velocities have Brownian dynamics on the compact fibers (the tangent sphere), whereas for the Langevin process the velocities have Ornstein-Uhlenbeck dynamics on the (non-compact) tangent space. It is shown in [1] that when the parameters go to infinity, the process projected on the base Riemannian manifold converges in law to a Brownian motion.
In this section we obtain, under the condition that the Riemann curvature tensor of the base manifold is bounded, that the velocity spherical Brownian motion converges in H 1 and in L 2 , when t goes to infinity, to the equilibrium measure (the renormalized Riemannian volume on T 1 (M)) at some exponential rate. This rate can be expressed explicitly in terms of the parameters. It is expected that the optimal rate converges, when the parameters go to infinity, to the spectral gap of the base manifold, but unfortunately the rate obtained converges to 0 and we do not reach the base manifold spectral gap. The idea to obtain this exponential rate of convergence is to find a positive-definite quadratic tensor T on T 1 M such that there is a generalized Bakry-Émery estimate T 2 ≥ ρT − KΓ, where ρ is a strictly positive constant. We obtain this local inequality under the condition that the Riemann tensor of M is bounded. As explained in the previous section (see also [4], [6]), this inequality, together with a Poincaré inequality on T 1 M, provides the exponential convergence to equilibrium in the H 1 norm. This scheme is close to Talay's article [25] and Villani's book [27]. Nevertheless, in our case the bracket condition of Villani's theorem is not fulfilled and one cannot directly apply his result. To obtain convergence in the L 2 norm we prove some regularization estimates using methods inspired by Hérau's work [19]. Again, compared to the case of the kinetic Fokker-Planck equation, our computations are more complicated because of the much more intricate Lie algebra structure of the generator. Define also the horizontal vector fields H i for i = 0, . . . , n − 1. The generator L := (σ 2 /2) ∆ v + κξ of the velocity spherical Brownian motion on T 1 M is the projection of L̃ in the sense that (Lf) • π = L̃(f • π). Here are the fundamental relations involving Lie brackets of the vertical and horizontal vector fields, ensuring that L satisfies Hörmander's condition and is therefore hypoelliptic; they involve the Riemannian curvature tensor R on M and the metric ⟨·, ·⟩ on M.

3.3. Γ-calculus for the velocity spherical Brownian motion. The tangent space of T 1 M splits into a direct sum of a vertical part (generated by π * V i , i = 1, . . . , n − 1) and a horizontal part (generated by π * H i , i = 0, . . . , n − 1). The horizontal part splits into the direction ξ = π * H 0 and its orthogonal complement (for the Sasaki metric), generated by π * H i for i = 1, . . . , n − 1, which will be denoted H̃. So T (T 1 M) = V ⊕ H̃ ⊕ Vect(ξ). We introduce for each of those subspaces the associated Gamma operator (Γ v , Γh and Γ ξ ) and the mixed Gamma Γ v,h , defined for smooth functions f, g : T 1 M → R. Define also, for any Γ, the corresponding Γ 2 by the usual formula, and denote Γ(f, f) by Γ(f). The vertical and horizontal gradients of a function f : T 1 M → R are given by the corresponding sums of squares of the V i (f • π) and H i (f • π), and we consider also the analogous quantities for H̃ and ξ; thus ∇ h f 2 = ∇hf 2 + ∇ ξ f 2 . The underlying metric we use on T 1 M is the Sasaki metric, which makes the decomposition T (T 1 M) = V ⊕ H̃ ⊕ Vect(ξ) orthogonal. Introduce also, for i, j = 1, . . . , n − 1, the Hessian terms which appear in the computations of the iterated Gamma operators. The following lemma gives the explicit expressions of the iterated Γ 2 for each Γ considered (vertical, horizontal, mixed); the displayed formulas involve Hess v f, Hess v,h f, ∇hf and Hess v,ξ (f).

Proof.
Instead of giving a proof of the computations for each Γ 2 considered, we first check, in all cases, the terms involving the vertical Laplacian (i.e., the terms carrying the factor σ 2 /2) and we then check, again in all cases, the terms involving the vector field ξ (i.e., the terms carrying the factor κ). Remark that if L is the square of some vector field, say L(f) := X 2 (f), and Γ is of the corresponding form, then in particular if Y = Z one gets a simplified expression. So, in the computation of the Γ 2 , the vertical contribution is obtained by summing over i, j = 1, . . . , n − 1; summing over i, j one gets Hess v,ξ 2 + (n − 1) ∇ ξ f 2 + 2 Tr(Hess v,h ) ξ(f) • π. Moreover, if L consists of a vector field, L(f) := X(f), and Γ is of the corresponding form, then, recalling the bracket relation between H 0 and the V i involving R(e j , e 0 )e 0 , e i , one gets immediately the Γ 2 for the ξ part of L. Finally, fix b such that Ch = C ξ = 1, and one obtains the stated constants. To sum up, one can describe all the obtained parameters a, b, c and d by means of two parameters ε ∈ ]0, 1[ and ε' > 0. Finally, the (quite complicated) expression of C v can also be given in terms of ε and ε'. We just give here the equivalent when κ = σ and both go to infinity, where K ε,ε' is the following explicit constant. Define now the tensor T and its iterated T 2 with the coefficients a, b, c and d being those of the previous Proposition. Note that the carré du champ operator Γ associated to L is as above; (5) of the previous Proposition then gives the following generalized Bakry-Émery estimate for L, where ρ = 1/max(b + c, d).

Remark 1. Observe that when κ = σ and κ, σ both go to infinity, we have the stated asymptotics, where K ε,ε' is defined at the end of Proposition 4.

3.5. Convergence to equilibrium in H 1 . Denote by µ the volume on T 1 M for the Sasaki metric. For a bounded f : T 1 M → R one has the corresponding integral expression. One supposes that the volume Vol M is finite, and thus µ is finite too. We denote by P t the semigroup generated by L, that is, P t f(x, v) = E (x,v) [f(x t , v t )], where (x t , v t ) t≥0 is the velocity spherical Brownian motion, i.e. the process generated by L. Since the manifold is assumed to be complete, the lifetime of this process is infinite. Using the generalized Bakry-Émery estimate (6) one deduces the following exponential convergence in the H 1 norm (as in [4], [5]).

Proposition 5. Suppose that µ satisfies a Poincaré inequality (for the Sasaki metric); then one has the following exponential convergence to equilibrium in the H 1 (µ, T) norm, where λ̃ can be explicitly written in terms of λ, K and ρ.

Proof. We note that the law of the velocity spherical Brownian motion is such that if f is smooth and compactly supported then so is P t f. For that reason the following computations are well justified. Define Λ s := P t−s (T (P s f)) + K P t−s ((P s f) 2 ), which is compactly supported when f is. Then, using inequality (6), one controls the derivative of Λ s in s. By integrating with respect to µ, one obtains the desired decay for some η > 0; the Poincaré inequality can be written, for f such that ∫ f dµ = 0, in a form involving K + λ.

Remark 2. When κ = σ and σ goes to infinity, one obtains (recall that ρ := (max(b + c, d)) −1 and λ̃ := λ × min(a(1 − √ε), c(1 − √ε), d)) that ρ is of order σ and λ̃ is of order 1/σ 2 . So the rate λ̃ obtained goes to zero when σ goes to infinity.

Remark 3. One can provide a sufficient condition on M only so that the Poincaré inequality on T 1 M is satisfied. Indeed, assume that the Ricci curvature of M (for the Levi-Civita connection) is a Codazzi tensor, that is, the Codazzi identity holds for any smooth vector fields X, Y, Z on M. Then, the Bott connection associated to the totally geodesic Riemannian submersion T 1 M → M is of Yang-Mills type (see Example 4.4 in [5]).
Moreover, the boundedness of the Riemann curvature tensor implies the boundedness of the torsion tensor of the Bott connection. As a consequence, if the integrated Ricci curvature of M is bounded from below by a positive constant, one deduces that the horizontal Laplacian on T 1 M satisfies an integrated generalized curvature-dimension inequality in the sense of [7]. As such, the horizontal Laplacian has a spectral gap (see [5]). Since the horizontal Dirichlet form is dominated by the total Dirichlet form on T 1 M, one concludes that the Poincaré inequality on T 1 M is satisfied.

3.6. Convergence to equilibrium in L 2 . In order to get the convergence in L 2 we need a regularization result which controls the gradient of P t f by the L 2 norm of f. To get this inverse Poincaré type inequality we adapt Hérau's method [19] and now look for coefficients a, b, c and d as in Proposition 4, but depending on time t, such that a positive expression involving the horizontal and vertical gradients is decreasing in t.

3.6.1. Σ-calculus. As observed by Gadat and Miclo in [14], even if in the literature about hypocoercivity the authors mostly consider brackets of first-order operators, it seems that in some models the key point is given by brackets between second-order operators. In our case of the velocity spherical Brownian motion, the third-order Lie bracket [V i , [V i , H 0 ]], necessary to reach the direction H 0 (or ξ), appears naturally in the bracket [∆ v , ξ] that we already met in the computation of Γ ξ 2 (it is exactly what gives the term ∇ ξ f 2 in Γ ξ 2 ). However, it seems not to be enough to get the regularization result that we are looking for. Instead, we remark that the bracket [∆ v , ξ] also appears when we consider the "Γ 2 " of the square of the vertical Laplacian. This is quite close to a Γ 3 computation and involves third-order derivatives.

3.6.3. Convergence in L 2 . This is a consequence of the convergence in H 1 and the previous regularization lemma.

Proposition 6. Under the hypotheses of Proposition 5, there is C > 1 such that for every function f ∈ L 2 (µ) with ∫ f dµ = 0 one has, for all t > 0, ‖P t f‖ 2 H 1 (µ,T) ≤ C e −2λt ‖f‖ 2 L 2 (µ) .

Proof. As in the proof of Proposition 5 one obtains ‖P t f‖ 2 H 1 ≤ e −2λ(t−t 0 ) ‖P t 0 f‖ 2 H 1 , and by Lemma 3.3 one can find a constant C̃ > 0 such that the regularization estimate holds. Moreover, since t → ∫ (P t f) 2 dµ is decreasing, we also have ∫ (P t 0 f) 2 dµ ≤ ∫ f 2 dµ.
Low control beliefs in relation to school dropout and poor health: findings from the SIODO case–control study Background There is cumulating evidence that health is compromised through adverse socioeconomic conditions negatively affecting how people think, feel, and behave. Low control beliefs might be a key mechanism. The reversed possibility that low control beliefs might set people on a pathway towards adverse socioeconomic and health-related outcomes is much less examined. Methods A case–control design was used, consisting of 330 cases who dropped out of school in the 2010–2011 school year and 330 controls who still attended school at the end of that year. The respondents, aged between 18 and 23, came from Eindhoven and surrounding areas in the south-east of The Netherlands. A questionnaire asked for current health status, recalled socioeconomic and social background, and recalled control beliefs (mastery and general self-efficacy). Logistic regression analyses were used. Results Recalls of low mastery and low self-efficacy were strongly related to both dropout and less than good health. Low socioeconomic background was also associated to odds of dropout, but did not confound or moderate the associations of low control beliefs with dropout and health. Odds ratios of dropout and less than good health indicated at least twice the odds of a poor outcome with recalls of low control beliefs. Conclusions Independent of the socioeconomic background, low control beliefs are related to heightened odds of both poor health and school dropout. Individual differences in control beliefs might thus be as fundamental as socioeconomic conditions in generating life-course socioeconomic and health-related pathways. Although the findings should first be cross-validated in prospective studies, public health professionals working with youth might already start considering early interventions in youth with all too fatalistic and powerless mind-sets. Background Low control beliefs have often been studied in the context of the higher risks of disease and premature mortality in lower socioeconomic status groups (e.g. [1][2][3][4]). Control beliefs can be defined as a person's belief that he or she is able to actually perform a (desired) action or behaviour (self-efficacy) and the belief that his or her actions matter for outcomes and events (mastery) [5]. Low control beliefs are related to poor health outcomes, through either negatively affecting health behaviours or having direct repercussions for physiological functioning. Some have labelled low control beliefs as powerlessness or "socialised fatalism" [6]. The latter emphasises the embedding in the socioeconomic environment [4,7]. Low income hampers the number of available options; control over what to purchase is thus restricted. Similarly, long-term low control and autonomy at work, which is common in lower socioeconomic status groups [8,9], might hinder the development of self-directed behaviours and planning. Low control beliefs have thus been found partially rooted in socioeconomic conditions. Much less examined is the reversed possibility of low control beliefs negatively affecting socioeconomic attainment (e.g. school dropout) and poor health [2]. High control beliefs might promote social mobility, as people with such beliefs are confident both about their ability to perform well at school and about the future benefits of a higher educational career [10,11]. 
The question now is whether control beliefs influence socioeconomic attainment processes and later health and whether this influence is independent of social differences, as indicated by prior socioeconomic and social conditions. An independent influence of control beliefs would suggest that individual differences might be as fundamental as social differences for life-course socioeconomic and health-related pathways. This possibility of selection via individual personality characteristics has been insufficiently examined in the field of socioeconomic differences in health [12]. Using Dutch case-control data on 330 school dropouts and 330 controls still attending school, we set out to examine whether adolescent low control beliefs affect school dropout and poor health in young adulthood. The embedding in the socioeconomic background during upbringing is studied by examining the association of parental socioeconomic and social characteristics during upbringing with control beliefs. Simultaneously, it is examined whether low control beliefs relate to school dropout and poor health, independent of the socioeconomic and social background. The purpose of the study is to help optimize interventions aimed at tackling socioeconomic differences in health and to give public health professionals working with youth more personalised tools aimed at the timely prevention of school and health-related difficulties.

Study population

This study is part of the Dutch SIODO study, a sequential mixed-methods study focusing on the early identification of risk factors on the pathway to school dropout [13]. The current paper uses the quantitative case-control data from SIODO. In November 2011, the municipal compulsory education department of the city of Eindhoven selected all young adults aged 18-23 years who had not yet met the Dutch minimum educational requirement, the so-called "basic qualification", at the start of the 2010-2011 school year. This minimum qualification (for making a good entry into the labour market) is equivalent to higher general secondary education, pre-university education or intermediate vocational education, Level 2. At this age, they normally should have obtained this required qualification. A proportion of these young adults remained in school during the 2010-11 school year to obtain this qualification (the controls), while others dropped out of school during that year without a qualification (the cases). The fact that cases and controls were so similar at the start of the 2010-11 school year was expected to limit selection bias. On average 1.5 years after the start of the 2010-2011 school year, we sent a self-administered questionnaire with an informed consent form to the eligible young adults. The questionnaire contained questions on current health status, recalled control beliefs, recalled socioeconomic background, and potential confounders. The power calculation for a retrospective study with a dichotomous outcome variable indicated that 290 cases and 290 controls would yield 80% power to detect an odds ratio of 1.75 at an α-level of 5%, for an exposure of 0.2 and a ratio of cases to controls of 1 [14,15]. We had to send 8,630 questionnaires to recruit 1,049 possible participants, among whom 330 were cases. The 330 controls were randomly selected from the remaining group of participating controls. Approval for conducting this study was granted by the Medical Ethics Committee of Maastricht University (METC 11-4-099, decision 22-08-2011).
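For readers who want to reproduce the kind of power calculation described above, the sketch below uses the classical two-proportion approximation for a 1:1 unmatched case-control design. The exact figure depends on the formula and software used in [14,15], so the result is only expected to be close to the reported 290 per group; the function name and structure are illustrative assumptions.

```python
from math import ceil, sqrt
from scipy.stats import norm

def n_per_group(p0, odds_ratio, alpha=0.05, power=0.80):
    """Approximate sample size per group for a 1:1 case-control study (two-proportion formula)."""
    odds1 = odds_ratio * p0 / (1 - p0)
    p1 = odds1 / (1 + odds1)          # implied exposure prevalence among cases
    p_bar = (p0 + p1) / 2
    z_a, z_b = norm.ppf(1 - alpha / 2), norm.ppf(power)
    numerator = (z_a * sqrt(2 * p_bar * (1 - p_bar))
                 + z_b * sqrt(p0 * (1 - p0) + p1 * (1 - p1))) ** 2
    return ceil(numerator / (p1 - p0) ** 2)

print(n_per_group(p0=0.20, odds_ratio=1.75))   # close to the ~290 per group reported in the text
```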
More detail on the SIODO study can be found elsewhere [13].

Dropout status and current health status

Dropout status in the school year 2010-2011 was determined by the compulsory education department, as defined in the previous paragraph. The subsequent questionnaire asked for the current health status. This was measured by asking how healthy the young adults currently considered themselves (varying from 1: not healthy at all to 10: very healthy). This variable was dichotomised by assigning respondents scoring lower than 7 to the less than good health category (n = 81 (25%) in the case group and n = 32 (10%) in the control group).

Recalled low control beliefs

For both mastery and general self-efficacy, the respondents were asked to think of the time period when they were 16 (prior to the dropout). Mastery was measured by computing the mean of the seven items of the Pearlin and Schooler scale (0: low and 5: high mastery) (Cronbach's α = 0.84) [16]. The introduction to this questionnaire asked the respondents to report what was most applicable to them when they were 16. Two example items are: "I have little control over things that happen to me" (to be reverse coded) and "I can do just about anything I really set my mind to do". General self-efficacy was measured by computing the mean of the 16 items of Sherer's general self-efficacy scale (0: low and 5: high self-efficacy) (Cronbach's α = 0.90) [17]. These questions had the same introduction as mastery (asking respondents to recall their psychological profile at the age of 16). Two example items are: "If something looks too complicated, I will not even bother to try it" (to be reverse coded) and "When I make plans, I am certain I can make them work". Both variables were categorised into thirds using tertiles.

Socioeconomic and social background

Socioeconomic background was based on the mean of the standardised scores for the educational level of the father and of the mother and for four questions on material and social deprivation. Education had the following ordinal categories: no primary education, primary education only, lower vocational education, intermediate general secondary education, intermediate vocational education, higher general secondary education, higher vocational education, and university education. Separately for the primary and secondary school periods, the material deprivation items asked how often there was too little money for food or for replacing clothes or shoes (never, sometimes, regularly, always). Separately for the primary and secondary school periods, the social deprivation items asked how often there was too little money for joining a (sports) club or going on a school trip (never, sometimes, regularly, always). The resulting composite variable for socioeconomic background was categorised into thirds using tertiles. Ethnic background, based on both the respondents' and their parents' country of birth, was dichotomous, as respondents with a western migration background were too small in number (0: autochthonous/western background and 1: non-western background). Family composition during primary school was defined as either living with both parents or living with one parent only.

Other confounders

Both sex and age at the start of the school year were used as covariates.

Statistical analyses

First, the associations of the socioeconomic, ethnic, and family background with dropout were analysed using cross-tabulations and related chi-squared tests.
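A minimal sketch of how the scale scores and tertile categories described above can be constructed is given below. The column names, the reverse-coded item, and the data are hypothetical stand-ins; the actual SIODO item-level data are not reproduced here.

```python
import pandas as pd

def scale_mean(df, items, reverse_items=(), max_score=5):
    """Mean score over scale items scored 0-5, reverse-coding the indicated items."""
    data = df[list(items)].copy()
    for col in reverse_items:
        data[col] = max_score - data[col]
    return data.mean(axis=1)

# Hypothetical responses to the seven mastery items (the real scale has 7 mastery and 16 self-efficacy items).
df = pd.DataFrame({f"mastery_{i}": [1, 4, 3, 5] for i in range(1, 8)})
df["mastery"] = scale_mean(df, [f"mastery_{i}" for i in range(1, 8)], reverse_items=["mastery_1"])
df["mastery_tertile"] = pd.qcut(df["mastery"], q=3, labels=["low", "middle", "high"])
print(df[["mastery", "mastery_tertile"]])
```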
The associations of the background variables with less than good health and low control beliefs were examined in the case and control group separately. Second, the association of low control beliefs with dropout was also examined by chi-squared tests. Logistic regression analyses were used to examine the same association, controlling for age and sex, and additionally for the socioeconomic background, ethnic background, and family composition. To examine whether low control beliefs were equally predictive of school dropout in different socioeconomic backgrounds, multiplicative interactions between control beliefs and socioeconomic background were tested separately. Third, the association of low control beliefs with current health was examined by chi-squared tests in the case and control group separately. Logistic regression analyses were used to control for age and sex, and additionally for the socioeconomic background, ethnic background, and family composition. Multiplicative interactions between socioeconomic background and low control beliefs were also tested. Sensitivity analyses were done to study the robustness of the findings when using the continuous versions of all variables (and linear regression for current health) and when additionally controlling for the educational level of the first class in secondary school (having four ordinal categories).

Results

Table 1 shows that the cases, compared with controls, more often came from lower socioeconomic backgrounds (38.2% vs. 28.8%), non-western backgrounds (14.6 vs. 5.2), and one-parent families (21.6 vs. 9.7). In the case group, low socioeconomic background was related to less than good health (30.4 vs. 17.3) and low mastery recalls (54.8 vs. 32.7) (Table 2). In the control group, adolescents from one-parent families much more often reported low mastery than adolescents from two-parent families (53.1 vs. 22.5). All other associations were not statistically significant. Recalls of low mastery and self-efficacy were strongly related to the odds of school dropout (Table 3). For example, adolescents with recalls of low mastery had 144% higher odds of dropout (odds ratio = 2.44) compared with adolescents with high mastery recalls (95% CI: 1.66-3.59). Controlling for the socioeconomic, ethnic, and family background hardly affected these odds ratios. In both the case and control group, the associations of recalled low mastery and low self-efficacy with less than good current health were also substantial (Table 4). After control for the socioeconomic, ethnic, and family composition background, odds ratios indicated about four times higher odds of less than good health for adolescents recalling low control beliefs (e.g. odds ratio = 4.10 (95% CI: 1.48-11.4) for mastery in the control group). The adjustment for socioeconomic, ethnic, and family composition background did not strongly affect the odds ratios (comparing models 1 and 2). Confidence intervals were wide. Results of analyses with the original continuous variables, including linear regressions with the health status outcome (Table 5), showed a similar pattern of findings. There were no significant interactions, indicating that the recalled low control belief measures are similarly related to poor current health and dropout in all three socioeconomic groups. A similar absence of interactions was detected for ethnic background and family composition.
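The regression step described above could be reproduced along the following lines with statsmodels; the variable names and the synthetic data frame are hypothetical stand-ins for the (non-public) SIODO data, and the exact model specification used by the authors may differ.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical stand-in data; the real SIODO variables are described in the Methods above.
rng = np.random.default_rng(0)
n = 660
df = pd.DataFrame({
    "dropout": rng.integers(0, 2, n),
    "mastery_tertile": rng.choice(["low", "middle", "high"], n),
    "ses_tertile": rng.choice(["low", "middle", "high"], n),
    "sex": rng.choice(["girl", "boy"], n),
    "nonwestern": rng.integers(0, 2, n),
    "one_parent": rng.integers(0, 2, n),
    "age": rng.integers(18, 24, n),
})

main = smf.logit(
    "dropout ~ C(mastery_tertile) + age + C(sex) + C(ses_tertile) + C(nonwestern) + C(one_parent)",
    data=df,
).fit()
print(np.exp(main.params))        # odds ratios
print(np.exp(main.conf_int()))    # 95% confidence intervals

# Multiplicative interaction between control beliefs and socioeconomic background.
interaction = smf.logit(
    "dropout ~ C(mastery_tertile) * C(ses_tertile) + age + C(sex)",
    data=df,
).fit()
```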
Additional control for the first class level of secondary school that pupils attended did not attenuate the odds ratios for low mastery and low self-efficacy, primarily because of the absence of an association between school level and both mastery and self-efficacy.

Discussion

In this group of 18- to 23-year-old Dutch men and women, we found that recalls of low mastery and low self-efficacy were strongly associated with school dropout and less than good health. A lower socioeconomic background, as indicated by measures of recalled relative deprivation and parental education, was also related to school dropout, as were a non-western parental background and coming from a one-parent family. The socioeconomic and social background characteristics neither confounded nor moderated the association of low control beliefs with school dropout and less than good health, which underlines the independent association of low control beliefs with socioeconomic and health-related life-course pathways. Some limitations should be discussed. The first two limitations regard the case-control design and the cross-sectional questionnaire. First, we cannot fully exclude the possibility of dropout having affected the low control beliefs (as reported in the later questionnaire). Additionally, recall bias could have occurred if the cases (compared with the controls) remembered the past better or even reported more prior problems than actually occurred. To avoid these possibilities as much as possible, the questions on low control beliefs were introduced by explicitly asking the respondents to think of the period when they were 16, which was prior to any dropout. Second, the possibility of poor current health affecting the reports of the measures of low control beliefs in the same questionnaire was hopefully also addressed by asking for control beliefs at the age of 16 (and for current health). Dropout affecting the current health status and biasing the estimates of the association between low control beliefs and health was addressed by separate analyses of health status in the case and control group. However, to truly avoid both the first and second limitation, dedicated longitudinal studies are needed that allow a more in-depth and valid unravelling of the causal mechanisms related to socioeconomic attainment and health. Third, non-response was substantial, as we had to send 8,630 questionnaires to recruit 1,049 possible participants (12 percent). The low response rate is related to the sample, and particularly the cases, being a hard-to-reach group. Non-participation was higher in cases (compared with controls), in boys, and in those living in areas with cheaper houses (Table 6). This may have resulted in an underestimation of the influence of socioeconomic conditions. In the absence of information on differential non-participation according to levels of control beliefs, we do not know whether and how the association of low control beliefs with dropout and health was affected by non-participation. Finally, both cases and controls normally should already have obtained their "basic qualification". This increased the similarity in the target population and thus decreased the possibility of selection bias, but might simultaneously have led to underestimated associations of determinants with school dropout. Future studies should examine the associations with a longitudinal design including all possible cases and controls.
Many studies have conceived of low control beliefs as mediators, rather than as confounders, in the association between low socioeconomic status and poor health [2,10,11]. Our findings suggest the importance of thinking beyond "mediation" and of looking at earlier-life individual differences (also within socioeconomic groups) as partial driving forces for social mobility, future health, and the development of socioeconomic differences in health in young adulthood. It thus cannot be excluded that control beliefs are as fundamental as socioeconomic grouping when it comes to affecting life-course pathways for people (e.g. see footnote 3 in [18]). Simultaneously, in order to avoid psychologising social problems, more research is needed to find out where the low control beliefs come from.

(Note to Table 5: ordinary least squares regression analyses using continuous scores of current health status (ranging from 1: poor health to 10: excellent health), mastery (ranging from 1: low mastery to 5: high mastery), self-efficacy (ranging from 1: low self-efficacy to 5: high self-efficacy), and socioeconomic background (continuous score, being the mean of standardised education and deprivation variables).)

Environmental factors, possibly other than socioeconomic, might interact with genetic factors in creating these individual differences [12]. Further research could, for example, study how early-life parental support, secure attachment and bonding might be important in the development of control beliefs in children and adolescents [19,20]. Further research should also examine how control beliefs could be more strongly embedded in a personalised approach of public health professionals working with youth (e.g. [21,22]). Questions in need of an answer are, for example: how to detect low control beliefs, at what age monitoring should start, whether gender and childhood diseases matter, how low control beliefs relate to self-esteem, insecurity and emotional instability (particularly during puberty), and which interventions are available. As reported above, there is, however, first a need for cross-validation of the findings in prospective designs and for more stringent tests of the causal direction of the relevant mechanisms.

Conclusion

Independent of the socioeconomic background, low control beliefs are related to heightened odds of both poor health and school dropout. Individual differences in control beliefs might thus be as fundamental as socioeconomic conditions in generating life-course socioeconomic and health-related pathways. Although the findings should first be cross-validated in prospective studies, public health professionals working with youth might already start considering early interventions in youth with all too fatalistic and powerless mind-sets.
Effects of COVID-19 on the livelihoods of rural women in Ethiopia

The COVID-19 pandemic has had an impact on people's lives and economic activities. Women are expected to bear the brunt of this impact because they are over-represented in affected sectors and on the front lines of the pandemic response. However, no empirical evidence exists on the effect of COVID-19 on women's economic activities in the Ethiopian context. This study investigated the effects of COVID-19 on the economic activities of rural women in Ethiopia. A multistage sampling procedure was employed to randomly draw 263 rural women as study participants. Data were collected through interview schedules and key informant interviews, and were analyzed qualitatively and quantitatively. A binary logistic regression model was used to examine the factors determining the effect of COVID-19 on women's economic activities. According to the findings, the most affected economic activities were remittances (94.28%), small business trade (94.06%), livestock and livestock product trading (91.30%), daily labor wages (84.82%), handcraft (72.73%), and crop production (61.32%). The logit regression results show that irrigation use reduced the impact of the pandemic, whereas relying on remittances, market distance, and being a female-headed household exacerbated the impact of the pandemic on the economic activities of rural women. The pandemic had a significant impact on rural women's economic activities. Therefore, governmental and nongovernmental organizations should support rural women's income-generating activities by providing revolving funds with training. Using remittances for income-generating activities would also improve the income of rural women.

Introduction

The COVID-19 pandemic has significantly impacted lives and economies worldwide (Kassegn and Endris 2021; Rasul 2021). The pandemic has had a negative impact on the livelihoods and economic activities of rural households (Mahmud and Riley 2021; Nechifor et al. 2021). A study in China revealed that COVID-19 negatively impacted China's rural economy, affecting crop production, livestock, income, employment, economic development, and agricultural product sales (Pan et al. 2020). Ragasa et al. (2021) reported that the pandemic caused income loss for 51% of respondents in Myanmar, and it has also disrupted agriculture and food supply chains in sub-Saharan Africa (Nchanji et al. 2021). The economic crisis of COVID-19 has affected all households, businesses, and markets at the same time (Casarico and Lattanzio 2022; Fairlie 2020; Mottaleb et al. 2020). Financial flows such as remittances were disrupted (Bisong et al. 2020; Sharma 2020). Informal and unskilled workers are vulnerable to losing their jobs because of COVID-19 (Casarico and Lattanzio 2022; Doss et al. 2020; Rasul et al. 2021).

The effects of a crisis are never gender-neutral, and women and men experience differing levels of stress as a result of COVID-19 (Ragasa and Lambrecht 2020; Ragasa et al. 2021). Studies show that women are expected to bear the heaviest impact (Parry and Gordon 2021). Farmers' access to agricultural inputs has been affected by the pandemic as a result of lockdowns and transportation restrictions (Anagah 2020). These impacts were severe for women who lacked reliable and timely agricultural information (Alvi et al. 2021). According to Harris et al.
(2020), female farmers are more vulnerable to the pandemic in terms of both livelihoods and diet. In addition, COVID-19 has had an impact on women in agriculture, in both production and sale (Fairlie 2020; Haqiqi and Bahalou 2021). Women contribute significantly to production activities, despite receiving less recognition (Doss et al. 2020). Lockdowns negatively impacted women's livestock production, including feeding, management, disease control, and trade (Hashem et al. 2020; Nchanji et al. 2021; Rasul et al. 2021). Women bear a disproportionate share of the burden of economic decline, which brings male job loss, increased domestic work, hunger, and violence (Agarwal 2021). As a result, female workers face increased poverty due to lost income opportunities (Sarker 2021). Women, as a marginalized group, suffered more from COVID-19 since they lost both external support and coping mechanisms (Adhikari et al. 2021). Beyond health concerns, COVID-19 has affected job numbers, job quality, and vulnerable groups, thereby affecting labor market outcomes (Béné 2020; Janssens et al. 2021; Kansiime et al. 2021; Nechifor et al. 2021).

Ethiopia is no exception, because the pandemic has adversely affected different groups of people (Hirvonen et al. 2021; Tamru et al. 2020; United Nations (UN) 2020; World Food Programme (WFP) et al. 2020). According to the Economic Commission for Africa (ECA), Ethiopian rural women in agriculture and informal sectors are vulnerable to COVID-19, spending 2.3 times more time on unpaid work (ECA 2020). In Ethiopia, business activity has slowed, production has been cut, the unemployment rate has more than doubled, and the cost of living is mounting (Mohammed 2021; Yonnas and Dubale 2021). The pandemic has further exacerbated socioeconomic inequalities between men and women (Sarker 2021). Shocks can exacerbate or reduce gender gaps, so evidence-based policy responses are needed to mitigate the effects of the pandemic crisis on rural women (Ragasa and Lambrecht 2020).

The evidence indicates that the effect of COVID-19 on women's economic activities is not well understood owing to a lack of detailed household survey data. We also noticed that previous studies lacked methodological rigor in gathering primary data from representative samples. For example, Harris et al. (2020) undertook a rapid telephone survey with 448 farmers; telephone coverage in rural areas of developing countries such as Ethiopia is extremely low, and such designs exclude women who do not have access to telephones. Fairlie et al. (2022) and Haqiqi and Bahalou (2021) conducted their studies using secondary data from the California Department of Tax and Fee Administration and the United States Department of Agriculture's National Agricultural Statistical Service, respectively, to assess the impact of COVID-19 in the United States. In addition, Adhikari et al. (2021) used a series of panel discussions with policy makers and experts, key informant interviews, and an extensive literature review to understand the impacts of the COVID-19 crisis on agriculture and food systems in Nepal. Moreover, Hashem et al. (2020) presented an overview of animal welfare and livestock supply chain sustainability under the COVID-19 outbreak without describing a methodological design. Doss et al.
(2020) drew evidence from past health crises, reports from the COVID-19 pandemic, and the literature on gender and food security to analyze potential gendered effects across production, processing, trading, and consumption. Rasul (2021) analyzed the twin challenges of the COVID-19 pandemic and climate change for agriculture and food security in South Asia using secondary data, with no clear methodological design. Sarker (2021) analyzed the labor market and unpaid work implications of COVID-19 for Bangladeshi women using secondary data and a literature review. Agarwal (2021) analyzed livelihoods during the time of COVID-19 from a gender perspective through a literature survey.

As a result, the purpose of this study is to assess rural women's economic activities and to examine the impact of COVID-19 on those activities. The empirical evidence above serves as the reason for defining the scope, as rural women in developing countries such as Ethiopia are engaged in casual economic activities that are highly vulnerable to shocks and stresses. Consequently, the researchers were motivated to examine the impact of COVID-19 on the economic activities of rural women using firsthand information from a representative sample and a well-designed methodological approach. This study will allow the scientific community to triangulate primary data findings with secondary data findings. Following this, we are mainly interested in responding to the following research questions:
• Do rural women perceive that COVID-19 affected their economic activities?
• Which economic activities were severely affected by COVID-19?
• Which socioeconomic factors determine the effects of COVID-19 on the economic activities of rural women?

Description of study area and sample size

The Oromia special administrative zone and the South Wollo zone are located in the eastern Amhara Region of Ethiopia. The Oromia special administrative zone lies 333 km north of Addis Ababa, between latitude 10° 42′ 59.99″ north and longitude 39° 51′ 59.99″ east. The zone is bordered on the southwest by the North Shewa Zone, on the northwest by South Wollo and the Argobba special woreda, and on the east by the Afar Region. The main agro-ecologies of the zone are lowland and midland, with an annual rainfall range of 600-900 mm. The South Wollo Zone, in turn, lies 404 km north of Addis Ababa, the capital of Ethiopia, at latitude 10.8997° north and longitude 38.9877° east, and has three agro-ecologies: lowland, midland, and highland. The annual rainfall ranges from 750 to 1400 mm, with a long-term mean of 1109 mm. The location map of the study area is presented in Fig. 1.

Households were sampled using the following procedure. First, the South Wollo and Oromia Zones were selected purposively, since they form the research mandate area of Wollo University and are among the areas of Ethiopia severely affected by COVID-19. Second, two districts from South Wollo and one district from the Oromia Zone were selected randomly. Third, a total of six kebeles (two kebeles from each district) were selected randomly. Fourth, a proportionate sampling technique was employed to draw 263 sample rural women from the three sampled districts, regardless of household headship (both female- and male-headed households). The sample size was determined using the following formula, as presented by Yehuala et al. (2021):
n = (z² × p × q × N) / (e² × (N − 1) + z² × p × q)   (1)

where n is the desired sample size, N is the total population in the selected districts, z is the value of the standard normal variate at the 95% confidence level, obtained from a table of areas under the normal curve (z = 1.96), e is the level of precision (± 5%), p is the sample proportion (p = 22%), and q = (1 − p). The number of households in each selected district, presented in Table 1, is obtained from the Central Statistical Agency (CSA) population projection survey (CSA 2020b). By using the formula in Eq. 1, the sample size is calculated as 263 rural women. Note: among the total respondents, the data from 254 households were used for analysis, and the remaining nine questionnaires were discarded due to incomplete information.

Data collection and analysis

The data were collected between September and November 2020; the effect of the pandemic is continuing and will continue for an unknown period of time. The qualitative data were analyzed through narration, explanation, interpretation, and triangulation. The quantitative data were analyzed using simple descriptive statistical tools, and the factors that aggravated or reduced the effect were examined using a binary logistic regression model. The binary logistic regression model was used to estimate the relationship between socioeconomic variables peculiar to rural women and the effect of COVID-19 on their economic activities. The dependent variable was the status of rural women's economic activities as a result of the COVID-19 pandemic; the responses were 1 (if affected) and 0 otherwise, so it is a binary response variable. Therefore, the logit model was selected for this study and specified in Eq. 2 according to Gujarati (2004) as follows:

Li = ln(Pi / (1 − Pi)) = β1 + β2X2 + β3X3 + ⋯ + βnXn   (2)

where Li is the natural logarithm of the odds, Pi is the probability of being affected, 1 − Pi is the probability of not being affected, β1 is the constant term, β2, β3, …, βn are the coefficients of the explanatory variables, and Xi are the explanatory variables hypothesized to be included in the model (Table 2).

The study employed both qualitative and quantitative data obtained from primary sources. Data on the demographic and socioeconomic characteristics of the respondents, their economic activities, and the effect of COVID-19 on the economic activities of rural women were collected directly from 254 sample rural women using a pretested interview schedule, while observing the recommended COVID-19 protective measures. Key informants from kebele development agents, kebele administrations, the district office of agriculture, the district gender office, the local COVID-19 prevention task force, and other NGOs working on livelihood aspects of rural women were interviewed face to face.

Definitions and measurements of variables

The effect of COVID-19 on the economic activities of rural women is estimated to be influenced by the independent variables presented in Table 2. These variables were selected based on extensive literature reviews.

Demographic and socioeconomic characteristics of respondents

The primary goal of this research is to examine how rural households perceive the effects of COVID-19 on their economic activities. We accomplished this by identifying the major economic activities practiced in the study area. Rural women work in on-farm, off-farm, and non-farm economic activities. Crop production and livestock keeping are the most common economic activities undertaken by rural women in the study area. They also engage in small business trade, daily labor wages, remittances, and livestock trading.
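To make the estimation step concrete, the following is a minimal sketch of how a binary logit of the form specified in Eq. 2 could be fitted in Python with statsmodels. The file name and the variable names (affected, irrigation_use, market_distance, female_headed, remittance) are hypothetical stand-ins for the indicators defined in Table 2, not the authors' actual data or code.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical survey data: one row per sampled woman (n = 254 after cleaning).
# 'affected' is 1 if at least one income source was affected by COVID-19, else 0.
df = pd.read_csv("rural_women_survey.csv")  # assumed file name

# Illustrative subset of the explanatory variables hypothesized in Table 2.
X = df[["irrigation_use", "market_distance", "female_headed", "remittance"]]
X = sm.add_constant(X)          # adds the intercept term (beta_1 in Eq. 2)
y = df["affected"]

logit_model = sm.Logit(y, X).fit()
print(logit_model.summary())

# Odds ratios are obtained by exponentiating the estimated coefficients.
print(np.exp(logit_model.params))
```

A factor greater than 1 in the last line would indicate that the variable raises the odds of a woman's economic activities being affected; a factor below 1 indicates a protective association.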
Table 3 summarizes the demographic and socioeconomic information of the respondents. The respondents' average age is 44, putting them in the active labor force category. This implies that respondents are likely to be involved in various farming practices and other income-generating activities in the rural economy. A possible explanation could be that older rural women have better farming operations, wealth accumulation, and climate knowledge, and draw on better planning and experience from past crises than younger women. Moreover, young women engaged in daily wage and small business activities, which were hit hard by the COVID-19 shutdown. The finding is in conformity with Abdullah et al. (2019), who found that older farmers have more experience with farming practices that can cope with a crisis. On the other hand, the younger generation does not like the rural setting and wants to find employment in urban areas.

The average family size in the study area is 5.21 members, which is higher than the national average of 4.6. The average landholding size of households in the study area is 0.86 ha, which is more than the national average of 0.8 ha (Alemu et al. 2017). Households also travel 91.11 min on average to reach the nearest main market. The COVID-19 pandemic was expected to put pressure on development agents' (DAs') visits to smallholder farmers. The results show that before COVID-19, DAs visited farmers 2.31 times per month on average to support agricultural production and marketing-related activities. During COVID-19, the frequency of DA visits fell to about once a month, roughly half of their contact prior to COVID-19. Oxen ownership is important for practicing agricultural activities; in the study area, the average ox ownership among respondents is 0.8, implying that some smallholders do not own an ox.

About 54.7% of respondents are illiterate, while the remaining 45.3% are literate. About 45.7% of respondent households are female-headed, implying that there is a significant proportion of female-headed households in the study area. Regarding irrigation, 50.8% of respondents are users; using irrigation, smallholder farmers practice various vegetable and fruit production activities. In the study area, only 22.8% of respondents are credit users. The use of credit is expected to cover agricultural inputs such as fertilizers, improved seeds, and farm machinery. One possible explanation is that the use of credit services allows rural women to participate in various income-generating activities, and the revenue derived earlier increases rural women's financial capacity and purchasing power, allowing them to cushion the effects of the pandemic. Other studies show that the use of credit has a positive influence on women's contributions to household welfare (Abdullah et al. 2019; Mandal et al. 2021).

About 68.1% of respondents are members of farmers' cooperatives; they access fertilizer, improved seeds, and basic commodities through the cooperatives. Accordingly, 67.3% of respondents used agricultural inputs such as fertilizer, improved seeds, and agro-chemicals in their agricultural practices. Moreover, only 27.6% of the respondents had access to remittances, whereas the majority (72.4%) did not.
Major economic activities practiced by rural women in the study area

Crop production (95.7%) was the dominant economic activity practiced by rural women, followed by livestock keeping (78.3%), small business trade (46.5%) such as tea and coffee selling, small shops, and street vending in the villages, daily labor wages (44.1%), remittances (27.6%), and trading of livestock (27.2%) (Fig. 2). According to CSA (2020a), the major crops grown in the study area are teff, barley, maize, sorghum, wheat, chickpea, horse bean, and other oilseeds, fruits, and vegetables. As the key informants in the Jamma and Dawachefa districts explained, the districts' small- and micro-scale enterprise offices organized rural women in the villages to create employment opportunities in five basic economic activities, namely construction, small trade, peri-urban agriculture, the service sector, and the manufacturing sector. This implies that rural women practice on-farm, off-farm, and non-farm income activities, with on-farm activities dominating.

Economic activities affected by the COVID-19 pandemic

First, the economic activities of rural women were identified and classified, as shown in Fig. 2. The effect of the COVID-19 pandemic on each economic activity was then examined independently to clearly demonstrate the pandemic's effect on each activity. The survey results indicated that remittances (94.28%), small business trade (94.06%), trading of livestock (91.30%), daily labor wages (84.82%), handcraft (72.73%), and crop production (61.32%) were the economic activities most affected by the pandemic. Among the rural women whose remittance income was affected (94.28%), approximately 77.14% completely ceased receiving income from remittances due to the pandemic (Table 4). Of the rural women whose small business trade was affected (94.06%), 83.91% were forced to cease this economic activity due to the pandemic. Similarly, among the rural women whose livestock trading activities were affected (91.30%), 84.06% ceased livestock trading entirely. The respondents in the Jamma and Dawachefa districts explained that they had been fattening beef cattle and small ruminants for holiday markets such as Easter; however, they incurred losses due to movement restrictions and were forced to cease the business. Among the rural women engaged in daily labor wages, 70.54% were forced to cease this economic activity. Of the rural women engaged in handcraft, 59.09% were forced to cease handcraft making due to a lack of input and output markets. Approximately 61.32% of the crop production activity of rural women was affected by the pandemic, owing to disrupted distribution of the necessary agricultural inputs, a lack of markets for products, and disruption of the planting season. Consistent results are reported by other authors (Pan et al. 2020; Ragasa and Lambrecht 2020; Yazdanpanah et al. 2021). The overall effect of COVID-19 on the economic activities of rural women was calculated by taking the individual economic activities into consideration: for this study, a household's economic activity is said to be affected if at least one income source is affected. Accordingly, 93.7% of rural women were affected by the COVID-19 pandemic (Additional file 1).
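To illustrate how per-activity shares and the overall "affected" indicator described above could be computed, the following is a minimal sketch assuming a hypothetical wide-format response table; the file name and column names are illustrative, not the authors' actual coding.

```python
import pandas as pd

# Hypothetical survey extract: one row per respondent, with a 0/1 'engaged_*'
# column per income source and a matching 0/1 'affected_*' column indicating
# whether that source was affected by COVID-19.
df = pd.read_csv("activity_effects.csv")  # assumed file name

activities = ["remittance", "small_trade", "livestock_trade",
              "daily_labor", "handcraft", "crop_production"]

# Share affected among those engaged in each activity (denominator = engaged).
for act in activities:
    engaged = df[f"engaged_{act}"] == 1
    share = df.loc[engaged, f"affected_{act}"].mean() * 100
    print(f"{act}: {share:.2f}% of engaged respondents affected")

# Overall indicator: a household counts as affected if at least one of its
# income sources was affected (the definition used in the paper).
df["affected_any"] = df[[f"affected_{a}" for a in activities]].max(axis=1)
print(f"Overall affected: {df['affected_any'].mean() * 100:.1f}%")
```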
Our findings show that the majority of rural women perceived that the COVID-19 pandemic had a variety of effects on their economic activities. According to the study, the pandemic primarily impacted women's off-farm and non-farm economic activities, such as small business trade, remittances, labor wages, and livestock trading. This is because certain business activities were banned by the government to avoid social gatherings, while others lacked the necessary inputs and customers due to movement restrictions. Goswami et al. (2021) found that micro, small-, and medium-sized enterprises were the major victims of the pandemic as a result of financial shortages, supply chain disruptions, decreases in demand, and reductions in sales and profit.

(Table 4 note: the percentage affected for each economic activity is calculated from the number of respondents who engaged in the respective activity; the percentage forced to cease is calculated from the respective affected activities. Columns: major economic activities, Affected (%), Forced to cease (%).)

Perception of rural women on the severity of challenges due to the COVID-19 pandemic

The severity of the COVID-19 pandemic's effect was assessed across different themes in the study area. The severity of the problem varies according to the nature of the activity needed to support the economic activities (Table 5). The most severe challenges were difficulty in getting good markets for products (63.40%) and difficulty working in groups (44.5%), followed by difficulty preserving fruits and vegetables due to market shortages (35.80%) and problems securing rural labor (35.00%). The respondents explained the problem of market shortages for their handcraft and agricultural products, especially for perishable horticultural products. The main reasons were customers' fear of contamination from product transactions (especially in the early stage after COVID-19 cases were confirmed), lack of transport, lack of markets due to movement restrictions, and the perishable nature of agricultural products. A study in China explained that the COVID-19 movement restrictions blocked the outflow channels of agricultural products, hindered access to necessary production inputs, destroyed production cycles, and ultimately undermined the production capacity of rural households (Pu and Zhong 2020). Working in Jige (working in a group) is a labor-sharing practice for ploughing, weeding, and harvesting among rural households in the study area; however, the pandemic hit this form of labor gathering hard because of the social distancing rules (Table 5).

On the other hand, it was very difficult to obtain rural wage labor due to movement restrictions. This in turn affected input access, thereby affecting production and productivity in agricultural economic activity. An author in this regard confirmed that the COVID-19 pandemic reduced the availability of farm inputs in Nigeria (Aromolaran and Muyanga 2020). However, another study contradicts that finding, reporting that access to inputs was not significantly affected by the pandemic (Goswami et al.
2021). This difference may be the result of differing levels of infrastructure development and delivery mechanisms. The lockdown could also increase the cost of agricultural inputs: the study found that input prices were affected slightly, moderately, and severely for approximately 16.10%, 24.80%, and 26.40% of respondents, respectively. In contrast, the respondents explained that their income dropped because markets for their products were inaccessible. A study in India reported that over 80% of farmers experienced price reductions and that farm income dropped by 90% during the pandemic (Harris et al. 2020).

The containment measures taken by the government also significantly affected daily labor wages. This implies that daily wage laborers in both off-farm and non-farm economic activities suffered from the pandemic, since they are more resource-poor in terms of land ownership and education, which leads to food and nutrition insecurity (Mottaleb et al. 2020). Studies have revealed that income from remittances was affected by the lockdown (Bisong et al. 2020). Moreover, some migrants were deported and relied on local aid in the study area because of the pandemic.

Results of econometric analysis

This study also examined the relationship between various socioeconomic variables and the effects of COVID-19 on the economic activities of rural women (Table 6).

Household headship

Household headship indicates whether the household is headed by a woman (1) or a man (0). The logit model results show that being a female-headed household is positively associated with the effect of the COVID-19 pandemic on rural women's economic activities. For female-headed households, the odds of being affected by the pandemic increase by a factor of 5.269. One possible explanation is that these rural women lack resources and the power that men provide in their households, and have limited income activities. Respondents also stated that, as a result of the pandemic, female-headed households lost support from neighboring farmers because labor-sharing practices were disrupted. A study revealed that female workers rapidly lost their means to earn income and were confined to their homes due to the pandemic (Sarker 2021).

Use of irrigation

Irrigation use has a negative and significant relationship with the effect of COVID-19 on rural women's economic activities. When a rural woman uses an irrigation facility, the odds of being affected by the pandemic decrease by a factor of 0.148. This is explained by the fact that in moisture-stressed areas like the study area, obtaining moisture through irrigation facilities improves agricultural and horticultural production, which is true for the majority of the surveyed women. This finding is in conformity with Abdullah et al. (2019), who found that the use of irrigation positively affects the welfare of smallholder farmers.

Market distance in walking minutes

The model output revealed that the distance to the nearest market in walking minutes was negatively and significantly related to the effect of COVID-19 on rural women's economic activities. The odds ratio of 0.988 indicates that the odds of being affected decrease slightly with each additional walking minute; in other words, women living closer to a market center were more likely to be affected. This is because rural women who live near a market center may be involved in non-farm economic activities such as daily labor, small trade, and the sale of firewood, all of which were severely impacted by the pandemic. The results of this study are consistent with the findings of Kumar et al. (2020).
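As a brief illustration of how the reported odds ratios relate to the underlying coefficients of the logit in Eq. 2, the snippet below applies the standard exp()/log() relationship; the numbers are those quoted in the text, and the calculation is illustrative rather than a reproduction of the authors' estimation output.

```python
import math

# Odds ratios reported in the text for the binary logit (see Table 6).
odds_ratios = {
    "female_headed": 5.269,
    "irrigation_use": 0.148,
    "market_distance_min": 0.988,
}

# An odds ratio equals exp(beta); recovering beta shows the sign and magnitude
# of the underlying logit coefficient (positive = raises odds of being affected).
for name, oratio in odds_ratios.items():
    beta = math.log(oratio)
    print(f"{name}: OR = {oratio:.3f}, implied coefficient = {beta:+.3f}")
```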
Access to remittances

Relying on remittance income was found to exacerbate the effect of COVID-19 on rural women's income activities; as a rural woman relies on remittance income, the odds of her economic activity being affected by the pandemic change by a factor of 0.135. Most of the households in the study area sent their children (especially girls) to Arabian countries, and the households received gifts from abroad. Furthermore, as mentioned by respondents, sending family members to Arabian countries is a form of competition. Women who have access to remittances can use them to improve their income-generating activities, resulting in more income for their households, or they can spend them directly on household welfare. Other studies show that remittances have a positive influence on women's contributions to household welfare (Falola et al. 2020; Kumar 2019). However, the current finding reveals that the COVID-19 pandemic harmed remittance income because of the lockdown, and that remittances can create a dependency syndrome. This finding is consistent with Chen et al. (2020), who reported that remittances were negatively impacted by the pandemic.

Conclusion

The study investigated the effect of COVID-19 on the economic activities of rural women in Ethiopia. Crop production, livestock rearing, small trade, daily labor wages, remittances, livestock and livestock product trading, productive safety net programs, labor migration, selling firewood, handcraft, and asset renting were all activities undertaken by rural women. The pandemic had varying impacts on all dimensions of the majority of rural women's economic activities, and as a result, significant numbers of rural women were forced to discontinue certain economic activities. The use of credit, inputs, and irrigation services significantly reduced the impact of the pandemic, whereas relying on remittances, being a female-headed household, and market distance exacerbated the impact of the pandemic on the economic activities of rural women. Therefore, the pandemic had a significant impact on rural women's on-farm, off-farm, and non-farm economic activities. Thus, local and national governments and nongovernmental organizations must support rural women's income-generating activities. Revolving funds obtained from donor organizations and the national government could be used to provide assistance, and adequate training should be provided in addition to the funding.

Figure and table captions: Fig. 2 Description of major economic activities in the study area (percentage of respondents from n = 254); Table 1 Proportion of sample size across study districts (measured in numbers); Table 2 Definition and measurement of independent variables; Table 3 Demographic and socioeconomic characteristics of respondents (N = 254); Table 4 Major economic activities affected by COVID-19; Table 5 Severity of challenges during the COVID-19 pandemic; Table 6 Binary logistic regression results (econometric test).
Cartilage oligomeric matrix protein interacts with type IX collagen, and disruptions to these interactions identify a pathogenetic mechanism in a bone dysplasia family. Cartilage oligomeric matrix protein (COMP) and type IX collagen are key structural components of the cartilage extracellular matrix and have important roles in tissue development and homeostasis. Mutations in the genes encoding these glycoproteins result in two related human bone dysplasias, pseudoachondroplasia and multiple epiphyseal dysplasia, which together comprise a "bone dysplasia family." It has been proposed that these diseases have a similar pathophysiology, which is highlighted by the fact that mutations in either the COMP or the type IX collagen genes produce multiple epiphyseal dysplasia, suggesting that their gene products interact. To investigate the interactions between COMP and type IX collagen, we have used rotary shadowing electron microscopy and real time biomolecular (BIAcore) analysis. Analysis of COMP-type IX collagen complexes demonstrated that COMP interacts with type IX collagen through the noncollagenous domains of type IX collagen and the C-terminal domain of COMP. Furthermore, peptide mapping identified a putative collagen-binding site that is associated with known human mutations. These data provide evidence that disruptions to COMP-type IX collagen interactions define a pathogenetic mechanism in a bone dysplasia family. The skeletal dysplasias are a diverse group of genetic diseases affecting primarily the development of the osseous skeleton, and range in severity from relatively mild to severe and lethal forms (1). There are over 200 unique well characterized phenotypes (2), and many of these conditions have been grouped into "bone dysplasia families" on the basis of similar clinical and radiographic presentation with the supposition that they will share a common disease pathophysiology (3). While there has been great progress in identifying many of the genes involved in these diseases (4,5), we still have a very limited understanding of the precise cell matrix pathology of individual phenotypes and the relationship between pathogenetic mechanisms within specific bone dysplasia families. Pseudoachondroplasia (PSACH) 1 and multiple epiphyseal dysplasia (MED) comprise a bone dysplasia family; they are clinically similar diseases characterized by varying degrees of short-limbed dwarfism, joint laxity, and early onset degenerative joint disease (1). Mild and severe forms of PSACH can be recognized (6,7), and MED presents with considerable clinical variability where traditionally the mild Ribbing and severe Fairbank forms have been used to define the phenotypic spectrum (8). PSACH results almost exclusively from mutations in the gene encoding cartilage oligomeric matrix protein (COMP) (9 -11). COMP is a pentameric glycoprotein found in the extracellular matrix (ECM) of cartilage (12), tendon (13), and ligament, where it is thought to play a major role in tissue development and homeostasis through interactions with cells (14) and other ECM components such as type I and type II collagen (15). It is a member of the thrombospondin gene family (16,17) and is a modular protein comprising an amino-terminal domain, calcium binding domains (type II and type III repeats), and a large carboxyl domain situated at the distal termini of the pentamer. 
The majority of the mutations identified in the COMP gene are located within exons encoding the calcium-binding type III repeats and are postulated to produce qualitative defects in the protein and a reduction in Ca2+ binding (18). This results in the retention of abnormal COMP pentamers within the rough endoplasmic reticulum (RER) by an undetermined "protein quality control mechanism" (19-22). Interestingly, type IX collagen has been found to colocalize with abnormal COMP in RER vesicles, but the specificity of these intracellular interactions is unknown. Recently, we were the first to identify mutations in one of the exons encoding the carboxyl terminus, thus confirming an important role for this domain in the structure and/or function of COMP (10). Electron microscopy of labrum ligament from a PSACH patient with a COMP mutation shows severe disruption to collagen fibril orientation, variable fibril diameters, and numerous fused fibrils, confirming an important role for COMP in collagen fibrillogenesis.² Some forms of MED are allelic with PSACH and also result from qualitative defects in COMP (9, 10); however, as a reflection of its clinical variability, MED is genetically heterogeneous and can result from mutations in the genes encoding type IX collagen (COL9A2 and COL9A3) (23-26). Type IX collagen is closely associated with type II collagen fibrils, where it binds in an anti-parallel orientation to type II collagen molecules (27). Type IX collagen is a member of the FACIT (fibril-associated collagen with interrupted triple helices) group of collagens and is a heterotrimer, α1(IX)α2(IX)α3(IX), of polypeptides derived from three distinct genes (COL9A1, COL9A2, and COL9A3). Type IX collagen comprises three collagenous (COL) domains separated by four noncollagenous (NC) domains and has long been thought to act as a molecular bridge between collagen fibrils and other cartilage matrix components (28). The COL3 and NC4 domains project out from the fibril surface, providing ideal sites for these interactions. All of the mutations identified in the COL9A2 (24, 25) and COL9A3 (23-26) genes are in the splice donor or acceptor sites of exon 3. These result in the skipping of exon 3, leading to an in-frame deletion of 12 amino acid residues from equivalent regions of the COL3 domain of the α2(IX) and α3(IX) chains. The restricted localization of these mutations suggests a possible role for this region of the COL3 domain of type IX collagen in the proposed interactions with other components of the cartilage ECM. The observations that mutations in COMP (9, 10) and type IX collagen genes (23-26) result in phenotypes within the MED disease spectrum, that COMP interacts with triple-helical type I and type II collagen (15), and that abnormal collagen fibril morphology is associated with COMP gene mutations provided a rationale for investigating potential interactions between COMP and type IX collagen. In this paper, we show that COMP can interact with type IX collagen. These interactions are mediated through the C-terminal domain of COMP and the noncollagenous domains of type IX collagen.

(Footnote: the studies detailed in this paper were performed in the Wellcome Trust Center for Cell-Matrix Research, which was established by Wellcome Trust Grant 040450/Z/94/Z.)
A putative collagen-binding domain in COMP is located between residues 579 and 595, and mutations in this region are likely to disrupt these interactions. Overall, these data provide evidence that disruptions to molecular interactions between two key components of the cartilage extracellular matrix might produce distinct clinical phenotypes that share a common disease pathophysiology and belong to the same bone dysplasia family. EXPERIMENTAL PROCEDURES Isolation and Purification of Native COMP from Cartilage-Fetal bovine calves were obtained fresh from the local slaughterhouse, and articular cartilage was dissected from the large joints. This tissue was then minced in ice-cold TBS, pH 7.4 (20 mM Tris-HCl, pH 7.4, 150 mM NaCl) and briefly homogenized on ice. The homogenate was stirred for 1 h at 4°C and centrifuged at 10,000 rpm (Beckman JA-10 rotor) for 30 min at 4°C. The pellet was resuspended in ice-cold TBS, and this process was repeated. To extract COMP, the pellet was resuspended in ice-cold TBS containing 10 mM EDTA, briefly homogenized, and stirred overnight at 4°C. The extract was centrifuged as previously and filtered at 4°C by gravity flow through Whatman No. 1 filter paper to remove particulate matter prior to chromatography. The COMP-containing EDTA extract was diluted with an equal volume of 20 mM Tris-HCl, pH 7.4, and applied to a 75-ml DEAE-Sepharose fast flow column (Amersham Pharmacia Biotech) equilibrated with 20 mM Tris-HCl, pH 7.4, 75 mM NaCl. The column was then washed with 2 column volumes of the same buffer plus 2 column volumes of 150 mM NaCl in the same buffer prior to elution of COMP by application of 2 column volumes of 300 mM NaCl in the same buffer. At the elution stage, fractions of 14 ml were collected, and 20-l aliquots were analyzed by SDS-PAGE to identify COMP containing fractions. Appropriate fractions were pooled and concentrated by ultrafiltration on ice using an XM-300 membrane (Amicon). The concentrate was centrifuged to remove precipitated protein, diluted with an equal volume of 20 mM Tris-HCl, pH 7.4, and applied to a 12-ml Heparin-Sepharose CL-6B column (Amersham Pharmacia Biotech) equilibrated in 20 mM Tris-HCl, 150 mM NaCl, pH 7.4, to remove thrombospondin and fibronectin. The flow-through containing COMP was collected and concentrated by ultrafiltration on ice using a YM-10 membrane (Ami-con). The sample was applied to a 24-ml Superose 6 gel filtration column (Amersham Pharmacia Biotech) equilibrated in 20 mM Tris-HCl, pH 7.4, 500 mM NaCl for final purification. Fractions of 0.5 ml were collected, and aliquots of 5 l from each were analyzed for the presence of COMP by SDS-PAGE and staining with Gel-code® (Pierce). Expression of Recombinant C-terminal COMP-Total RNA was extracted from human cartilage using TRIZOL® reagent (Life Technologies, Inc.). Approximately 1 g of total RNA was reverse transcribed using an oligo(dT) primer and superscript reverse transcriptase (Life Technologies, Inc.). An aliquot of cDNA was used for a single step PCR amplification using oligonucleotide primers for the 5Ј-and 3Ј-ends of cDNA encoding the C-terminal domain of COMP (Ct-F1, 5Ј-aggcgcgccgaagtcacgctcacc-3Ј; Ct-R1, 5Ј-tcctcgagcggcttgccgcagctgatg-3Ј). Oligonucleotide Ct-F1 contained an engineered AscI restriction site, while oligonucleotide Ct-R1 contained an engineered XhoI restriction site. 
The polymerase chain reaction yielded a product of the expected size (approximately 700 base pairs), which was digested with AscI and XhoI restriction enzymes and ligated into pSecTag2A vector (Invitrogen) in frame to a polyhistidine tag and Myc epitope. Clones obtained were completely sequenced and a wild-type clone selected for transfection into CHO-K1 cells (ECACC). Cells were grown in Ham's F-12 medium supplemented with 10% fetal calf serum, and transfection was performed using Lipofectin® and Opti-MEM® (Life Technologies, Inc.) according to the manufacturer's protocol. Following transfection, a stably transfected cell line was established by selection with Zeocin® at 250 g/ml (Invitrogen). Resistant cell lines were analyzed for expression of recombinant C-terminal domain of COMP (Ct-COMP) by SDS-PAGE and Western blotting of culture medium using anti-Myc antibody. A clone shown to express recombinant protein was serially diluted to obtain a single cell clone expressing recombinant Ct-COMP, which was then expanded. Purification of Recombinant C-terminal COMP-To simplify purification of Ct-COMP from the culture medium, the concentration of fetal calf serum in the culture medium was gradually reduced from 10 to 2% with no apparent detriment to cell growth/protein production. Cells were grown in 162-cm 2 TC flasks in medium supplemented with 50 g/ml Zeocin®. Medium was harvested and replaced at 4-day intervals, and the cells were maintained at near confluence. Harvested medium (100-ml batches) was chilled on ice, and solid (NH 4 ) 2 SO 4 was added slowly with stirring to a final saturation of 50% and stirred at 4°C for 4 h. The solution was then centrifuged at 10,000 rpm (Beckman JA-10 rotor) at 4°C, and the supernatant was removed. Further solid (NH 4 ) 2 SO 4 was added to a final saturation of 70% and stirred for 4 h, and the solution was centrifuged as previously. The precipitate was resuspended in 20 mM Tris-HCl, pH 8.0, 50 mM NaCl and dialyzed against the same buffer. This was then applied to a 1-ml HiTrap-Q column (Amersham Pharmacia Biotech) equilibrated in the same buffer, and proteins were eluted with a linear NaCl gradient to 500 mM NaCl over 20 column volumes. SDS-PAGE and Western blotting using the anti-Myc antibody identified fractions containing recombinant Ct-COMP. Relevant fractions were pooled and dialyzed against 20 mM Na 2 HPO 4 , pH 7.2, 500 mM NaCl, 10 mM imidazole before being applied to a 1-ml Hi-Trap chelating column (Amersham Pharmacia Biotech) charged with Ni 2 S0 4 and equilibrated in the same buffer. Contaminating proteins were washed away, and His-tagged recombinant Ct-COMP was eluted with a linear imidazole gradient to 500 mM over 20 column volumes. 1-ml fractions were collected and analyzed for presence of recombinant Ct-COMP by SDS-PAGE and staining with Gel-code® reagent. Purification of Native Type IX Collagens from Cartilage and Vitreous-Dr. Rod Watson kindly provided type IX collagen isolated from chick sternal cartilage. Briefly, sterna from 50 dozen 17-day chick embryos were incubated in culture medium containing Dulbecco's modified Eagle's medium supplemented with 64 g/ml ␤-amino proprionitrile, 50 g/ml ascorbic acid, a 1 Ci/ml concentration of a uniformly labeled mixture of 14 C-L-amino acids, and 10 Ci/ml Na 2 35 SO 4 . Type IX collagen was extracted from the sterna in 50 mM Tris-HCl buffer (pH 7) containing 0.2 M NaCl and chromatographed on two consecutive columns of DEAE-cellulose. 
The type IX collagen was eluted on a gradient of NaCl, pressure-concentrated, and stored frozen until needed (29,30). Pepsin-resistant fragments of types II and IX collagen were extracted from bovine vitreous and articular cartilage, respectively, by pepsin digestion and salt precipitation according to established protocols (31). The pepsin-resistant high molecular weight (HMW) and low molecular weight (LMW) fragments of type IX collagen obtained by this method were separated by molecular sieve chromatography on a Superose-6 column (Amersham Pharmacia Biotech). The short form of type IX collagen was a kind gift from Dr. Kees Jan Bos and was purified from bovine adult vitreous by established protocols (32). SDS-PAGE and Western Blot Analysis-COMP and type IX collagen samples were analyzed in the presence (reduced) or absence (nonreduced) of 100 mM dithiothreitol by SDS-PAGE and stained with Gel-code® (Pierce). Alternatively, proteins were transferred from gels to nitrocellulose by standard protocol, and the nitrocellulose was Western blotted with the relevant primary antibody (1:1000 dilution) followed by the secondary horseradish peroxidase-or alkaline phosphatase-conjugated secondary antibody (1:1000). For each antibody incubation, the diluent was composed of phosphate-buffered saline, 0.1% Tween 20 and 2% (w/v) marvel. Following incubation with the secondary antibody, detection was performed using the enhanced chemiluminescence Western blotting analysis system (Amersham International) or Sigma FAST 5-bromo-4-chloro-3-indolyl phosphate/nitro blue tetrazoliumbuffered substrate tablets (Sigma), according to the manufacturer's protocols. Anti-Myc monoclonal antibody was obtained from Roche Molecular Biochemicals, and polyclonal antibodies to type IX collagen were kind gifts from Dr. Paul Bishop. Horseradish peroxidase-and alkaline phosphatase-conjugated secondary antibodies were obtained from Sigma. Electron Microscopy of COMP-Type IX Collagen Complexes-Either the long form of type IX collagen purified from chick sternal cartilage or the short form lacking the N-terminal NC4 domain purified from bovine vitreous were mixed with native COMP at a molar ratio of 1:1 and dialyzed at 4°C against 20 mM Tris-HCl buffer (pH 7.4), containing 150 mM NaCl and 1 mM ZnCl 2 . A modified version of the mica sandwich technique (33) was used to prepare 6-l aliquots of each sample for platinum carbon rotary shadowing using the Cressington CFE-50B and Nickel 400 TEM grids. Replicas created by this method were studied using a JEOL 1200EX transmission electron microscope operated at an accelerating voltage of 100 kV. Electron micrographs were taken on Agfa Scientia 23D56 electron microscope film and then scanned onto a PC using a Polaroid Sprintscan 45 scanner, in preparation for image reproduction or digitized for analysis. Micrographs obtained were digitized from the photographic film using a monochrome TV camera (Bosch analogue type YK91D) for image analysis purposes. The Semper 6 electron image analysis package (Synoptics Ltd., Cambridge, UK) was used to measure along the length of the collagen molecule to the point of COMP binding, starting at the prominent globular NC4 domain of type IX collagen. Measurements were taken, and the distribution of bound COMP molecules was analyzed by graphing the number of bound molecules relative to defined distances (10-nm intervals) along the collagen molecule. 
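As an illustration of the binning analysis described above, the following is a minimal sketch (not the authors' Semper 6 workflow) of how measured binding positions could be grouped into the 10-nm intervals used for the frequency histogram; the example distances are made up and serve only to show the calculation.

```python
import numpy as np

# Hypothetical binding positions (nm from the NC4 globule) measured on
# individual COMP-type IX collagen complexes; the real values came from
# digitized rotary-shadowing micrographs.
positions_nm = np.array([12.0, 48.5, 51.2, 88.7, 92.3, 95.0, 130.4, 168.9])

# Type IX collagen is roughly 170 nm long; bin positions into 10-nm intervals.
bins = np.arange(0, 180, 10)
counts, edges = np.histogram(positions_nm, bins=bins)

for lo, hi, n in zip(edges[:-1], edges[1:], counts):
    if n:
        print(f"{lo:3.0f}-{hi:3.0f} nm: {n} bound COMP molecule(s)")
```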
Surface Plasmon Resonance Assay-Protein-protein interaction studies and peptide competition assays were carried out on the BIAcore 1000 or 3000 systems (BIAcore AB, Sweden) as indicated in the figure legends. Native COMP, recombinant Ct-COMP, and BSA were immobilized at 25°C onto different flow cells of a CM5 sensor chip. The chip surface was first activated by injection of 50 l of a 1:1 mixture of 0.1 M N-hydroxysuccinimide and 0.4 M N-ethyl-NЈ-(dimethylaminopropyl)carbodiimide. COMP (100 l of 15 g/ml) in 10 mM glycine, 15 mM NaCl, pH 3.0 (optimal buffer components as determined by trial binding experiments) was immobilized at a flow rate of 2 l/min. Recombinant Ct-COMP (100 l of 2 g/ml) was immobilized under the same conditions, and an additional flow cell was prepared as a blank by immobilization of BSA (15 g/ml) under the same buffer conditions. Remaining activated groups on each flow cell were blocked by injection of 70 l of 1.0 M ethanolamine HCl, pH 8.5, at a flow rate of 10 l/min. The system was then primed with 20 mM Tris-HCl, pH 7.4, 150 mM NaCl containing 1 mM ZnCl 2 . In some binding studies, native type IX collagen at 5 g/ml, HMW and LMW fragments of type IX collagen at 45 nM, and type II collagen at 200 g/ml in the same buffer were injected over the flow cell surface at a flow rate of 10 l/min and a sample volume of 20 l. In peptide competition assays, collagens were preincubated with peptides at various molar ratios prior to injection under the same conditions. In all studies, upon attainment of a steady sensorgram response, the tightly bound proteins were dissociated by injection of 5 mM EDTA in the same buffer ( Fig. 1A; otherwise, data not shown). The BIAcore evaluation software version 3.0 was used to compare sensorgram readings for the different experiments. Purification of Native Proteins and Expression of Recombinant C-terminal Domain of COMP-COMP, intact type IX collagen (long and short forms), and the HMW and LMW pepsin-resistant fragments of type IX collagen were extracted from tissue and purified. COMP appeared as a single band when analyzed by SDS-PAGE and Gel-code® staining (data not shown). The HMW and LMW pepsin-resistant fragments of type IX collagen were extracted from bovine cartilage by acid extraction and salt precipitation. Molecular sieve chromatography of the extract resulted in highly pure preparations of these fragments as judged by SDS-PAGE and Gel-code® staining (Fig. 1). The identities of the fragments were also confirmed by Western blot using type IX collagen-specific polyclonal antibodies (data not shown). To facilitate investigations into the interactions of the Ct-COMP, we developed an expression system for the production of soluble and recombinant Ct-COMP to augment the study of native COMP isolated from fetal bovine tissues. A mammalian expression system using a pSecTag2-Ct-COMP construct was used to produce glycosylated and His-tagged Ct-COMP from CHO-K1 cells. The recombinant protein was isolated from the culture medium by ion exchange and Ni 2ϩ affinity liquid chromatography (Fig. 1). The typical yield of pure recombinant protein from a 100-ml culture was 4 g. Interactions between Native COMP and the Long Form of Type IX Collagen Viewed by Rotary Shadowing Electron Mi- croscopy-Previously, Rosenberg et al. (15) demonstrated that native COMP binds to type I procollagen, type I collagen, type II procollagen and type II collagen via its C-terminal globular domain in the presence of Zn 2ϩ or Ni 2ϩ . 
To determine whether COMP could interact with the long form of type IX collagen under these conditions, we incubated the purified proteins at a 1:1 molar ratio in TBS supplemented with 1 mM ZnCl 2 . The complexes were prepared for rotary shadowing electron microscopy using a Cressington CFE-50B instrument, and the replicas were viewed by transmission electron microscopy (Fig. 2). The COMP molecules exhibited the characteristic five-armed structure in which the C-terminal domains were clearly visible as globules at the distal ends of the arms. Molecules of type IX collagen appeared as 170-nm-long rods, often exhibiting a pronounced kink about two-thirds along the molecule. In addition, type IX molecules exhibited a distinctive globule, corresponding to the NC4 domain of the molecule. Images of COMP-type IX collagen complexes were digitized, and the location of the binding site of COMP along each type IX collagen molecule was determined by image analysis. Typically, a type IX collagen molecule was seen to bind one COMP molecule. In some instances, two molecules of COMP could be seen interacting with one type IX collagen molecule. For image analysis, we digitized 172 images of type IX collagen molecules that had one or more bound COMP molecules. To determine the sites of COMP binding, we only used images in which both the COMP and the type IX collagen molecules could be visualized easily (107 in total). In all complexes examined, COMP interacted with type IX collagen via its C-terminal domain, consistent with a single binding site on COMP. However, it was apparent that type IX collagen had more than one binding site for COMP. The results demonstrated that in 80% of the complexes, the C-terminal domain of COMP specifically bound to one of four distinct sites on the type IX collagen molecule (Fig. 3). Careful measurements suggested that these sites corresponded to the noncollagenous (NC1, NC2, NC3, and NC4) domains of type IX collagen (Fig. 3, A-D). The pronounced kink in the type IX collagen molecule at the NC3 domain greatly facilitated assignment of molecular polarity (N to C) of the molecule and precise length and distance measurements. The NC2 domain had the highest frequency of occupation, with 36 from 107 complexes having COMP bound to this domain. The frequencies of binding to the other NC domains were relatively similar to each other (Fig. 3E). We noticed that peaks in the frequency histogram were relatively sharp at the NC1, NC2, and NC4 binding sites (Fig. 3E). However, the peak corresponding to the NC3 domain was relatively broad and ranged from 30 to 70 nm, with a median at 50 nm from the NC4 domain. The broadening of this peak was most probably the result of difficulties in determining the precise binding site because of the abrupt kink in the type IX collagen molecule at the site of binding. Interactions between Native COMP and the Short Form of Type IX Collagen-The similar dimensions of the NC4 domain of type IX collagen and the C-terminal domain of COMP in rotary-shadowed images made it difficult in some images to assign, unequivocally, binding of COMP to the NC4 domain or to a region in the collagen molecule close to the NC4 domain. Therefore, we repeated the rotary shadowing electron microscope experiments with the short form of type IX collagen isolated from bovine vitreous, which lacks the NC4 domain (Fig. 4). 
The NC4 domain is derived exclusively from the ␣1(IX) chain and through alternative splicing and the subsequent use of an alternative start codon in the COL9A1 gene the vitreous form of type IX collagen lacks the entire ␣1(IX) NC4 domain (32). The ␣1(IX) NC4 was clearly missing in all of the type IX collagen molecules observed, and analysis of 25 COMP-type IX collagen complexes failed to identify any COMP molecules binding to the amino terminus of type IX collagen. This confirmed that an interaction with COMP is mediated specifically through the ␣1(IX) NC4 domain. Bound COMP molecules had a similar distribution between the NC1-3 (Fig. 4E) domains to that seen for the long form of type IX collagen. COMP-Type IX Collagen Interactions Studied by Real Time Biomolecular Interaction Analysis- In further experiments to examine the association of COMP and type IX collagen, we used real time biomolecular interaction analysis. The data from these studies provided qualitative evidence that the C-terminal domain of COMP specifically interacted with the noncollagenous domains of type IX collagen. In the first experiment, native COMP was bound to the surface of a CM5 sensor chip, and when buffer containing type IX collagen was injected onto the chip surface a characteristic sensorgram was recorded (Fig. 5A). There was an initial sharp rise in response units (RU) as a result of refractive index change, and this was followed by a slower association phase typical of reversible binding, which reached a plateau on saturation of the binding sites of COMP (labeled A in Fig. 5A). There was then a rapid decrease in RU signal after buffer change (refractive index change), which was followed by the dissociation of type IX collagen that was bound with low affinity (labeled D in Fig. 5A). Finally, an asymptotic phase was observed, which was illustrative of high affinity reversible binding of type IX collagen to the immobilized COMP (labeled B in Fig. 5A). If the BIAcore analysis had been allowed to continue for longer, the sensorgram would eventually have returned to base line once all of the type IX collagen had dissociated. High affinity binding of type IX collagen to COMP was immediately abolished upon the injection of 5 mM EDTA, confirming that the interaction was reversible and cation (Zn 2ϩ )dependent (data not shown). This experiment was repeated using several different concentrations of type IX collagen (7.5-15 g/ml), and the level of binding was seen to be proportional to the analyte concentration (data not shown). To confirm the specificity of these interactions, we examined the ability of two ECM proteins (fibronectin and laminin) and four other proteins (chosen at random) to bind to COMP. The injection of fibronectin, laminin, BSA, ADH (not shown), ␤-amylase (not shown), and aproferritin (not shown) over COMP did not exhibit the typical association or dissociation curves (surface plasmon resonance effect) that were observed with type IX collagen (Fig. 5A). In further control experiments, the injection of COMP, type IX collagen, fibronectin, laminin, alcohol dehydrogenase, ␤-amylase, and aproferritin over BSA bound to the sensor chip failed to show evidence of binding (data not shown). We considered the possibility that our rotary shadowing EM measurements could have been insensitive to the binding of COMP to the triple-helical regions of type IX collagen immediately adjacent to the noncollagenous domains. 
Therefore, we performed experiments using BIAcore to qualitatively determine whether there were specific interactions between COMP and pepsin-digested type IX collagen molecules (in which the noncollagenous domains had been proteolytically cleaved). In this series of experiments, we bound COMP to the sensor chip and injected buffer containing purified pepsin-resistant type IX collagen samples, comprising separately the HMW (COL2-COL3) or LMW (COL1) fragments, respectively. In these experiments, the characteristic association and dissociation curves that were seen with intact type IX collagen were seen only, and to a lesser extent, with the HMW fragment and not the LMW fragment (Fig. 5B). In this experiment, the injection of type IX collagen samples was terminated before the attainment of binding saturation, which was done to conserve the limited quantities of type IX collagen we had available. The inability of the LMW fragment to bind to COMP confirms that the NC1 and NC2 domains of type IX collagen contain binding sites for COMP. In the case of the HMW fragment, it has been previously determined that the NC3 domains of human and avian α1(IX) and α3(IX) chains are resistant to digestion by pepsin, while the α2(IX) chain is sensitive to pepsin digestion, resulting in a cut between the COL2 and COL3 domains of α2(IX) (34, 35). While the extent of cleavage in bovine type IX collagen is unclear, the limited ability of the COL2-NC3-COL3 domain to bind to COMP suggests that the binding site in the NC3 domain is not fully disrupted. Overall, these qualitative data confirmed the rotary shadowing EM observations that binding to COMP was mediated through the noncollagenous domains of type IX collagen. In further experiments, recombinant Ct-COMP and BSA were immobilized onto different flow cells of the same sensor chip, and native type IX collagen was injected over the surface (Fig. 5C). Typical association (labeled A in Fig. 5C) and dissociation (labeled D in Fig. 5C) phases were seen when type IX collagen was injected over Ct-COMP, indicating that there was reversible binding between type IX collagen and Ct-COMP. The binding of type IX collagen to Ct-COMP was abolished after the addition of 5 mM EDTA (data not shown). Once again the injection of analyte was terminated prior to attainment of binding saturation. These data, in combination with those derived from rotary shadowing electron microscopy experiments, confirmed that COMP binding to type IX collagen was mediated through the C-terminal domain of COMP. The lower response registered on recombinant Ct-COMP (~30 RU) compared with native COMP (~55 RU) resulted from a combination of two factors. First, there was a smaller amount of Ct-COMP bound to the chip surface. This was as a consequence of our expression protocol, which was optimized to produce soluble and glycosylated Ct-COMP rather than maximizing expression. Second, pentameric native COMP, consisting of five C-terminal domains per molecule, is presumably more likely to present these domains for favorable interactions with type IX collagen at distances away from the chip surface. Conversely, recombinant Ct-COMP molecules would be bound directly to the chip surface in all cases and would therefore be less accessible for interactions with type IX collagen.
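The qualitative shape of the sensorgrams described above (association toward a plateau, then dissociation after the buffer change) can be illustrated with an idealized 1:1 Langmuir binding model. The rate constants and maximal response used below are arbitrary illustrative values, and the model deliberately ignores the bulk refractive-index jumps and the biphasic (low- plus high-affinity) behavior noted in the text.

```python
import numpy as np

def sensorgram(t, c_analyte, k_on, k_off, r_max, t_stop):
    """Idealized 1:1 (Langmuir) SPR sensorgram in response units (RU).

    Association for t <= t_stop (analyte at concentration c_analyte flowing
    over the immobilized ligand), then dissociation in running buffer.
    """
    k_obs = k_on * c_analyte + k_off
    r_eq = r_max * k_on * c_analyte / k_obs            # plateau level at saturation
    assoc = r_eq * (1.0 - np.exp(-k_obs * t))          # association phase
    r_stop = r_eq * (1.0 - np.exp(-k_obs * t_stop))    # response when injection ends
    dissoc = r_stop * np.exp(-k_off * (t - t_stop))    # dissociation phase
    return np.where(t <= t_stop, assoc, dissoc)

# Illustrative parameters only (not fitted values from these experiments).
t = np.linspace(0.0, 400.0, 801)                        # seconds
ru = sensorgram(t, c_analyte=45e-9, k_on=1e5, k_off=5e-3,
                r_max=60.0, t_stop=120.0)
print(round(ru.max(), 1), round(ru[-1], 1))             # peak response, decaying tail
```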
Identification of a Potential Collagen Binding Site in COMP-To identify the collagen binding site in COMP, we preincubated type IX collagen with peptides synthesized to regions of the C-terminal domain in which disease-causing mutations had been previously identified. These mutations include E583K (36), T585R and T585M (10), H587R (37), and R718P.³ Preincubation of type IX collagen with increasing molar excesses of peptide 579-595 reduced proportionally the reversible binding of type IX collagen to COMP (Fig. 6A). In contrast, peptide 713-723 at the highest molar excess (10,000-fold) had little effect on binding (Fig. 6B). These data suggest that a putative collagen-binding site, located in the C-terminal domain of COMP, is likely to reside between residues 579 and 595.

Specificity of the Putative Collagen Binding Site-Previously, COMP has been shown to bind to type I and type II collagen in the presence of 0.25-2.5 mM Zn²⁺ (15). To determine whether the binding of COMP to type II collagen and type IX collagen was mediated through the same site in the C-terminal domain of COMP, we preincubated type II collagen with the two peptides prior to passing them over COMP immobilized on the BIAcore sensor chip. Analysis of the sensorgram (Fig. 6C) demonstrated that while preincubation with peptide 713-723 had no effect on the levels of binding of type II collagen to COMP, preincubation with peptide 579-595 (1000-fold molar excess) reduced binding by a similar extent to that seen with type IX collagen (Fig. 6, compare A and C). These data suggest that the binding of COMP to type II collagen and type IX collagen is mediated through either the same or a closely located binding site.

FIG. 5. BIAcore sensorgram showing type IX collagen binding to COMP detected by differences in surface plasmon resonance measured as response (RU) using a BIAcore 1000 instrument. A, COMP (5 μg/ml) was immobilized onto a flow cell of a CM5 sensor chip, and various proteins (all at 45 nM) were injected over the chip surface. Type IX collagen reversibly bound to COMP during the association phase (A), but once the injection of type IX collagen ceased there was dissociation of type IX collagen (D) until an asymptotic phase was reached (B). There was no evidence of binding with BSA, fibronectin, or laminin, which only exhibited a refractive index change due to the bulk shift of solvent. B, COMP (15 μg/ml) was immobilized onto a flow cell of a CM5 sensor chip, and type IX collagen and pepsin-resistant type IX collagen fragments, HMW (COL2-3) or LMW (COL1) (all at 45 nM), were injected over the surface for 120 s as indicated by the arrow. Type IX collagen was shown to bind strongly (as previously shown), with only a small level of binding apparent with the HMW fragment and no binding with the LMW fragment. C, recombinant Ct-COMP (2 μg/ml) was immobilized onto a flow cell of a CM5 sensor chip, and type IX collagen (5 μg/ml) was injected over the surface for 120 s as indicated by the arrow. Reversible binding was seen during the association phase (A). Once injection of type IX collagen ceased, there was an initial dissociation (D) of type IX collagen from Ct-COMP until an asymptotic phase (B) was reached. In each experiment, binding of type IX collagen to BSA was measured in a separate flow cell on the same sensor chip under the same conditions.

FIG. 6. Peptide mapping of collagen binding sites in the C-terminal domain of COMP. COMP (15 μg/ml) was immobilized onto a flow cell of a CM5 sensor chip. A, type IX collagen (5 μg/ml) alone and type IX collagen (5 μg/ml) that had been preincubated with increasing amounts of peptide 579-595 (GVDFEGTFHVNTVTDDD) were injected over the chip surface for 120 s as indicated by the arrow. Molar excesses of 10-, 100-, 500-, and 1000-fold of peptide 579-595 were used. During injection of type IX collagen (with or without peptides), there was an association phase characteristic of reversible binding. Once injection of type IX collagen (with or without peptides) ceased, there was dissociation of type IX collagen (D), leading into an asymptotic phase (B). B, type IX collagen (5 μg/ml) alone or type IX collagen (5 μg/ml) that had been preincubated with a 10,000-fold molar excess of peptide 713-725 was injected over the chip surface for 120 s as indicated by the arrow. C, type II collagen (200 μg/ml) alone or type II collagen (200 μg/ml) that had been preincubated with a 1000-fold molar excess of peptide 579-595 or a 10,000-fold molar excess of peptide 713-725 was injected over the chip surface for 120 s as indicated by the arrow. In each experiment, binding of type IX or type II collagen to BSA was measured in a separate flow cell on the same sensor chip under the same conditions.

DISCUSSION

COMP and type IX collagen are important structural components of the cartilage ECM with fundamental roles in collagen fibrillogenesis, tissue development, and homeostasis. We have used rotary shadowing electron microscopy and BIAcore analysis to show that COMP can interact with native type IX collagen. These interactions are mediated through the C-terminal domain of COMP and the noncollagenous domains of type IX collagen (NC1-4). Using BIAcore, we demonstrated qualitatively that COMP can interact with native type IX collagen and to a certain extent the pepsin-derived HMW fragment but not the LMW fragment. These data collectively suggest that each of the noncollagenous domains of type IX collagen is involved in interactions with COMP. The use of recombinant Ct-COMP in BIAcore studies confirmed the rotary shadowing EM findings that the C-terminal domain of COMP mediates interaction with type IX collagen. Furthermore, the use of peptide inhibition assays aided in the identification of a putative collagen-binding site between residues 579 and 595 of COMP. This region of COMP has previously been shown to contain mutations resulting in skeletal dysplasia, providing a direct link between these fundamental interactions and human disease. Overall, these findings support recent data indicating that the C-terminal domain of COMP can bind to collagen I/II and procollagen I/II molecules in the presence of divalent cations (15). Using a solid-phase binding assay, Rosenberg and colleagues determined that interactions between COMP and collagen I/II displayed a preference for Zn²⁺, with binding saturated at 0.5 mM. They subsequently characterized these interactions further using BIAcore and rotary shadowing transmission electron microscopy with 1 mM Zn²⁺ (15). We performed similar experiments to study interactions between COMP and type IX collagen, which appear also to be mediated by the C-terminal domain of COMP in the presence of 1 mM Zn²⁺. Whereas Rosenberg and co-workers demonstrated that COMP bound to the collagenous regions of types I and II collagen at four sites located at 0 (C-terminal), 126, 206, and 300 nm (N-terminal) (15), we have shown that COMP appears to bind exclusively to the noncollagenous domains of type IX collagen.
We used BIAcore analysis to confirm that type II collagen interacted with COMP in our system and then used peptide inhibition assays to show that this binding could also be specifically disrupted with peptide 579 -595 (but not 713-725). These data suggest that type I, II, and IX collagen interactions are mediated through the same (or closely located) region of the C-terminal domain of COMP. The majority of mutations in the COMP gene are within exons encoding the calcium binding (type III repeat) domain (9 -11) and are predicted to result in qualitative defects to COMP (18) leading to the retention of misfolded protein in the RER, a matrix deficient in COMP, and ultimately cell death (21). Interestingly, analysis of chondrocytes from PSACH cartilage shows the accumulation of type IX collagen along with COMP in the RER, suggesting that interactions, possibly specific, occur between these molecules prior to secretion (19). During pentamerization, by random association, 97% of all pentamers will contain at least one abnormal monomer. The relative effect of different numbers of abnormal monomers on COMP pentamer secretion has yet to be determined, but im-munohistochemical analysis of cartilage has shown that there is a significant reduction in the level of extracellular COMP that would be available for interactions with collagen (19). Interestingly, electron microscopy of labrum ligament from a PSACH patient with a mutation in the type III domain of COMP (G465S) shows a generalized disruption to tissue organization and abnormal collagen fibril morphology. Longitudinal sections show severe disruption to the orientation of collagen fibrils and differences in individual fibril thickness, whereas transverse sections show variable fibril diameter, irregular fibril surface, and numerous fused fibrils. 2 Overall, these data confirm a role for COMP in collagen fibrillogenesis and matrix assembly. We hypothesize that disruptions to COMP-type IX collagen interactions are a secondary component of the pathophysiology of the "PSACH-MED bone dysplasia family." Disruption to these interactions can occur by one of two mechanisms; either mutations occur within the binding sites of these molecules or there is a reduction in the amount of one (or both) of these molecules in the ECM of cartilage. We have shown that a collagen binding site is located between residues 579 and 595, a region of COMP previously shown to contain mutations that cause either PSACH or MED. Four mutations have been identified within five residues, E583K, T585R, T585M, H587R, (10,36,37), suggesting that the motif EGTFH plays an important role in COMP-collagen interactions. We suggest that mutations in exons encoding the C-terminal domain of COMP are likely to have a less deleterious effect on the structure and folding of abnormal COMP, therefore not preventing its secretion into the extracellular matrix. In this case, interactions with type IX collagen are likely to be disrupted by mutations in the collagenbinding site of COMP. The cell matrix pathology of MED, resulting from either COL9A2 or COL9A3 mutations, is unresolved. Previously, analysis of cartilage ultrastructure has suggested that there was no retention of abnormal type IX collagen within the RER of chondrocytes from some affected patients (38). However, recent data have shown that inclusion bodies can be present with a lamellar structure similar to that seen in chondrocytes from patients with COMP gene mutations (23). 
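The 97% figure quoted above follows from a simple counting argument, assuming (as stated) random and independent association of mutant and wild-type subunits in a heterozygote, each incorporated with probability 1/2:

\[
P(\text{all five subunits wild type}) = \left(\tfrac{1}{2}\right)^{5} = \tfrac{1}{32},
\qquad
P(\text{at least one abnormal monomer}) = 1 - \tfrac{1}{32} = \tfrac{31}{32} \approx 97\%.
\]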
Finally, we have reported that a specific mutation in COL9A2 results in the degradation of COL9A2 mRNA from the mutant allele, and undoubtedly this would result in an overall reduction in type IX collagen within the ECM (24). Collectively, these data suggest that although several different mechanisms contribute to the pathophysiology of MED, all of them are likely to result in an ECM deficient in type IX collagen, thus disrupting important interactions between COMP and type IX collagen. In conclusion, we have shown that COMP interacts with type IX collagen and have identified the domains in each of these proteins that mediate these interactions. High resolution structural studies will be needed to map the binding sites with precision. Disruption to these interactions is likely to define a pathogenetic mechanism in a human bone dysplasia family, and this finding has major implications in understanding the cell matrix pathology of human skeletal dysplasias.
Is Bohr's quantization hypothesis necessary?
We deduce the quantization of the atomic orbit in Bohr's hydrogen model without using his hypothesis of angular momentum quantization. We show that his hypothesis is nothing more than a consequence of Planck's energy quantization.

I. INTRODUCTION
The existence of an atomic nucleus was confirmed in 1911 by E. Rutherford in his classic scattering experiment [1]. Before that time, it was believed that an atom was something like a positively charged dough with negatively charged raisins scattered here and there within the dough, or an ice-cream with chocolate chip flakes in it, where the ice-cream would be the positive charge (protons) and the chocolate chip flakes the negative charge (electrons); this was the atomic model proposed by J.J. Thomson in 1904. However, this atomic model had many problems, and it became untenable as E. Rutherford showed the inconsistencies of such a model in the face of his experiments on alpha particle (ionized helium) scattering by thin sheets of gold as targets. The main objection to that model was that scattering experiments indicated a far more "dilute" type of matter constituents and "empty" space within objects. Then, Rutherford himself proposed the orbital model for the atom, in which there was a central nucleus populated by positively charged particles - protons - and negatively charged particles - electrons - moving around the nucleus in orbital trajectories, similar to the solar system with the sun at the center and planets orbiting it, described by the mechanics of celestial bodies acted on by central forces among them. Later on, the concept of neutral particles - neutrons - within the nucleus came to be added to the model. The planetary model, however, was not free of problems. The main objection was that in such a model, electrons moving around the nucleus would radiate energy and therefore, classically, such an atom would collapse into itself after the electrons had radiated all their energy. Therefore, the model proposed by Rutherford had to be modified, and here comes the contribution of N. Bohr, who proposed the hypothesis of quantization of electron orbits, making it possible to better understand the properties of atoms and of the stuff of which they are made. A comment may be in order here: at that time, the atomic nucleus was understood as a "storage" of the mass and positive charge of the atom. There was no need to know, nor even a hint as to, the internal structure of the nucleus [2]. These concepts came into play afterwards, as experiments got more sophisticated and more deeply concerned with the internal structure of atoms.

II. THOMSON'S MODEL
In the model proposed by J.J. Thomson [3] in 1904, the atom was considered to be like a fluid with a continuous spherical distribution of positive charge in which electrons with negative charges were embedded, in a number sufficient to neutralize the positive charge. This model had an implicit underlying assumption: that of the existence of stable configurations for the electrons around which they would oscillate. However, according to classical electromagnetic theory, there can be no stable configuration in a system of charged particles if the only interaction among them is of an electromagnetic character.
Moreover, since any electrically charged particle in an accelerated movement emits electromagnetic radiation, his model had an additional hypothesis that the normal modes for the oscillating electrons would have the same frequencies as those observed associated with the lines of the atomic spectrum. However, there was not found any configuration for the electrons for any atom whose normal modes had any one of the expected frequencies. Therefore, Thomson's model for the atom was abandoned because there was no agreement between its assumptions/predictions and the experimental results obtained by H. Geiger and E. Marsden [4]. III. THE PLANETARY ATOMIC MODEL The discoveries that ocurred by the end of XIX century led the physicist Ernest Rutherford to do scattering experiments that culminated in a proposal for the planetary model for atoms. According to this model, all positive charge of a given atom, with approximately 99% of its mass, would be concentrated in the atomic nucleus. Electrons would be moving around the nucleus in circular orbits and these would be the carriers of the negative charges. Knowing that the charge of an electron and the charge of a proton are the same in modulus, and that the nucleus has Z protons, we can define the charge of the nucleus as Ze − . Experimentally we observe that in an atom the distance r between the electron orbit and the nucleus is of the order 10 −10 m. In this section, we build the planetary model for the atom and analyse its predictions compared to experimental data. Using Coulomb's law and the centripetal force acting on the electron in its circular orbit where we have used the shorthand notation |e − | = e. From (4) we can estimate the radius for the electron orbit, which means that the radius depends on the total number of protons in the nucleus, Z, and also on the electron's velocity. Here we can make some definite estimates and see whether our estimates are reasonable, i.e., agrees or does not violate experimental data. According to the special theory of relativity, no greater velocity can any particle possess than the speed of light. More precisely, for particles with mass like electrons, we know that their velocity is where we have taken Z = 1 and didn't consider the relativistic mass. This means that we have a lower limit for the radius of an electron's orbit around the central nucleus, which is consistent with the experimental observed data where radius of electronic orbits are typically of the order 10 −10 m. A. Limitations of the model Even though this model could explain some features of the atomic structure concerning the scattering data, there was nonetheless problems that could not be explained just by classical mechanics analysis. Since protons and electrons are charged particles, electromagnetic forces do play their role in this interaction, and according to Maxwell's equations, an accelerated electron emits radiation (and therefore energy), so that electrons moving around the nucleus would be emitting energy. This radiated energy would of course lead to the downspiraling of electrons around the nucleus until hitting it. Classical theoretical calculations done predicted that all electrons orbiting around a nucleus would hit it in less than a second! However, what we observe is that there is electronic stability, and therefore the model had to be reviewed. IV. 
BOHR'S HYPOTHESIS
Analysis of the hydrogen spectrum, which showed that only light at certain definite frequencies and energies was emitted, led Niels Bohr to postulate that the circular orbit of the electron around the nucleus is quantized, that is, that its angular momentum could only have certain discrete values, these being integer multiples of a certain basic value [6]. This was his "ad hoc" assumption, introduced by hand into the theory. In 1913, therefore, he proposed the following for the atomic model: 2. The electrons orbiting such a nucleus had discrete quantized energies, which meant that not any orbit is allowed but only certain specific ones satisfying the energy quantization requirements; 3. The allowed orbits also would have quantized or discrete values for orbital angular momentum, according to the prescription |L| = nℏ, where ℏ = h/2π and n = 1, 2, 3, ..., which meant the electron's orbit would have a specific minimum radius, corresponding to the angular momentum quantum number n = 1. That would solve the problem of collapsing electrons into the nucleus. Two corollaries follow from these assumptions: first, from item 2 above, the laws of classical mechanics cannot describe the transition of an electron from one orbit to another; and second, when electrons do make a transition from one orbit to another, the energy difference is either supplied (transition from lower to higher energy orbits) or carried away (transition from higher to lower energy orbits) by a single quantum of light, the photon, which has the same energy as the energy difference between the two orbits. In this short work we propose that Bohr's atomic orbit quantization hypothesis is not necessarily needed as an "ad hoc" assumption, but that it can be arrived at using only Planck's assumption of energy quantization. First, let us follow the usual pathway in which Bohr's quantization is introduced. Using Newton's second law for the electron moving in a circular orbit around the nucleus, and thus subject to Coulomb's law, we obtain the balance between the Coulomb attraction and the centripetal force. This allows us to calculate the kinetic energy of the electron in such an orbit. The potential energy for the proton-electron system, on the other hand, is determined by the radius r of the electronic orbit, and the total energy for the system follows. This result would suggest that, since the radius can have any value, the same should happen with the angular momentum L. L = pr sin θ = pr, where θ = 90° (10); that is, the angular momentum depends on the radius. The linear momentum of the electron is given by p = mv. Therefore the problem of quantizing the angular momentum L reduces to the quantizing of the radius r, which depends on the total energy (9). Just here Bohr introduced an additional hypothesis, namely that the angular momentum of the electron is quantized, i.e., L = nℏ, where ℏ = h/2π. In this manner he was able to quantize the other physical quantities such as the total energy. This is the usual pathway that textbooks normally follow in their sequence of calculations. In this work we show another way, in which we do not need Bohr's additional assumption of angular momentum quantization, but stress the importance of Planck's original energy quantization, from which angular momentum quantization follows as a consequence.
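As a minimal sketch of the relations referred to in this paragraph (written here in SI units with k ≡ 1/(4πε₀), a convention assumed only for illustration), the classical treatment of the circular orbit gives

\[
\frac{kZe^{2}}{r^{2}} = \frac{m_{e}v^{2}}{r}, \qquad
E_{c} = \tfrac{1}{2}m_{e}v^{2} = \frac{kZe^{2}}{2r}, \qquad
U(r) = -\frac{kZe^{2}}{r}, \qquad
E = E_{c} + U = -\frac{kZe^{2}}{2r},
\]

\[
L = pr\sin\theta = pr \quad (\theta = 90^{\circ}), \qquad
p = m_{e}v, \qquad
\text{Bohr's postulate: } |L| = n\hbar, \; n = 1, 2, 3, \dots
\]

so that quantizing L is equivalent to quantizing the orbital radius r, and hence the total energy, as stated above.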
So, from Planck's hypothesis to quantize the energy (7), we have that the kinetic energy of the orbiting electron is quantized accordingly, so that taking the derivative of E_c with respect to the frequency f gives the corresponding quantum condition. Knowing that the scalar orbital velocity is v = 2πrf, where f is the frequency and r is the radius of the electronic orbit, we can write the ratio of kinetic energy variation with respect to the frequency. From (16), (10), and (11), we rewrite (17); using now (15) in (18), we obtain the quantization of the angular momentum. We see that the ratio of kinetic energy variation with respect to the frequency, plus Planck's hypothesis of energy quantization, leads to Bohr's assumption.

V. CONCLUSIONS
In this work we have shown that Planck's fundamental assumption of energy quantization is more fundamental than Bohr's assumption of angular momentum quantization. In fact, we have shown that Bohr's rule for angular momentum quantization can be dispensed with altogether if we consider Planck's energy quantization, and that the former can be derived from the latter. The identity (15) can be further clarified by the Wilson-Sommerfeld rules for quantization [8]. They proposed a set of rules to quantize any physical system whose coordinates are periodic functions of time. Their rules are the following: for all physical systems whose coordinates are periodic functions of time there exists a quantum condition for each coordinate, expressed as ∮ p_i dq_i = n_i h, where i labels any one of the coordinates, p_i is the conjugate momentum associated with that coordinate, and n_i is a quantum number attributed to that coordinate. In our case, we have,
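Collecting the steps above into a minimal sketch (assuming, as the argument indicates, that the kinetic energy is quantized à la Planck as E_c = nhf and that the derivative is taken at fixed orbital radius r):

\[
E_{c} = \tfrac{1}{2}m_{e}v^{2} = 2\pi^{2}m_{e}r^{2}f^{2} = nhf
\;\Longrightarrow\;
\frac{dE_{c}}{df} = nh,
\]

\[
\frac{dE_{c}}{df} = 4\pi^{2}m_{e}r^{2}f = 2\pi r\,(m_{e}v) = 2\pi L
\;\Longrightarrow\;
L = \frac{nh}{2\pi} = n\hbar,
\]

using v = 2πrf, p = m_e v, and L = pr. This reproduces Bohr's quantization rule and has the same form as the Wilson-Sommerfeld condition ∮ p dq = nh for a periodic coordinate.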
Spectral analysis of an abstract pair interaction model We consider an abstract pair-interaction model in quantum field theory with a coupling constant $\lambda\in {\mathbb R}$ and analyze the Hamiltonian $H(\lambda)$ of the model. In the massive case, there exist constants $\lambda_{\rm c}<0$ and $\lambda_{{\rm c},0}<\lambda_{\rm c}$ such that, for each $\lambda \in (\lambda_{{\rm c},0},\lambda_{\rm c})\cup (\lambda_{\rm c},\infty)$, $H(\lambda)$ is diagonalized by a proper Bogoliubov transformation, so that the spectrum of $H(\lambda)$ is explicitly identified, where the spectrum of $H(\lambda)$ for $\lambda>\lambda_{\rm c}$ is different from that for $\lambda\in (\lambda_{{\rm c},0}, \lambda_{\rm c})$. As for the case $\lambda<\lambda_{{\rm c},0}$, we show that $H(\lambda)$ is unbounded from above and below. In the massless case, $\lambda_{\rm c}$ coincides with $\lambda_{{\rm c},0}$. Introduction In this paper, we consider an abstract pair-interaction model in quantum field theory. The Hamiltonian of the model is of the form acting in the boson Fock space F b (H ) over a Hilbert space H (see Subsection 2.1), where T is a self-adjoint operator on H , dΓ b (T ) is the second quantization operator of T , Φ s (g) is the Segal field operator with test vector g in H (see Subsection 2.1) and λ ∈ R is a coupling constant. A model of this type is called a φ 2 -model. There have been many studies on massive or massless φ 2 -models in concrete forms or abstract forms (see, e.g., [4,7,8,10,11,15]). In [10] and [15], the (essential) self-adjointness of the Hamiltonian of a φ 2 -model is proved in the case where λ > 0 or |λ| is sufficiently small. In [10], the existence of a ground state of a φ 2 -model also is shown in the case where the quantum field under consideration is massive and λ > 0. It is well known that Hamiltonians with linear and/or quadratic interactions in quantum fields may be analyzed by the method of Bogoliubov transformations (see, e.g., [1,2,3,4,6,7,9,11]). A typical Bogoliubov transformation is constructed from bounded linear operators U, V and a conjugation operator J on H satisfying the following equations: where A J := JAJ and A * is the adjoint of a densely defined linear operator A. It is well known that there is a unitary operator U on F b (H ) which implements the Bogoliubov transformation in question if and only if V is Hilbert-Schmidt [6,12,13,14]. Moreover, it is shown that, under the condition that V is Hilbert-Schmidt and suitable additional conditions, the Hamiltonian under consideration is unitarily equivalent via U to a second quantization operator up to a constant addition. For example, the Pauli-Fierz model with dipole approximation, which can be regarded as a kind of φ 2 -model, is analyzed by this method in [9]. Recently, a general quadratic form Hamiltonian with a coupling constant λ ∈ R has been analyzed in [11] and it is shown that, in the case of a massive quantum field, under suitable conditions, the Hamiltonian is diagonalized by a Bogoliubov transformation. In [7], the sufficient condition formulated in [11] to obtain the result just mentioned has been extended. The spectrum of the standard pair-interaction model in physics, which is a concrete realization of the abstract pair-interaction model, is formally known [8] in the case where λ > λ c,0 and λ = λ c for some constants λ c and λ c,0 < λ c . The paper [4] gives a rigorous proof for that in the framework of the boson Fock space theory over H = L 2 (R d ) for any d ∈ N and λ > λ c . 
One of the motivations for the present work is to extend the theory developed in [4] with H = L 2 (R d ) to the theory with H being an abstract Hilbert space including the case where λ < λ c . It is known [8] that spectral properties of a pair-interaction model may depend on the range of λ with λ c being a border point. Hence it is important to make this aspect clear mathematically. Therefore we analyze our model also for the region λ < λ c . We show that, in the massive case with λ ∈ (λ c,0 , λ c ) also, the method of Bogoliubov transformations can be applied to prove that the Hamiltonian H(λ) is unitarily equivalent to a second quantization operator up to a constant addition. Then we see that the spectrum of H(λ) for λ ∈ (λ c,0 , λ c ) is different from that for λ > λ c . In the massless case, λ c,0 coincides with λ 0 . The main results of the present paper include the following (1)-(3) (see Theorem 2.8 for more details): (1) Identification of the spectra of H(λ) for λ > λ c . (2) Identification of the spectra of H(λ) for λ c,0 < λ < λ c (the massive case; in the massless case, λ c,0 = λ c ). In this case, bound states different from the ground state appear. (3) Unboundedness from above and below of H(λ) for λ < λ c,0 . The outline of this paper is as follows. In Section 2, we define our model and recall a fundamental fact in a general theory of Bogoliubov transformations. We prove the (essential) self-adjointness of H(λ) (Theorem 2.3). Then we state the main theorem of this paper (Theorem 2.10). In Section 3, we construct operators U and V which are used to define the Bogoliubov transformation we need. In Section 4, we show that U and V satisfy (1.1) and V is Hilbert-Schmidt. In Section 5. we prove Theorem 2.8 (1) and calculate the ground state energy of H(λ) in the case λ > λ c . In Section 6, we prove Theorem 2.8 (2). In Section 7, we prove Theorem 2.8 (3). In Section 8, we consider a slightly generalized Hamiltonian of the form H(η, λ) := H(λ) + ηΦ S (f ) with η ∈ R and f ∈ H . Applying the methods and results in the preceding sections, we can analyze H(η, λ) to identify the spectra of it. In Appendix, we state some basic facts in the theory of boson Fock space. The abstract Boson Fock Space Let H be a Hilbert space over the complex field C with the inner product ·, · H . The inner product is linear in the second variable and anti-linear in the first one. The symbol · H denotes the norm associated with it. We omit H in ·, · H and · H , respectively if there is no danger of confusion. For each non-negative integer n = 0, 1, 2, . . . , ⊗ n s H denotes the n-fold symmetric tensor product Hilbert space of H with convention ⊗ 0 s H := C. Then . For a linear operator T on a Hilbert space, we denote its domain by D(T ). For a densely defined closable operator T on H , let T be the densely defined closed operator on ⊗ n s H defined by where I denotes the identity operator on H , A denotes the closure of a closable operator A and A M denotes the restriction of a linear operator A on a subspace M. The operator is called the second quantization operator of T . If T is self-adjoint or non-negative, then so is dΓ b (T ). For each f ∈ H , there exists a unique densely defined closed operator A(f ) on F b (H ) such that its adjoint A(f ) * is given as follows: where S n is the symmetrization operator on the n-fold tensor product ⊗ n H of H . The operator A(f ) (resp. A(f ) * ) is called the annihilation (resp. creation) operator with test vector f . 
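For reference, in the standard convention for the boson Fock space (assumed here, with the inner product antilinear in the first variable as above), these operators satisfy the canonical commutation relations on the finite-particle subspace F_{b,0}(H), and the Segal field operator referred to below is the symmetric combination:

\[
[A(f), A(g)^{*}] = \langle f, g\rangle_{\mathscr H}, \qquad
[A(f), A(g)] = 0 = [A(f)^{*}, A(g)^{*}], \qquad f, g \in \mathscr H,
\]

\[
\Phi_{\mathrm s}(f) := \frac{1}{\sqrt{2}}\bigl(A(f) + A(f)^{*}\bigr).
\]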
We have for all f ∈ H and A(f ) and A(f ) * leave F b,0 (H ) invariant. Moreover, they satisfy the following commutation relations: is called the Segal-field operator with test vector f . We write its closure by the same symbol. Bogoliubov Transformation In this subsection, we define a Bogoliubov transformation and recall an important theorem about it. For a conjugation J on H (i.e., J is an anti-linear operator on H satisfying Jf = f for all f ∈ H and J 2 = I) and a linear operator A on H , we define Then the correspondence (A(·), A(·) * ) → (B(·), B(·) * ) is called a Bogoliubov transformation. By hold, then the Bogoliubov transformation preserves CCR, i.e., it holds that The following theorem is well-known [13,14]: if and only if V is Hilbert-Schmidt. (2) Let λ ≤ λ c,0 and f ∈ D(T 1/2 ). Then H(η, λ) is essentially self-adjoint on any core of dΓ b (T ) for all η ∈ R. In particular, if η = 0 and λ = λ c,0 , then H(λ c,0 ) = H(0, λ c,0 ) is bounded from below. Definition 2.5. Let T be a self-adjoint operator on H and {E(B) | B ∈ B 1 } be the spectral measure associated with T on the Borel field B 1 on R. The operator T is called purely absolutely continuous if, for each f ∈ H , the measure E(·)f 2 on B 1 is absolutely continuous with respect to the one-dimensional Lebesgue measure. Definition 2.6. For a purely absolutely continuous self-adjoint operator T and vectors f, g ∈ H , ψ g,f denotes the Radon-Nikodym derivative of the finite complex Borel measure g, E(·)f on B 1 . In particular, we set ψ g := ψ g,g . Assumptions To prove our main theorem stated later (Theorem 2.10), we need some assumptions. For a closed operator A, σ(A) denotes the spectrum of A. If A is self-adjont, then σ ac (A) (resp. σ p (A), σ sc (A)) denotes the absolutely continuous (resp. point, singular continuous) spectrum of A. For a self-adjoint operator A bounded from below, is called the lowest energy of A. In particular, it is called the ground state energy of A if E 0 (A) ∈ σ p (A). In this case, any for responding eigenvector is called a ground state of A. (1) The operator T is a non-negative, purely absolutely continuous selfadjoint operator, Remark 2.8. The operator T is injective since it is a purely absolutely continuous selfadjoint operator. Since T has no eigenvector, the inverse ofT exists. Assumption 2.7 (2) implies that T J = T . In general, for a self-adjoint operator A and a conjugation J, we can choose a vector f ∈ D(A) satisfying Jf = f if A J = A. Thus the vector g in Assumption 2.7 (2) exists. By Assumption 2.7 (3), one can easily show that sup x∈σ(T ) ψ g (x) < ∞ and, for each f ∈ H , the functions ψ g,f , ψ T ±1/2 g,f are in L 2 (R) and the maps : f → ψ g,f , ψ T ±1/2 g,f are bounded. Actually, for any h ∈ H and B ∈ B 1 , the following inequality holds for almost all µ ∈ R with respect to the Lebesgue measure. Hence, by Assumption 2.7 (3), we have the boundedness of the mappings. Moreover, we see that for any Proof. These are proved by using the spectral theorem. The Main Theorem In this subsection, we state the main theorem of the present paper. Let λ c be a constant defined by Then it is easy to see that λ c,0 ≤ λ c , and λ c,0 = λ c if and only if E 0 = 0. (1) Let T and g satisfy Assumption 2.7. If λ > λ c , then there are a unitary operator U on In particular, U −1 Ω 0 is the unique ground state of H(λ) up to constant multiples, and (2) Let T and g satisfy Assumption 2.7 and E 0 > 0. 
If λ c,0 < λ < λ c , then there exist a unitary operator V on F b (H ), an injective non-negative self-adjoint operator ξ on H and a constant E b ≥ 0 such that ξ has a ground state and In particular, V −1 Ω 0 is the unique ground state of H(λ) up to constant multiples, and where β > 0 is the discrete ground state energy of ξ. Example 2.11. A concrete realization of the abstract model is given as follows (see [8,Chapter 12]): where ω is a multiplication operator associated with the function ω(k) Hence, with J being the complex conjugation, the following conditions (2)'-(4)' imply that the present model satisfies Assumption 2.7: For example, one can easily check that the function satisfies the above conditions (2)'-(4)'. Definitions and properties of some functions and operators In this section we introduce some functions and operators. We assume that H is separable and Assumption 2.7 from this section to Section 6. Functions D and D Then D is well-defined and analytic in C\[0, ∞). Moreover, the following hold: (1) For all λ > λ c , D(z) has no zeros in C\[0, ∞). (2) Let λ < λ c . We can see that Lemma 3.2. The following hold: be the conjugate poisson kernel and the poisson kernel respectively and f * h denote the convolution of functions f and h. Let where Hf is called the Hilbert transform of f . Then for all x ∈ R, hold uniformly in x. Proof. By Assumption 2.7 (2), (3) and (4), the assertion (1) holds. Next we consider the assertion (2). By (1), in particular, φ g is bounded and uniformly continuous. Thus it is easy to see that A (2) ε * φ g converges uniformly to φ g . Moreover, by (1), Holder's inequality, the mean value theorem and a similar estimate to the proof of [16,Theorem 92.], we can show that (A tends to 0 uniformly in x as ε ↓ 0. Hence the assertion (2) holds. Detailed studies of the Hilbert transform are given in [16]. Proof. For any s ≥ 0 and ε > 0, we have by change of variable Thus, by Lemma 3.2, D ± converge uniformly in s ≥ 0 and (3.1) holds. The continuity of D ± is due to the uniform convergence. Operators R ± Lemma 3.7. One can define bounded operators R ± on H as follows: where R z (A) is the resolvent of a linear operator A at z ∈ ρ(A) (the resolvent set of a linear operator A). Proof. For a fixed ε > 0 and any f ∈ H , by Lemma 3.5 and a property of a resolvent. Thus we can define linear operators R where we have used the functional calculus. By change of variables in the Lebesgue-Stieltjes integration, functional calculus and Fubini's theorem, we have 2)A ± f as ε ↓ 0 by a property of Hilbert transform and the continuity of the inner product with L 2 (R), where the linear operators are well-defined (see Remark 2.8 and Lemma 3.5). Moreover, by change of variables, the isometry of Hilbert transform and Remark 2.8, we can show that the inequalities hold for all f ∈ H with constant c g := sup σ(T ) ψ g . Hence R ± are bounded. It is easy to see that R * ± := (R ± ) * are given as follows: for f ∈ H , For a densely defined linear operator A on a Hilbert space, we denote by A A or A * . Proof. For any f, h ∈ H , we have By change variable, we have Thus we see by Assumption 2.7 (3) and functional calculus that Ran(R ± ) ⊂ D(T −1 ). The equation operational calculus for (3.5) and Assumption 2.7 (3) imply that Ran(R ± ) ⊂ D(T ). For any f ∈ D(T ) and µ ∈ R, Hence R ± f ∈ D(T 2 ) and the following equation holds for any h ∈ H , where c := R ψ ± T 1/2 g,f (x)dx. In quite the same manner as in the case of R ± , we can prove the statement for R * ± . Lemma 3.10. 
The operator equation is a bounded operator. Thus, by change of variable, we have for any By a property of the Poisson kernel, the function A . Hence the continuity of inner product with L 2 (R) implies that Since f and h are arbitrary, one obtains the conclusion. It is easy to see that (A − ) * = −A + . Lemma 3.11. For any Borel measurable function F on R, Proof. It is easy to see that for any f ∈ D(F (T )), ψ g,F (T )f = F ψ g,f ∈ L 2 (R). This fact and Lemma 3.5 imply that ψ g,f (T )g ∈ D(F (T )) and F (T )ψ g,f (T )g = ψ g,F (T )f (T )g. Hence Lemma 3.12. The following operator equations hold: Proof. By applying Lemma 3.11 to the case F = χ B , one can easily see that A ± E(B) = E(B)A ± holds for any B ∈ B 1 . For any f, h ∈ H , we have Then, since γ and E(B) commute on H for any B ∈ B 1 , (3.2) gives Thus, by a limit argument, we obtain A − R * ± = (γ − 1)R * ± . Moreover, (3.2) and the equation Operators Ω ± In this subsection we consider the bounded operators Let x 0 < 0 be the zero of D(z) given in Lemma 3.1 (2) and Then it is easy to see that and Hence P is a projection operator. 9) where θ is the Heaviside function: Remark 3.14. Lemma 3.13 implies that Ω ± are unitary operators if λ > λ c and partial isometries with their final subspace Ran(I − P ) if λ < λ c . Proof. By the definition of the function D, we have By this formula and a resolvent identity, we obtain ± on H are given as follows: h, E (ε) ± are bounded for all 0 < ε < ε 0 . Thus, by the Lebesgue dominated convergence theorem, we have s-lim ε↓0 E (ε) ± = I. Hence we obtain that R * ± R ± = −(R ± + R * ± ). On the other hand, by Lemma 3.5, (3.3) and the Lebesgue dominated convergence theorem, there are constantsR > 0 and c 0 > 0 such that |D(z)| ≥ c 0 for all |z| ≥R. Thus we have where O(·) stands for the well known Landau symbol. Therefore we have for each µ, µ ∈ [E 0 , ∞). Thus, by (3.10), we have As in the proof in (1), we obtain s-lim ε↓0 E (ε) ± −1 = I. Therefore we obtain Thus we obtain the desired result. (ii) The case λ < λ c . In this case, G ε,± µ,µ (z)/D(z) has a simple pole at z = x 0 in addition to . Thus we have . This implies that Thus we obtain the desired result. Operators U and V In this subsection, we investigate the operators U and V defined as follows: which are used to construct a Bogoliubov transformation. Then, by Lemma 3.8, one can easily see that Lemma 3.15. The operators U and V are bounded. In the same way as in the proof of Lemma 3.15, we see that T −1/2 R * ± T 1/2 and T 1/2 R * ± T −1/2 are bounded on each domain D(T 1/2 ) and D(T −1/2 ). In what follows, we write the bounded extension of U and V by the same symbol respectively. Then Proof. By applying Lemma 3.8 and using the equations one can easily see that the assertion for U is true. Similarly one can prove the statement for V . Proof. By Lemma 3.8, the domain of each side of (3.13) includes D(F (T )). By Lemmas 3.11 and 3.12, we have Proof of Theorem 4.1. (2) By Lemma 3.16 and fundamental properties of the annihilation operators and creation operators, we can see that, for any f ∈ D( for all n, k ∈ N and f ∈ D(T −1/2 ) ∩ D(T ). By the fundamental inequalities (9.1) and (9.2) and the dΓ b (T )-boundedness of Φ s (g) 2 , we can see that Hence we obtain (4.2). Relations of U and V Hence, by direct calculations and (3.9), one obtains U U * −V J V * J = I −θ(λ c −λ)Q + . Similarly one can prove the last equation in (4.6) (note that P J = P ). Hilbert-Schmidtness of V In this subsection, we show that V is Hilbert-Schmidt. 
Then we can use Theorem 2.2 in the case of λ > λ c . Proof. On D(T −1/2 ) ∩ D(T 1/2 ), V * V is calculated as follows: where we have used the formula R * + R + = −(R + + R * + ) in the proof of Lemma 3.13 and Lemma 3.8. Thus, for any f ∈ D(T −1/2 ) ∩ D(T 1/2 ) and ε > 0, we have Then, for any B ∈ B 1 , we can see Similarly, we obtain Thus, by a formula of change of variable in Lebesgue-Stieltjes integration and Fubini's theorem, we have . For any ε > 0 and µ, µ , µ ∈ [E 0 , ∞), we have by Lemma 3.5 and the arithmetic-geometric mean inequality On the other side, for any α, β ∈ C, we see Thus, by the Lebesgue dominated convergence theorem, we have . In particular, for each α, β = ±1, ±i, the polarization identity and Fubini's theorem give where we have used the arithmetic-geometric mean and Lemma 3.5. Hence V is Hilbert-Schmidt. 5 Analysis in the case λ > λ c In this section we prove Theorem 2.10 (1). Before starting the proof, we need to know a property of the Hamiltonian H(λ). Proof of Theorem 2.10 (1) In this subsection, we assume that λ > λ c . By this equation and a limiting argument, we obtain Ue itH(λ) U −1 = e it(dΓ b (T )+Eg) . By the unitary covariance of functional calculus, we have Hence (2.7) holds. The equation (2.7) and the well-known spectral properties of dΓ b (T ) imply that E g is the ground state energy of H(λ) and Ω is the unique ground state of H(λ). 6 Analysis in the case λ c,0 < λ < λ c In Section 5, we proved Theorem 2.10 (1). But the proof is valid only for the case λ > λ c . Therefore it is necessary to find another pair of operators U and V if one wants to use a Bogoliubov transformation for the spectral analysis of H(λ) in the case λ ≤ λ c . In this section we assume that T and g satisfy Assumption 2.7, E 0 > 0 and λ c,0 < λ < λ c . Under these conditions, we can define operators ξ, X, Y and T ± as follows: ξ := Ω + T Ω * + + βP, X := U Ω * + + T + P, Y := V Ω * + + T − P, Remark 6.1. The definition of x 0 implies the following: Thus, in the case λ c,0 < λ < λ c , we see that the inequality 0 < β < E 0 holds. Let Then C(f ) is a densely defined closable operator. We denotes its closure by the same symbol. Properties of X, Y and ξ In this subsection we study operators X, Y and ξ. First, we consider ξ. Let bẽ T := Ω + T Ω * + . Lemma 6.2. The operatorT is a self-adjoint operator with D(T ) = D(T ). Lemma 6.3. The spectra ofT are as follows: Proof. We define a family of projection operators {E P (B) | B ∈ B 1 } on H as follows: E P (B) = 0 if 0 / ∈ B and E P (B) = P if 0 ∈ B for each B ∈ B 1 . It is easy to see that {ET (B) := Ω + E(B)Ω * + + E P (B)| B ∈ B 1 } is a spectral measure. Using functional calculus, we see that ET (·) is the spectral measure ofT . It is easy to see that the absolutely continuous part (resp. singular part) ofT isT Ran(I −P ) (resp.T Ran(P )) since T is absolutely continuous and Ω ± are partial isometries. Thus we see σ(T ) = {0} ∪ σ ac (T ), σ p (T ) = {0}, σ sc (T ) = ∅. Lemma 6.4. The operator ξ is an injective, non-negative self-adjoint operator with D(ξ) = D(T ) and we have the following equations: In particular, β is the ground state energy of ξ, which is an isolated eigenvalue of ξ, and U b is the unique ground state of ξ. Proof. By Lemma 6.3 and a spectral property of direct sum of self-adjoint operators, we have the equation (6.1). Thus β is an isolated ground state energy by Remark 6.1. It is easy to see that U b is a ground state of ξ. Assume that f ∈ Ker(ξ − β) satisfies (I − P )f = 0. Proof. 
The assertion follows from Lemma 3.8, Lemma 3.16, Lemma 6.5 and the definition of X and Y . (2) For any f ∈ D(T −1/2 ) ∩ D(T ) and ψ, φ ∈ D(dΓ b (T )), Theorem 6.10 follows, in the same manner as in the proof of Theorem 4.1, from Lemma 3.16, Lemma 6.5 and the next lemma: Lemma 6.11. For any f ∈ D(T ) the following equations hold: ) Remark 6.12. By Lemma 3.16 and the definition of ξ, the both sides of (6.5) and (6.6) have meaning. Proof. These are proved in the same way as in the proof of Theorem 5.1 and Theorem 6.10. Let Ω := V −1 Ω 0 where V is the unitary operator in Lemma 6.9. Then: (1) There is an eigenvalueẼ g of H(λ) and Ω is an eigenvector of H(λ) with eigenvaluẽ E g . (3) The constantẼ g is given as follows: Proof. Parts (1) and (2) can be proved in the same way as in the proof of Theorem 2.10 (1). We choose a CONS {e n } ∞ n=0 ⊂ D(T ) satisfying e 0 = U b . Then it is easy to see that {Ω * + e n } ∞ n=1 is a CONS for H by Lemma 3.13. Hence we have Thus we obtain (6.8). Hence the spectral properties of H(λ) as stated in Theorem 2.10 (2) follow. In this section, we show that H(λ) is unbounded from above and below. Generalization of the φ 2 -model In this section we consider H(η, λ) defined in Subsection 2.3. We can prove a slight generalization of Theorem 2.10. Appendix In this section, we recall some known facts in Fock space theory. Let T be a non-negative, injective self-adjoint operator on H .
Nanometer-Thick Crystalline Carbon Films Having a Spinel Structure Grown on ZnO Substrates: Implications for New Ceramic–Carbon Composition I developed a bottom-up process of crystal growth using a field emission (FE) electron beam without transfer of heat energy. In this study, highly crystalline single-walled carbon nanotubes were used as the FE electron source. Acetylene was irradiated with an electron beam of high-resolution energy emitted from the electron source. Then, zinc oxide (ZnO) was irradiated with the carbon-based ions dissociated from the acetylene and electron beam, which formed a nonequilibrium excitation reaction field. As a result, a crystalline carbon thin film with a spinel-like structure different from the structures of graphite and diamond was grown on the ZnO surface. It is considered that the carbon film can be formed on substrates with a periodic crystal structure, not only ZnO. I confirmed that a carbon film with a periodic crystal structure independent of the crystal structure of the underlying substrate was grown, which bridged with the substrate. Thus, I have established a technique of crystal bridging between a ceramic and carbon for the first time to the best of our knowledge. ■ INTRODUCTION In the formation of materials, general chemical processes, including the bonding and dissociation of heteroelements, involve heating or heat-releasing treatment regardless of the presence of catalysts. We have studied a bottom-up process for fabricating materials using an electron beam to form a nonequilibrium excitation reaction field as a means of transferring reaction energy without the transfer of heat energy. The process of bonding metals and other hetero-nanomaterials using a reaction field has been reported. 1−3 We aim to establish a reaction process for high-ordered structures in a bottom-up architecture that is completely independent of the heating process in order to fabricate novel composites of ceramics (metal oxides) and carbon, and we have been studying the synthesis of highly functional composite materials. A nonequilibrium excitation reaction field was found to be a reaction field in which ceramic nanoparticles can be induced and manipulated by a focused electron beam and ions whose acceleration energy was controlled at the keV level. 4,5 A beam and ions have conventionally been used in research on lattice defects in materials and surface modification. In the irradiation space, studies of a bottom-up specific reaction for the development of various nano-and microstructures have been carried out. 6,7 In this study, we developed a technique for controlling the atomic arrangement by crystal bridging between a ceramic (ZnO) and carbon to fabricate novel composite materials with a low-speed field emission (FE) electron beam 8−11 as a nonequilibrium excitation reaction field. The technique used to synthesize various composite materials significantly affects their properties. 12−24 Materials based on the bonding between metals and ceramics include metal−ceramic composites, 12−14 heat-insulating ceramic coatings, 15−17 electron device materials, 18−20 and catalysts. 21−24 For these composite materials, the interface between heteroelements with greatly different chemical bonding states significantly affects their physical and chemical properties. 25,26 It is necessary to analyze the atomic structure and chemical bonding state of heteroelements at the interface and clarify their correlation with their properties. 
In this study, we achieved crystal bridging between ZnO and carbon in a nonequilibrium excitation reaction field and succeeded in forming a thin layer in which the carbon film with a structure different from those of graphite, 27,28 diamond, 29,30 and diamond-like carbon (DLC) 31,32 is formed near the interface between the ZnO and carbon. In this paper, we report the crystal structure, chemical bonding, and the electron state of the ZnO−C interface from the viewpoint of crystallography. ■ RESULTS The hc-SWCNTs used as the electron source can achieve stable and long time FE even in a low vacuum of 2−3 Pa for both active and inactive gases. 11 Figure 1a shows the relationship between the partial pressure of acetylene (C 2 H 2 ) before and after dissociation (fixed at 1.6 Pa before dissociation) and the timing of FE, which was obtained using a quadrupole mass spectrometer (QMS; Canon Anelva Corporation). The FE current shown in Figure 1a was determined from the FE current density (dose rate) and the decrease in the partial pressure of acetylene shown in Figure 1b. The result in Figure 1 revealed that acetylene is dissociated into carbon-or hydrocarbon-based ions at a voltage of 20−30 V and a current density (dose rate) of ≥70 nA/cm 2 . Exploiting the dissociation of acetylene by the electron source and electron beam, we performed an experiment in which ZnO nanoparticles were irradiated with the electron beam and carbon-and hydrocarbon-based ions. The ZnO nanoparticles used in this study were synthesized by a wet process. 33 They were exposed to a nonequilibrium excitation reaction field under the exposure conditions shown in Table 1. Figure 2 shows ZnO composites obtained at an acceleration voltage (V a in Figure 9) of 500 V, detected as micrometer diameter agglomerates after exposure. The nanoparticles before exposure had almost spherical shapes and a diameter of 10−100 nm and are shown in the inset of Figure 2, which is a field emission scanning electron microscopy (FE-SEM, Hitachi High-Tech Corporation) image taken at an acceleration voltage of 20 kV. The composite obtained after exposure was embedded in resin and its surface was removed using a focused ion beam (FIB, FEI Company). As shown by the cross-sectional view in Figure 3a, small voids were observed inside the agglomerates. Macroscopically, elliptic ZnO particles, which were synthesized in this study exhibited selective crystal bridging along the c axis upon FE electron beam radiation as a nonequilibrium excitation reaction field, 10,34 are concentrated and agglomerated. Figure 3b shows an enlarged view of the area within the white circle in Figure 3a. The area in the red circle in Figure 3b was further enlarged to analyze the composition distribution at the interface between the ZnO and the carbon film by point measurement using energy-dispersive X-ray spectroscopy (EDX, Hitachi High-Tech Corporation). Figure 3c shows a scanning transmission electron microscopy (STEM, Hitachi High-Tech Corporation) image of the same area. To analyze elements, spot tests were performed for zinc, oxygen, and carbon along the white arrow in the figure. Figure 3d shows the composition distribution of zinc, oxygen, and carbon obtained by EDX. It confirms that a film mainly composed of carbon was formed on the ZnO surface between the elliptic ZnO nanoparticles. X-ray diffraction (XRD, Rigaku Corporation) crystallography was carried out to examine the exposed samples. 
The XRD pattern of a carbon thin film formed on a (100)-oriented Si substrate by the irradiation of carbon ions dissociated from acetylene by the electron beam was also obtained for reference. For the ZnO−C composite film, the XRD patterns are shown in Figure 4 for acceleration voltages (V a in Figure 9a) of 100 and 500 V. The results indicate that the carbon films formed in this study have a structure different from that of graphite. The crystal structure (diffraction peak) of the carbon film formed on the Si substrate, shown as a reference, also matches that of the ZnO−C composite film. This suggests that the dissociation of acetylene by the electron beam and the existence of carbon-based ions generated after dissociation contribute to the formation of the carbon film with a structure different from that of graphite. In particular, the significant difference in the crystal structure owing to the acceleration energy is noteworthy. The crystal structure of the carbon film was clearer for the reaction field with the higher acceleration voltage. The crystal structure of the carbon film formed on the Si substrate is different from that identified from the diffraction peaks of the reported silicon carbide (SiC). 35 It is considered that the carbon film obtained in this study has a unique crystal structure that does not depend on the crystal structure of the underlying substrate. To examine the crystal structure of the interface between the ZnO and the carbon film in more detail, a high-resolution transmission electron microscopy (HRTEM, JEOL, Ltd.) image was obtained at an acceleration voltage of 300 kV (Figure 5a). The measurement sample in Figure 5 was cut again to the piece shown in Figure 3 by a FIB for HRTEM. Figure 5a shows an HRTEM image of the interface between the ZnO and the carbon film. An ∼20 nm-thick carbon film was formed on the ZnO surface, confirming a clear interface between the ZnO and the carbon film. Figure 5b shows the diffraction pattern of the ZnO. This pattern represents the arrangement of the ZnO in the direction of the c axis. As shown in Figure 5a,b, the ZnO layers along the c axis and the carbon film graphite layers are parallel, forming a clear interface. Moreover, another layer formed in the space of the carbon film corresponding to the gap between the ZnO layers, suggesting that atomic or molecular layers different from graphite layers were formed. Figure 5c shows the fast Fourier transform (FFT) pattern of the diffraction image at a position corresponding to the carbon film. The FFT pattern represents a diffraction image different from that in the ZnO region, meaning that the crystal structure of the carbon film is fundamentally different from that of the ZnO. Assuming that Figure 5b shows the crystal structure of the ZnO itself, the plane orientations of the diffraction pattern are shown in Figure 5c, indicating that a spinel-like crystal structure was formed. At a position on the carbon film far from the ZnO surface, a polycarbon-like carbon film formed, in which multiple graphite layers were laminated. As mentioned above, we confirmed the existence of another layer on the carbon film formed near the ZnO surface and positioned in the interlayer gap; this layer exhibits a diffraction image corresponding to that of the ZnO. A continuous crystal structure different from that of graphite or diamond layers is considered to have been formed. Figure 6 shows absorption spectra of carbon obtained by Fourier transform infrared (FTIR) spectroscopy (JASCO Corporation). 
The results are for the ZnO−C composites formed by the irradiation of the electron beam and carbon-based ions at acceleration voltages of 100 and 500 V and for carbon films formed on Si and glass substrates for reference. For the ZnO−C composites and the carbon film on the Si substrate, carbon bonds corresponding to sp and sp 2 C and hydrocarbon-related bonds corresponding to sp and sp 3 CH were clearly observed. In particular, the higher the acceleration energy, the clearer the peaks are for sp and sp 2 C. These spectra are similar to those of DLCs. The crystal structure is a mixture of a graphite layer structure and a tetrahedral or diamond structure. 27,32 Furthermore, the carbon film on the Si substrate has an electron bond state similar to that of the carbon film on the ZnO, and no peaks corresponding to carbon bonds were observed for the carbon film on the glass substrate. These results suggest that the crystal structure of the carbon film is independent of the dangling bonds and periodic crystal structure of the underlying substrate. Figure 7 shows X-ray photoelectron spectroscopy (XPS) spectra of the ZnO−C composite particles shown in Figure 2. The spectra correspond to the carbon bond and the oxygen bond (Figure 7a,b). It was found (1) that no peaks exist that correspond to the bonds of metals with carbon (carbides), (2) that a tetrahedral carbon bond state corresponding to a diamond structure with an electron state related to the sp 3 orbital exists, and (3) that most peaks correspond to the bonds of metals (Zn in this study) with oxygen. The peaks corresponding to CO and OH are considered to be due to the components generated by the dissociation of acetylene and those synthesized with the ZnO as well as carbon oxide on the composite surface. ■ CONCLUSIONS The analysis of our results revealed that carbon film is formed on a ZnO surface when ZnO is irradiated with carbon-based ions and an FE electron beam accelerated to 500 V to provide a nonequilibrium excitation reaction field. The low-energy FE electron beam is generated using hc-SWCNTs and dissociates acetylene during the process. In the carbon film, graphene-like thin film layers and polygraphite layers are laminated. The distance between the graphene-like thin film layers is considered to depend on the periodic crystal structure of the substrate on which the ZnO crystals are grown. Figure 8 shows a schematic of crystal bridging between the ceramic material ZnO and the carbon. The distance between graphene layers for stable graphite is approximately 0.33−0.35 nm, while the c axis interlayer distance of ZnO is 0.53 nm as shown in Figure 5a. The interlayer distance of ZnO is about 1.7 times that of graphite, and it is impossible for each graphite interlayer to cross-link with ZnO in the same c axis direction. The carbon layer was found to be cross-linked with almost every c axis layer of ZnO, and the existence of a cross-linking network with the upper and lower carbon layers was necessary for the intermediate layer to be cross-linked. In addition, no heteroelements were intercalated into the graphite layer, and another carbon atomic layer was grown in the gap between the layers, that is, the thin film was composed of only pure carbon. Figure 8 shows an image of the crystal structure of the ZnO layer, which is based on the hybridization of sp 2 and sp 3 orbitals revealed from Figures 6 and 7 and the spinel-like crystal pattern indicated by the FFT diffraction pattern in Figure 5. 
In this study, we fabricated carbon thin films on ZnO and, for reference, on a Si substrate. Beyond the growth of such films on metals and semiconductors, we expect that a periodic crystal structure of pure carbon different from those of graphite, diamond, and DLC can be fabricated by controlling the acceleration energy of the electron beam and carbon-based ions so as to achieve crystal bridging between ZnO and the carbon film; in this way we obtained a new composite comprising ZnO and a carbon film. In the future, we expect to synthesize highly functional materials by fusing carbon not only with zinc oxide but also with other ceramics using a nonequilibrium reaction field, and we will attempt to synthesize ceramic−carbon composite materials with high electrical, physical, and chemical functionalities. ■ EXPERIMENTAL METHODS In this study, an FE electron beam was used to form a nonequilibrium excitation reaction field. Compared with thermal electron beams, an FE electron beam achieves a narrower energy distribution, in accordance with the principle of field emission, 36 and can efficiently emit a large number of electrons (a large irradiation current) at a low voltage. 37 To emit electrons more efficiently at a lower driving voltage, we used highly crystalline single-walled carbon nanotubes (hc-SWCNTs) as the electron source, because hc-SWCNTs are expected to achieve stable FE and are suitable for arbitrarily controlling the amount of FE from the pA to the A order. 37 We added hc-SWCNTs to a conductive thin film to form a planar FE electron source and constructed a system for forming a nonequilibrium excitation reaction field. Figure 9a shows a schematic of the system. The planar electron source with added hc-SWCNTs was used as the cathode, and an electrode covered with wet-synthesized ZnO nanoparticles was used as the opposing anode. A gate electrode to control the emission of electrons and dissociate the carbonaceous gas (acetylene in this study), together with an electrode to accelerate the electron beam and the carbon ions dissociated from acetylene, was arranged between the cathode and the anode. Figure 9b shows the FE characteristics of the planar electron source in low vacuum (1.0 Pa). FE was successfully controlled at the nA order at an applied voltage of <40 V. ■ ACKNOWLEDGMENTS This study was carried out through joint research with Honorary Professor Shun-ichiro Tanaka of Tohoku University. It was supported by JSPS KAKENHI grant number JP26220104.
Lognormal and Gamma Mixed Negative Binomial Regression In regression analysis of counts, a lack of simple and efficient algorithms for posterior computation has made Bayesian approaches appear unattractive and thus underdeveloped. We propose a lognormal and gamma mixed negative binomial (NB) regression model for counts, and present efficient closed-form Bayesian inference; unlike conventional Poisson models, the proposed approach has two free parameters to include two different kinds of random effects, and allows the incorporation of prior information, such as sparsity in the regression coefficients. By placing a gamma distribution prior on the NB dispersion parameter r, and connecting a lognormal distribution prior with the logit of the NB probability parameter p, efficient Gibbs sampling and variational Bayes inference are both developed. The closed-form updates are obtained by exploiting conditional conjugacy via both a compound Poisson representation and a Polya-Gamma distribution based data augmentation approach. The proposed Bayesian inference can be implemented routinely, while being easily generalizable to more complex settings involving multivariate dependence structures. The algorithms are illustrated using real examples. Introduction In numerous scientific studies, the response variable is a count y = 0, 1, 2, · · · , which we wish to explain with a set of covariates x = [1, x 1 , · · · , x P ] T as E[y|x] = g −1 (x T β), where β = [β 0 , · · · , β P ] T are the regression coefficients and g is the canonical link function in generalized linear models (GLMs) (McCullagh & Nelder, 1989;Long, 1997;Cameron & Trivedi, 1998;Agresti, 2002;Winkelmann, 2008). Regression models for counts are usually nonlinear and have to take into consideration the specific properties of counts, including discreteness and nonnegativity, and often characterized by overdispersion (variance greater than the mean). In addition, we may wish to impose a sparse prior in the regression coefficients for counts, which is demonstrated to be beneficial for regression analysis of both Gaussian and binary data (Tipping, 2001). Count data are commonly modeled with the Poisson distribution y ∼ Pois(λ), whose mean and variance are both equal to λ. Due to heterogeneity (difference between individuals) and contagion (dependence between the occurrence of events), the varance is often much larger than the mean, making the Poisson assumption restrictive. By placing a gamma distribution prior with shape r and scale p/(1 − p) on λ, a negative binomial (NB) distribution y ∼ NB(r, p) can be generated as f Y (y) = ∞ 0 Pois(y; λ)Gamma λ; r, p 1−p dλ = Γ(r+y) y!Γ(r) (1 − p) r p y , where Γ(·) denotes the gamma function, r is the nonnegative dispersion parameter and p is a probability parameter. Therefore, the NB distribution is also known as the gamma-Poisson distribution. It has a variance rp/(1 − p) 2 larger than the mean rp/(1 − p), and thus it is usually favored over the Poisson distribution for modeling overdispersed counts. The regression analysis of counts is commonly performed under the Poisson or NB likelihoods, whose parameters are usually estimated by finding the maximum of the nonlinear log likelihood (Long, 1997;Cameron & Trivedi, 1998;Agresti, 2002;Winkelmann, 2008). The maximum likelihood estimator (MLE), however, only provides a point estimate and does not allow the incorporation of prior information, such as sparsity in the regression coefficients. 
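As a quick numerical check of the gamma-Poisson construction above, the following sketch (assuming a standard NumPy environment; all variable names are illustrative rather than taken from the paper) draws λ from the stated gamma prior, draws y from Pois(λ), and compares the empirical moments with the NB mean rp/(1 − p) and variance rp/(1 − p)^2.

```python
import numpy as np

rng = np.random.default_rng(0)
r, p = 2.0, 0.6                                            # NB dispersion and probability parameters
lam = rng.gamma(shape=r, scale=p / (1 - p), size=200_000)  # gamma mixing distribution
y = rng.poisson(lam)                                       # Poisson draws given lambda -> NB(r, p) counts

print(y.mean(), r * p / (1 - p))                           # empirical vs. theoretical mean (= 3.0)
print(y.var(),  r * p / (1 - p) ** 2)                      # empirical vs. theoretical variance (= 7.5)
```

The simulated variance clearly exceeds the simulated mean, which is the overdispersion property that motivates the NB model over the Poisson model.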
In addition, the MLE of the NB dispersion parameter r often lacks robustness and may be severely biased or even fail to converge if the sample size is small, the mean is small or if r is large (Saha & Paul, 2005;Lloyd-Smith, 2007). Compared to the MLE, Bayesian approaches are able to model the uncertainty of estimation and to incorporate prior information. In regression analysis of counts, however, the lack of simple and efficient algorithms for posterior computation has seriously limited routine applications of Bayesian approaches, making Bayesian analysis of counts appear unattractive and thus underdeveloped. For instance, for the NB dispersion parameter r, the only available closed-form Bayesian solution relies on approximating the ratio of two gamma functions using a polynomial expansion (Bradlow et al., 2002); and for the regression coefficients β, Bayesian solutions usually involve computationally intensive Metropolis-Hastings algorithms, since the conjugate prior for β is not known under the Poisson and NB likelihoods (Chib et al., 1998;Chib & Winkelmann, 2001;Winkelmann, 2008). In this paper we propose a lognormal and gamma mixed NB regression model for counts, with default Bayesian analysis presented based on two novel data augmentation approaches. Specifically, we show that the gamma distribution is the conjugate prior to the NB dispersion parameter r, under the compound Poisson representation, with efficient Gibbs sampling and variational Bayes (VB) inference derived by exploiting conditional conjugacy. Further we show that a lognormal prior can be connected to the logit of the NB probability parameter p, with efficient Gibbs sampling and VB inference developed for the regression coefficients β and the lognormal variance parameter σ 2 , by generalizing a Polya-Gamma distribution based data augmentation approach in Polson & Scott (2011). The proposed Bayesian inference can be implemented routinely, while being easily generalizable to more complex settings involving multivariate dependence structures. We illustrate the algorithm with real examples on univariate count analysis and count regression, and demonstrate the advantages of the proposed Bayesian approaches over conventional count models. Regression Models for Counts The most basic regression model for counts is the Poisson regression model (Long, 1997;Cameron & Trivedi, 1998;Winkelmann, 2008), which can be expressed as x iP ] T is the covariate vector for sample i. The Newton-Raphson method can be used to iteratively find the MLE of β (Long, 1997). A serious constraint of the Poisson regression model is that it assumes equal-dispersion, i.e., E[y i |x i ] = Var[y i |x i ] = exp(x T i β). In practice, however, count data are often overdispersed, due to heterogeneity and contagion (Winkelmann, 2008). To model overdispersed counts, the Poisson regression model can be modified as where i is a nonnegative multiplicative random-effect term to model individual heterogeneity (Winkelmann, 2008). Using both the law of total expectation and the law of total variance, it can be shown that Thus Var[y i |x i ] ≥ E[y i |x i ] and we obtain a regression model for overdispersed counts. We show below that both the gamma and lognormal distributions can be used as the nonnegative prior on i . The Negative Binomial Regression Model The NB regression model (Long, 1997;Cameron & Trivedi, 1998;Winkelmann, 2008;Hilbe, 2007) is constructed by placing a gamma prior on i as where E[ i ] = 1 and Var[ i ] = r −1 . 
Marginalizing out i in (2), we have a NB distribution parameterized by mean µ i = exp(x T i β) and inverse dispersion parameter φ (the reciprocal of r) as f Y (y i ) = The MLEs of β and φ can be found numerically with the Newton-Raphson method (Lawless, 1987). The Lognormal-Poisson Regression Model A lognormal-Poisson regression model (Breslow, 1984;Long, 1997;Agresti, 2002;Winkelmann, 2008) can be constructed by placing a lognormal prior on i as where E[ i ] = e σ 2 /2 and Var[ i ] = e σ 2 e σ 2 − 1 . Using (3) and (4), we have Compared to the NB model, there is no analytical form for the distribution of y i if i is marginalized out and the MLE is less straightforward to calculate, making it less commonly used. However, Winkelmann (2008) suggests to reevaluate the lognormal-Poisson model, since it is appealing in theory and may fit the data better. The inverse Gaussian distribution prior can also be placed on i to construct a heavier-tailed alternative to the NB model (Dean et al., 1989), whose density functions are shown to be virtually identical to the lognormal-Poisson model (Winkelmann, 2008). The Lognormal and Gamma Mixed Negative Binomial Regression Model To explicitly model the uncertainty of estimation and incorporate prior information, Bayesian approaches appear attractive. Bayesian analysis of counts, however, is seriously limited by the lack of efficient inference, as the conjugate prior for the regression coefficients β is unknown under the Poisson and NB likelihoods (Winkelmann, 2008), and the conjugate prior for the NB dispersion parameter r is also unknown. To address these issues, we propose a lognormal and gamma mixed NB regression model for counts, termed here the LGNB model, where a lognormal prior ln N (0, σ 2 ) is placed on the multiplicative random effect term i and a gamma prior is placed on r. Denot- where ϕ = σ −2 and a 0 , b 0 , c 0 , d 0 , e 0 , f 0 and g 0 are gamma hyperparameters (they are set as 0.01 in experiments). Since y i ∼ NB (r, p i ) in (11) can be augmented into a gamma-Poisson structure as LGNB model can also be considered as a lognormalgamma-gamma-Poisson regression model. Denoting If we marginalize out h in (14), we obtain a beta prime distribution prior r ∼ β (a 0 , b 0 , 1, g 0 ). If we marginalize out α p in (13), we obtain a Student-t prior for β p , the sparsity-promoting prior used in Tipping (2001); Bishop & Tipping (2000) for regression analysis of both Gaussian and binary data. Note that β is connected to p i with a logit link, which is key to deriving efficient Bayesian inference. Model Properties and Model Comparison Using the laws of total expectation and total variance and the moments of the NB distribution, we have . (17) We define the quasi-dispersion κ as the coefficient associated with the mean quadratic term in the variance. As shown in (7) and (10), κ = φ in the NB model and κ = e σ 2 −1 in the lognormal-Poisson model. Apparently, they have different distribution assumptions on dispersion, yet there is no clear evidence to favor one over the other in terms of goodness of fit. In the proposed LGNB model, there are two free parameters r and σ 2 to adjust both the mean in (16) and dispersion κ = e σ 2 (1+r −1 ) − 1 , which become the same as those of the NB model when σ 2 = 0, and the same as those of the lognormal-Poisson model when φ = r −1 = 0. Thus the LGNB model has one extra degree of freedom to incorporate both kinds of random effects, with their proportion automatically inferred. 
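Read from the text (a logit link for p_i, a lognormal multiplicative random effect, and the gamma-Poisson augmentation of NB(r, p_i)), the LGNB generative process can be sketched as follows. This is a hedged reconstruction rather than a transcription of Eqs. (11)-(14); the hyperparameter values and variable names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
N, P = 500, 3
X = np.column_stack([np.ones(N), rng.normal(size=(N, P))])   # design matrix with intercept
beta = np.array([-1.0, 0.5, -0.3, 0.2])                      # illustrative regression coefficients
sigma2, r = 0.25, 2.0                                        # lognormal variance and NB dispersion

psi = X @ beta + rng.normal(0.0, np.sqrt(sigma2), size=N)    # psi_i = x_i' beta + log(eps_i)
p = 1.0 / (1.0 + np.exp(-psi))                               # logit link to the NB probability p_i
lam = rng.gamma(r, p / (1 - p))                              # gamma-Poisson augmentation of NB(r, p_i)
y = rng.poisson(lam)
# E[y_i | x_i] is approximately r * exp(x_i' beta + sigma2 / 2), consistent with the later remark
# that beta_0 + sigma^2/2 + ln r plays roughly the role of beta_0 in the Poisson and NB models.
```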
Default Bayesian Analysis Using Data Augmentation As discussed in Section 3, the LGNB model has an advantage of having two free parameters to incorporate both kinds of random effects. We show below that it has an additional advantage in that default Bayesian analysis can be performed with two novel data augmentation approaches, with closed-form solutions and analytical update equations available for both Gibbs sampling and VB inference. One augmentation approach concerns the inference of the NB dispersion parameter r using the compound Poisson representation, and the other concerns the inference of the regression coefficients β using the Polya-Gamma distribution. Inferring the Dispersion Parameter Under the Compound Poisson Representation We first focus on inference of the NB dispersion parameter r and assume we know {p i } i=1,N and h, neglecting the remaining part of the LGNB model at this moment. We comment here that the novel Bayesian inference developed here can be applied to any other scenarios where the conditional posterior of r is proportional to N i=1 NB(y i ; r, p i )Gamma(r; a 0 , 1/h), for which a hybrid Monte Carlo and a Metropolis-Hastings algorithms had been developed in Williamson et al. (2010) and Zhou et al. (2012), but VB solutions were not yet developed. As proved in Quenouille (1949), y ∼ NB(r, p) can also be generated from a compound Poisson distribution as where Log(p) corresponds to the logarithmic distribution (Barndorff-Nielsen et al., 2010) Using the conjugacy between the gamma and Poisson distributions, it is evident that the gamma distribution is the conjugate prior for r under this augmentation. Note that to obtain (22), we use the relationship proved in Lemma 1 of the supplementary material that Gibbs sampling for r proceeds by alternately sampling (24) and (21). Note that to ensure numerical stability when r > 1, instead of using (25), we may iteratively calculate R r in the way we calculate F in (23). We show in Figure 1 of the supplementary material the matrices R r for r = .1, 1, 10 and 100. Inferring the Regression Coefficients Using the Polya-Gamma Distribution Denote ω i as a random variable drawn from the Polya-Gamma (PG) distribution (Polson & Scott, 2011) as ωi ∼ PG(yi + r, 0). We have E ωi exp(−ω i ψ 2 i /2) = cosh −(yi+r) (ψ i /2). Thus the likelihood of ψ i in (11) can be expressed as Given the values of {ω i } i=1,N and the prior in (15), the conditional posterior of ψ can be expressed as and given the values of ψ and the prior in (31), the conditional posterior of ω i can be expressed as Gibbs Sampling Inference Exploiting conditional conjugacy and the exponential tilting of the PG distribution in Polson & Scott (2011), we can sample in closed-form all latent parameters of the LGNB model described from (11) to (14) as Sampling Li with (24), Sampling r with (21) where Note that a PG distributed random variable can be generated from an infinite sum of weighted iid gamma random variables (Devroye, 2009;Polson & Scott, 2011). We provide in the supplementary material a method for accurately truncating the infinite sum. 
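The two closed-form updates referred to above (sampling L_i and r via the compound Poisson representation, and the Gaussian conditional for the regression coefficients under the Polya-Gamma augmentation) can be illustrated as follows. This is a sketch of a commonly used form of these steps, not a transcription of Eqs. (21), (24), and the sampling equations listed above; for simplicity it takes ψ_i = x_i'β (suppressing the lognormal random effect and the σ² update), and the function and hyperparameter names are assumptions.

```python
import numpy as np

def sample_r(y, p, r, a0=0.01, h=1.0, rng=np.random.default_rng()):
    """Resample the NB dispersion r via the compound Poisson (CRT) augmentation (a sketch)."""
    # L_i = number of 'tables' formed by y_i 'customers' with concentration r
    L = np.array([(rng.random(yi) < r / (r + np.arange(yi))).sum() for yi in y])
    rate = h - np.log1p(-p).sum()           # log(1 - p_i) < 0, so the posterior rate exceeds h
    return rng.gamma(a0 + L.sum(), 1.0 / rate)

def sample_beta(X, y, r, omega, alpha, rng=np.random.default_rng()):
    """Gaussian conditional for beta under Polya-Gamma augmentation, with psi_i = x_i' beta."""
    kappa = (y - r) / 2.0                   # PG pseudo-observations
    prec = X.T @ (omega[:, None] * X) + np.diag(alpha)   # posterior precision
    cov = np.linalg.inv(prec)
    return rng.multivariate_normal(cov @ (X.T @ kappa), cov)
```

In a full sampler these steps would alternate with draws of ω_i ~ PG(y_i + r, ψ_i), for example via the truncated sum of gamma random variables mentioned above, together with updates of the lognormal random effects and σ².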
Univariate Count Data Analysis The inference of the NB dispersion parameter r by itself plays an important role not only for NB regression (Lawless, 1987; Winkelmann, 2008) but also for univariate count data analysis (Bliss & Fisher, 1953; Clark & Perry, 1989; Saha & Paul, 2005; Lloyd-Smith, 2007), and it also arises in some recently proposed latent variable models for count matrix factorization (Williamson et al., 2010; Zhou et al., 2012). Thus it is of interest to evaluate the proposed closed-form Gibbs sampling and VB inference for this parameter alone, before introducing the regression analysis part. We consider a real dataset describing counts of red mites on apple leaves, given in Table 1 of Bliss & Fisher (1953). There were in total 172 adult female mites found on 150 randomly selected leaves, with a count of 0 on 70 leaves, 1 on 38, 2 on 17, 3 on 10, 4 on 9, 5 on 3, 6 on 2, and 7 on 1. This dataset has a mean of 1.1467 and a variance of 2.2736, and is clearly overdispersed. We assume the counts are NB distributed and infer r with a hierarchical model, where i = 1, · · · , N and we set a = b = α = β = 0.01. We consider 20,000 Gibbs sampling iterations, with the first 10,000 samples discarded and every fifth sample collected afterwards. As shown in Figure 1, the autocorrelation of the Gibbs samples decreases quickly as the lag increases, and the VB lower bound converges quickly even when starting from a bad initialization (r initialized at twice the converged value). The estimated posterior mean of r is 1.0812 with Gibbs sampling and 0.9988 with VB. Compared to the method of moments estimator (MME), MLE, and maximum quasi-likelihood estimator (MQLE) (Clark & Perry, 1989), which provide point estimates of 1.1667, 1.0246, and 0.9947, respectively, our algorithm is able to provide a full posterior distribution of r and makes it convenient to incorporate prior information. (Note that there is a typo in B.2 Lemma 2 and other related equations of Polson & Scott (2011): E(ω) = (a/c) tanh(c/2) should be corrected to E(ω) = (a/(2c)) tanh(c/2).) The calculation details of the MME, MLE, and MQLE, the closed-form Gibbs sampling and VB update equations, and the VB lower bound are all provided in the supplementary material and omitted here for brevity. Regression Analysis of Counts We test the full LGNB model on two real examples, with comparison to the Poisson, NB, lognormal-Poisson, and inverse-Gaussian-Poisson (IG-Poisson) regression models. The NASCAR dataset, analyzed in Winner, consists of 151 NASCAR races during the 1975-1979 seasons. The response variable is the number of lead changes in a race, and the covariates of a race include the number of laps, the number of drivers, and the length of the track (in miles). The MotorIns dataset, analyzed in Dean et al. (1989), consists of Swedish third-party motor insurance claims in 1977. Included in the data are the total number of claims for automobiles insured in each of 315 risk groups, defined by a combination of DISTANCE, BONUS, and MAKE factor levels. The number of insured automobile-years for each group is also given. As in Dean et al. (1989), a 19-dimensional covariate vector is constructed for each group to represent the levels of the factors. To test goodness-of-fit, we use the Pearson residuals, a metric widely used in GLMs (McCullagh & Nelder, 1989), calculated from the estimated mean and quasi-dispersion, whose calculations are described in detail in the supplementary material.
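The method-of-moments figure for the red-mite data can be reproduced from the NB variance relation Var(y) = μ + μ²/r, which gives r̂_MME = m²/(s² − m); the check below is a simple illustration, not the authors' code.

```python
m, s2 = 1.1467, 2.2736            # sample mean and variance of the red-mite counts
r_mme = m**2 / (s2 - m)
print(round(r_mme, 4))            # ~1.1668, matching the MME of 1.1667 quoted in the text
```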
The MLEs for the Poisson and NB models are wellknown and the update equations can be found in Winner; Winkelmann (2008). The MLE results for the IG-Poisson model on the MotorIns data were reported in Dean et al. (1989). For the lognormal-Poisson model, no standard MLE algorithms are available and we choose Metropolis-Hastings (M-H) algorithms for parameter estimation. We also consider a LGNB model under the special setting that r = 1000. As discussed in Section 3.1, this would lead to a model which is approximately the lognormal-Poisson model, yet with closed-form Gibbs sampling inference. We use both VB and Gibbs sampling for the LGNB model. We consider 20,000 Gibbs sampling iterations, with the first 10,000 samples discarded and every fifth sample collected afterwards. As described in the supplementary material, we sample from the PG distribution with a truncation level of 2000. We initialize r as 100 and other parameters at random. Examining the samples in Gibbs sampling, we find that the autocorrelations of model parameters generally reduce to below 0.2 at the lag of 20, indicating fast mixing. Shown in Table 1 are the MLEs or posterior means of key model parameters. Note that β 0 of the LGNB model differs considerably from that of the Poisson and NB models, which is expected since β 0 + σ 2 /2 + ln r in the LGNB model plays about the same role as β 0 in the Poisson and NB models, as indicated in (16). Tables These observations also support the claim in (Winkelmann, 2008) that the lognormal-Poisson model should be reevaluated since it is appealing in theory and may fit the data better. Compared to the lognormal-Poisson model, the LGNB model has an additional advantage that its parameters can be estimated with VB inference, which is usually much faster than sampling based methods. As shown in A clear advantage of the Bayesian inference over the MLE is that a full posterior distribution can be obtained, by utilizing the estimated posteriors of σ 2 , r and β. For example, shown in Figure 2 are the estimated posterior distributions of the quasi-dispersion κ, represented with histograms. These histograms should be compared to κ = 0 in the Poisson model, and the NB model's MLEs of κ = 0.1905 and κ = 0.0118, for the NASCAR and MotorIns datasets, respectively. We can also find that VB generally tends to overemphasize the regions around the mode of its estimated posterior distribution and consequently places low densities on the tails, whereas Gibbs sampling is able to explore a wider region. This is intuitive since VB relies on the assumption that the posterior distribution can be approximated with the product of independent Q functions, whereas Gibbs sampling only exploits conditional independence. The estimated posteriors can also assist model interpretation. For example, based onΣ β in VB for the which is typically not provided in MLE. Since β 1 (Laps) and β 3 (TrkLen) are highly positively correlated, we expect the corresponding covariates to be highly negatively correlated. This is confirmed, as the correlation coefficient between the number of laps and the track length is found to be as small as −0.9006. Conclusions A lognormal and gamma mixed negative binomial (LGNB) regression model is proposed for regression analysis of overdispersed counts. Efficient closed-form Gibbs sampling and VB inference are both presented, by exploiting the compound Poisson representation and a Polya-Gamma distribution based data augmentation approach. 
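The goodness-of-fit comparisons above use Pearson residuals computed from the estimated mean and quasi-dispersion. Since the displayed formula is not reproduced above, the form below is an assumption consistent with the quasi-dispersion variance Var[y] = μ + κμ² defined earlier:

```python
import numpy as np

def pearson_residuals(y, mu_hat, kappa_hat):
    """Pearson residuals under Var[y] = mu + kappa * mu^2 (assumed standard form)."""
    return (y - mu_hat) / np.sqrt(mu_hat + kappa_hat * mu_hat**2)
```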
Model properties are examined, with comparison to the Poisson, NB, and lognormal-Poisson models. As the univariate lognormal-Poisson regression model can be easily generalized to regression analysis of correlated counts, in which the derivatives and Hessian matrices of the parameters are used to construct multivariate normal proposals in a Metropolis-Hastings algorithm (Chib et al., 1998; Chib & Winkelmann, 2001; Ma et al., 2008; Winkelmann, 2008), the proposed LGNB model can be conveniently modified for multivariate count regression, for which we may be able to derive closed-form Gibbs sampling and VB inference. As the log Gaussian process can be used to model the intensity of the Poisson process, whose inference remains a major challenge (Møller et al., 1998; Adams et al., 2009; Murray et al., 2010; Rao & Teh, 2011), we may link the log Gaussian process to the logit of the NB probability parameter, leading to a log Gaussian NB process with tractable closed-form Bayesian inference. Furthermore, the NB distribution has been shown to be important for the factorization of a term-document count matrix (Williamson et al., 2010; Zhou et al., 2012), and the multinomial logit has been used to model correlated topics in topic modeling (Blei & Lafferty, 2005; Paisley et al., 2011). Applying the proposed lognormal-gamma-NB framework and the developed closed-form Bayesian inference to these diverse problems is currently under active investigation.
Multicentric Invasive Pleomorphic Lobular Carcinoma of Breast: An Uncommon Histopathological Variant Invasive pleomorphic lobular carcinoma is a very rare histopathological variant of breast cancer, constituting less than 1% of invasive breast carcinomas. We present a 70-year-old female who presented with a lump in the right breast of 10 months' duration. The lump was insidious in onset, progressive in nature, and associated with gradual skin color changes. The patient was a known diabetic and was on regular treatment. Fine needle aspiration cytology was positive for carcinoma cells. A right modified radical mastectomy was performed. Histopathology was reported as invasive pleomorphic lobular carcinoma, grade II, with multicentric nodules; right axillary lymph nodes showed metastatic deposits with perinodal infiltration. Introduction Invasive pleomorphic lobular carcinoma (IPLC) is one of the distinctive subtypes of invasive breast carcinoma. IPLC was first described by Martinez and Azzopardi in 1979 [1] and later, in 1987, by Page and Anderson. [2] Invasive pleomorphic lobular carcinoma represents less than 1% of invasive carcinomas. [3] Unlike the classic variant, the tumor cells of the pleomorphic variant of ILC are larger in size, with large, irregular, pleomorphic, hyperchromatic nuclei, occasional prominent nucleoli, and abundant eosinophilic cytoplasm. It is considered an aggressive tumor with a poor prognosis. Case Report A 70-year-old female presented with a lump in the right breast of 10 months' duration. The lump was insidious in onset, progressive in nature, and associated with gradual skin colour changes. The patient was a known diabetic and was on regular treatment. Fine needle aspiration cytology was positive for carcinoma cells. Systemic examination was within normal limits. X-ray and ultrasonography of the abdomen and pelvis showed no evidence of metastasis. On gross examination, the right modified radical mastectomy specimen (Figure 1) measured 24.1 x 22 x 2.0 cm and weighed 817 gm. The covering skin flap measured 16.5 x 14.5 cm. The tumor reached up to the nipple and areola on gross examination. Serial cut sections through the specimen revealed a grey-white, firm tumor measuring 10 x 4.5 x 2 cm, located in the subareolar region. The tumor showed infiltrating borders and reached up to the deep surgical margin. The overlying skin showed discoloration (areas of hyperpigmentation and hypopigmentation) and appeared grossly fixed to the underlying tumor. The nearest peripheral surgical margin was 3.5 cm away from the tumor. The adjacent breast tissue showed four nodular masses, the largest measuring 1.5 x 1 x 0.3 cm and the smallest measuring 0.4 cm in maximum diameter; their cut sections revealed grey-white, firm tumor. Serial sectioning showed three lymph nodes in the axillary region (along the specimen), the largest measuring 1.2 x 0.8 x 0.3 cm, with a grey-white, firm cut section. Serial sectioning through the right axillary dissection revealed 06 lymph nodes, the largest measuring 0.5 x 0.4 x 0.1 cm, with a grey-white, firm cut section. Microscopic examination showed a tumor composed of neoplastic cells arranged in cords, an Indian-file pattern, small nests, and an alveolar pattern, scattered diffusely and multifocally in the breast parenchyma. Individual cells were round, with moderately pleomorphic hyperchromatic or vesicular nuclei with 0-2 nucleoli and a moderate to ample amount of eosinophilic cytoplasm. In areas, a targetoid appearance was noted around the terminal ducts. Mitotic activity was increased.
The tumor was seen diffusely infiltrating in single file, with dense fibrovascular stroma separating the tumor cells (Figure-). The immunohistochemistry study showed positivity for ER and PR and negativity for HER-2/neu. Our patient was treated with a right-sided modified radical mastectomy with axillary clearance and was kept on chemotherapy. On follow-up, the patient responded well to treatment. Discussion Infiltrating lobular carcinoma (ILC) of the classic type is a well-recognized type of invasive breast carcinoma. Less well appreciated is a group of variant forms of ILC that includes the solid, alveolar, mixed, apocrine, pleomorphic, signet-ring, histiocytoid, and tubulo-lobular variants. [1,4] IPLC of the breast is a very rare malignancy, accounting for less than 1% of invasive carcinomas. The histomorphological features for diagnosis are the pattern of classic lobular carcinoma, i.e., an invasive tumor frequently arranged in single linear rows (Indian file) within dense stroma, with a targetoid appearance around terminal ducts. Tumor cells are usually round with mild nuclear pleomorphism. In IPLC, the tumor cells are of high nuclear grade (II or III) and show solid and alveolar patterns. On clinical presentation, IPLC is usually larger in size than invasive breast carcinoma (NOS): the mean tumor size for IPLC was 3.2 cm, while for IBC (NOS) it is 2.2 cm. [5,6] Postmenopausal women are usually affected. In our case the tumor was very large (10 x 4.5 x 2 cm) and multicentric in nature. Mammography showed a tumor mass with a spiculated lesion; mammograms of pleomorphic breast cancer tumors tend to show well-circumscribed or spiculated mass lesions without evidence of calcification. The FNAC finding in our case was positive for carcinoma cells; however, differentiation between IBC and ILC is not possible on FNAC. The study by Jagtap et al. showed ILC with 84.6% ER/PR positivity and 23% HER-2/neu positivity on immunohistochemical study. [7] On IHC study our patient was ER and PR positive, while HER-2/neu was negative. PLCs have frequently been reported as negative for ER, PR, and E-cadherin, and positive for p53 and, in a few cases, HER-2. [8] PLBC is highly aggressive and usually presents as a grade II to III tumor, [9] and at the time of presentation may show evidence of distant metastasis. IPLC is associated with worse prognostic factors, namely larger tumor size and axillary metastasis, compared with other types of lobular carcinomas. [9] There is a higher risk of recurrence and a decreased survival rate. Conclusion We present this case of invasive pleomorphic lobular carcinoma for its uncommon type and its clinical, histomorphological, and immunohistochemical pattern. Our case showed a multifocal tumor with extensive lymph nodal metastasis and aggressive clinical behavior.
How Sustainable Is Transnational Farmland Acquisition in Ethiopia ? Lessons Learned from the Benishangul-Gumuz Region Due to the nature of available land as one of the main attractions for investment, land lease marketing in Sub-Saharan Africa is appearing on policy agenda. This paper describes critical land-related institutional and governmental frameworks that have shaped the contemporary land governance and land lease contracts in Ethiopia. It also examines the effectiveness of the land lease process regarding economic, social, and environmental expectations from agricultural outsourcing. Both qualitative and quantitative data analyses were used and results showed that the size of the land cultivated by investors is significantly lower than the agreed-upon size in the contract. Besides, the supply of land to large-scale commercial investors in Ethiopia is made without adequate land use planning, land valuation, and risk analysis. Furthermore, limitations in monitoring systems have contributed to meager socio-economic gains and led to deforestation. Accordingly, the study concludes that supplying vast tracts of farmland to large-scale agricultural investors requires integrated land use planning, land valuation and governance, monitoring systems, and a capacity to implement the various social and environmental laws in coordination with other sectors. Improving rural infrastructure, particularly road, is also indispensable to enhance the level of performance of commercial farms. Last but most importantly, the customary land holding rights of residents should be respected and institutionally recognized. Transnational Land Acquisition Among the academia, practitioners, policy makers, and institutions involved in the transnational land acquisition, or the "land grabbing" and land deals, there is lots of debate with full of discontents, emotions, and justifications.Most of the recent literature highlight the term "land grabbing" as a contemporary phenomenon caused by the combined effects of the global stock market crash and the food and energy crisis of 2008/2009 [1][2][3][4].However, the term was also mentioned in earlier works by Karl Marx for the first time in the context of the enclosures of England: "The laborers are first driven from the land, and then come the sheep.Land grabbing on a great scale, such as was perpetrated in England, is the first step in creating a field for the establishment of agriculture on a great scale" [5].Following the global financial crisis as well as the increasing demand for food and bio-fuel and the effects of climate change, a new wave of land transactions in many developing countries have been stemmed [6,7]. In order to produce and export food and bio-fuel, government officials of "land abundant" agrarian countries facilitated foreign companies and governments to move in a move that resulted in the dispossession, expulsion, and adverse incorporation of local communities [8].For example, South Korean Daewoo Logistics planned to invest in 1.3 million hectares of land to produce maize in Madagascar, but after a large debate in the country the transitional government cancelled its license [9].Furthermore, Hamelinck (2013) [10] has shown that many other land deals which had aspired to produce bio-fuel underwent serious public pressure and resulted in the cancellation of long-term land use agreements.Yet studies from Mozambique and Tanzania reported significant interest in biofuel projects [11].In Ghana, Schoneveld et al. 
(2010) [12] focus on a biofuel project in which 800 hectares of land were taken, causing 70 households in three different villages to lose their farmlands.The governments of developing countries which host such investments on the other hand, would consider transnational land acquisition as part of their development strategy; an opportunity to attract foreign direct investment (FDI) to their agricultural sectors and would supply land to transnational investor companies and to mainly rich governments [13][14][15][16][17].In addition to governments maintaining transnational land deals, international development institutions facilitate the acquisition of land by big corporations (both foreign and domestic), typically in the form of leases or concessions rather than outright land purchases for development [2].Furthermore, governments in countries that have a high potential for agricultural production and a good competitive advantage are encouraging renewed commercial investment from domestic and foreign investors.Several governments wish to allot many of the "idle" lands within their countries and in order that agribusiness implementers may acquire them.For instance, in July, 2009 the government of Ethiopia allocated 1.6 million hectares of land, extendable to 2.7 million, for investment in commercial farmlands [18].All in all, due to increasing population and urbanization rates and changing diets, the global demand for food is increasing, along with prices of food, over a longer period.The limitations in the global food supply and the rising demands for energy and agricultural products make agriculture an interesting option for investment [19][20][21]. Unfortunately, quantitative assessments of the geography, scale, and trends in land grabbing are no longer available yet overall commercial interest in farmland is expected to continue to increase and to become a long-term trend.Figure 1 shows the global trend of land acquisition concentrate in key countries in Africa, Southeast Asia (Cambodia, Laos, the Philippines, Indonesia) and parts of Eastern Europe (e.g., Ukraine).As shown in the figure, 83 million ha of land were acquired globally out of which 56 million ha were acquired in Africa.However, there is a large discrepancy in the figures between the amounts of land acquired.According to the Land Matrix data (accessed on 30/01/2016) [22], there are 42,213,284 ha of concluded transnational deals globally, of which 18,789,391 ha belong to Africa.This demonstrates the point that precise data are not yet available, and for a number of reasons such data can over-or underestimate the true extent of the land acquired.It has also been found that two-thirds of the cropland that interest foreign investors are located in Africa, mainly in Sub-Saharan Africa (SSA)-e.g., South Sudan, Sudan, Tanzania, Mozambique, Ethiopia, Madagascar, The Democratic Republic of Congo, Liberia, and Zambia [23,24]. 
While Ethiopia is one of the leading agrarian economies in attracting foreign direct investment, according to Rulli and D'Odorico (2014) [25], 10.4 million people (11% of the total population of the country) could have been fed by the crops grown in the acquired land.The country is endowed with abundant agricultural resource bases and like that of other developing countries' governments, the hope of the government of Ethiopia is that investors will bring capital, know-how, technology, and market-access to their economies.Investors could therefore act as a catalyst for economic transformation in rural areas where the Growth and Transformation Plan (GTP) of the country is key.This, in turn, is expected to generate employment, increase public revenue, improve local people's access to infrastructure and upgrade the overall standard of living in local communities.However, the important question has often remained: "Does this trickle-down effect actually take place and who really benefits from these transnational land acquisitions?"The country's institution which promotes private investment, the Ethiopian Investment Agency (EIA) announced that more than 11.50 million hectares of land has the potential for farming and agricultural investment [16,26].Yet of the stated amount of land, so far, close to 2.5 million hectares have been given to investors who came from more than 32 different countries and invested in the different regions of the nation between 2007 and 2013.The focus of this paper is the lack of comprehensive evidence regarding the national, regional, and grass root level issues and effects of transnational land acquisition in Ethiopia with a particular emphasis to the Benishangul-Gumuz region. Conceptual Framework As part of the policy agenda on the SSA, land marketing has come into question.Many of the SSA countries have gone through consecutive institutional reforms concerning land use; most of which were made with the support of the World Bank and developed countries [27].While national land use policies and strategies vary across countries, common land-related problems, conservation strategies, and expertise solutions are becoming apparent and provide timely lessons for Ethiopia.Land management systems are increasingly checked against information-based land use models that contribute to efficient and effective land use management.Globalization and technological development further enhance the establishment of multifunctional information systems by incorporating diverse land features, uses, rights, regulations, and other pertinent data [28].To this end, contemporary land management systems should consider the diverse interests and competitive purposes of land before it is marketed to commercial investment on a long-term basis [29].A holistic approach to land management most importantly requires information, recognition of the human, social and governance elements, as well as adaptation of improved land use practices elsewhere.This approach plays a central role in the enhancement of informed land marketing chains. As there are a number of tools which help to conduct sustainable impact assessments of development projects [30][31][32], we adapt the Integrated Land Use Management and Responsible Agricultural Investment framework because, according to Enemark et al. (2005) [28] and FAO et al. 
(2010) [33], institutional arrangements, land information systems, and land use management more appropriately show the informative level of land markets.Furthermore, it is important to integrate federal and regional interests including multiple land use sectors and dimensions of sustainability and to do so on all scales [33].The interplay between institutional arrangements and land information systems determines the quality of land use management which again determines the effectiveness and efficiency of land markets [28,33] (Figure 2).In an integrated land use management system, the interests of different land uses are balanced against the broader developmental objectives of a country or region.These interests serve as a base for the plans of use and for control through institutional mechanisms and incentives.As described by Reidsma et al. (2011) [34] and Enemark et al. (2014) [29], effective planning and control of land use requires up-to-date data so as to understand the spatial, temporal, and anthropogenic consequences of land use policies and decisions.To this end, in addition to integrating sectoral and spatial components, the process of land use planning and implementation should be participatory.Furthermore, as for other key issues, allotting land for agricultural investment should include and be designed to meet not only the interests of the investors but also the needs of the various stakeholders [35], particularly local communities and those whose livelihoods are attached to the land and environmental sustainability.With the support of international development partners and civil society organizations the governments of such countries like Ethiopia need to protect their local communities and natural resources while attracting foreign investments in order to guarantee their economic growth.Apart from the host nations and intergovernmental bodies, investors should also commit and take complete responsibility in the present situation.Accordingly, the impacts of development projects can only be sustained with an integrated plan where institutional arrangements, land information systems, and land use management are assured.A responsible agricultural investment is an inclusive situation where the process of investment respects resource use rights of local communities, ensures local food security, is transparent, and encourages consultation and participation, investment viability, and environmental sustainability [33]. Objectives According to "databases" that were directly obtained from the Ministry of Agriculture in Ethiopia in 2013 [36], 2.11 million ha of the total 11.5 million ha potential land was already given to investors, out of which 600,254 ha is the share of the Western Ethiopian lowland state called the Benishangul-Gumuz region [26].Focusing on this region, the following key research questions are considered in this study: (i) What are the key institutional frameworks which shaped the contemporary land governance in Ethiopia?(ii) How effective would the integration and harmonization of large scale commercial farming with other local development projects be? and (iii) How effective is the land lease process from the context of integrated land management, and economic, social, and environmental payoffs of agricultural investment? 
Description of Study Area This paper relies on a data set from the Benishangul-Gumuz region of Ethiopia, which is located at 09.17°-12.06° N latitude and 34.10°-37.04° E longitude along the western Ethio-Sudan border (Figure 3), with a total area of about 50,699 km².According to the Regional Bureau of Agriculture, the overall area of arable land in the region is about 911,877 ha, of which less than half has been cultivated.It has been noted that 189,534 hectares of land in the region are potentially irrigable [37].The region is located in the area where the Grand Renaissance Dam of Ethiopia is under construction on the Abay River (Blue Nile), and it contains the Beles, Dabus, Anger, Dhidhsa, and Dindir Rivers, which are tributaries of the Abay River.Agro-ecologically, the study area can be classified into three major climatic zones: (a) the lowland or kola zone (75% of the region), with an altitude below 1500 m; (b) the midland (woynadega) zone, which constitutes about 24% of the region and has an altitude of 1500-2500 m; and (c) the dega agro-ecologic zone, which accounts for only 1% of the area of the region and lies at an altitude above 2500 m [38]. Data Collection and Analysis In order to analyze the process of commercial farmland acquisition and its effects, a multilevel exploration of qualitative and quantitative data is required.Accordingly, this study collected data at federal, regional, district, and village or farm levels (Table 1).A rigorous archival review, a review of land lease contracts and accompanying documents, extraction of relevant data from accessed databases, and interviews were conducted.Participatory field observation and key informant interviews were also held at federal, regional, and local levels; because this information is considered highly sensitive in Ethiopia, it is not disclosed here.To conduct the survey, we reassured the respondents that their information would remain confidential.A total of 26 interviews were conducted (four in federal-level institutions, six in regional bureaus, seven at district level, and nine at village level) in the period from October 2013 to January 2014.Additionally, 22 owners and/or managers of commercial farms in the study site were interviewed.A survey of 89 commercial farms was made in the period May 2013-January 2014.Using a structured questionnaire, data were collected to assess the performance of commercial farmers on the ground.The questionnaire was checked for face validity, and its reliability was confirmed by estimating Cronbach's alpha (α = 0.78). Descriptive statistics and relevant statistical tests such as the paired sample t-test were used to complement the qualitative data analysis and to explain the economic, social, and environmental implications of the ongoing commercial land acquisition in the study area.STATA (version 11) software was used to conduct the statistical computations, and a paired sample t-test (with a significance level of α = 0.05) was run to compare the amount of land taken in lease form with the amount of land developed.Furthermore, with a significance level of α = 0.05, a paired t-test was also made for the size of land leased by an investor versus the size of land cultivated, and a regression analysis was run on the amount of land developed and the employment generated, so as to see whether significant amounts of land were developed and the expected level of employment opportunities was achieved.
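A paired comparison of leased versus cultivated land of the kind described above can be illustrated with SciPy's paired t-test. The hectare figures below are made-up placeholders rather than the survey data (the actual analysis used STATA 11):

```python
import numpy as np
from scipy import stats

leased     = np.array([5000., 3000., 10000., 2500., 4000.])   # ha stated in the lease contract (placeholder values)
cultivated = np.array([1800., 1200.,  4500., 1000., 2100.])   # ha actually developed (placeholder values)

t, p = stats.ttest_rel(leased, cultivated)
print(f"t = {t:.2f}, p = {p:.4f}")    # compare p against the significance level alpha = 0.05
```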
Land in Ethiopia: The Institutional and Governance Framework According to the Constitution of the Federal Democratic Republic of Ethiopia [39], the right of land ownership and other natural resources of the country exclusively belong to the State and the peoples of Ethiopia.This implies that, all subsidiary laws and regulations of the country which could be issued either by the federal or regional state bodies recognize usufructuary rights to land which can be in the form of state, communal or group, and private holdings.While the right to use and inherit land is preserved, private ownership of land is prohibited.As explained by GOE (2005) [40], our study also corroborates that the private ownership of land is prohibited to ensure equity of land use among citizens and between generations, especially in the rural areas where livelihood exclusively depends on land; if not, the country would be threatened by the social predicaments of land accumulation within a few hands.Therefore, it is up to the federal government to determine the amount and type of land a citizen may hold in the country.The main justification given for this system is that if land is privatized, small-holder farmers may sell it when they face financial difficulties (desperation sales) and ultimately the country's land would be under the hands of a few rich farmers.According to Galor et al. (2009) [41], the concentration of land in the hands of a minority and inequality in land ownership adversely affects institutions that promote human capital, for instance public schooling.A study by Cárdenas (2012) [42] shows that such instances of concentrated land ownership have already occurred in some Latin American countries such as Colombia.Similarly, Pestana et al. (2013) [43] in their study in Brazil showed that inequitable allocation of land contributed to land-related conflicts and unrest in the country. Prohibiting private land ownership, on the other hand, may not be an adequate guarantee for equitable distribution of land among citizens.As argued in some studies [44,45] policies that prohibit private ownership of land have been criticized because it stifles farmland investments and could lead to unproductive and excessively small parcels of land size.According to Nyssen (1998) [46], the land tenure system in Ethiopia has made positive contributions to the unprecedented involvement of farmers in soil and water conservation because land is "equally" shared among farmers unlike previous land holding systems.Some studies [8] showed that governments play an important role in of the political economy of a global capitalist system where the contemporary ramifications of land acquisition are considered as part of "security mercantilism" in international relations.Yet the continuing debate in Ethiopia is whether or not consolidating all the land under the custody of the state is considered as suppressing citizens' rights to privately own land.The federal government, through the constitution and other laws that followed later, assigned mandates and jurisdiction to the different federal and regional government organs as summarized chronologically in the Appendix. Evolution of Land Use-Land Governance Institutional Frameworks in Ethiopia The current institutional framework of Ethiopia concerning land use governance is the result of a number of consecutive institutional and regulatory developments.According to the 1955 constitution of Ethiopia [47], all natural resources of the country (water, forest, land, etc.) 
became the state's domain since 1955.Later, in 1974, "Land to the tiller" was one of the mottos of the socialist-driven revolution in the country as land ownership was one of the major issues at the time and of the revolution which overthrew the imperial/feudal system in the country.As a result of the 1974 socialist revolution in Ethiopia [48], all forms of private ownership of land were abolished without any compensation, and all lands used for agriculture or grazing purposes throughout the country were declared to be the collective property of Ethiopians.Due to the functioning supreme law of Ethiopia (i.e., the 1995 constitution of the country), land and all other natural resources are commonly owned by the Nations, Nationalities, and Peoples of Ethiopia and are not subject to sale or to other means of exchange.As noted by GOE (1995) [39,49], regional states are given the power and mandate to administer land and other natural resources in accordance with federal laws.Peasants and pastoralists have usufructuary right over land without any charge and without time limit, and have safeguards against expulsion from "their" land unless it is intended for public purposes and are then subject to compensation. In addition to the chronology of stated proclamations and regulations, under the tenets of the 2005 Federal Rural Land Administration and Land Use Proclamation (Federal proclamation no.456/2005), non-pastoralist regions of Ethiopia have enacted their own regional rural land administration laws, regulations, and guidelines.The first two regions in this case are Tigray (regional proclamation no.97/2006, and land use regulation no.37/2007), and Amhara (regional proclamation no.133/2006, and land use regulation no.51/2007).As well as the Southern Region's Rural Land Administration and Use Proclamation (regional proclamation no.97/2006) and the Oromia Rural Land Administration and Use Proclamation (regional proclamation no.130/2007).As explained by Tigistu (2011) [50], the Gambela and the Benishangul-Gumuz regions enacted their land proclamation in 2010, with the Benishangul-Gumuz regional state having a regional Rural investment Land Use Regulation, regulation no.29/2009.The remaining regions (Afar, Harari, and Somali), lack legislation for administering their land for rural and clan-based uses, hence it is currently difficult to enforce laws and formally recognize peasants' and pastoralists' rights in these regions. Investment Land Supply and the Land Lease Contracts Investment Land Supply According to official investment guidelines developed by the Ethiopian Investment Agency, as of 2013, a total of 11,545,902 ha of potential investment land was made available to investors in Ethiopia (Table 2).In the Benishangul-Gumuz region, the total amount of land delivered to both foreign and domestic investors has taken the lead with 600,254 hectares, followed by Oromiya and Gambella regional states with 458,292 and 399,491 ha, respectively (Table 2).Source: MoA (2013) [36]. Given the discrepancy in stated figures among different authors, the figure seems exaggerated and, therefore, further assessment is required.For instance, according to Deininger et al. 
(2011) [51], since mid 2010, the total amount of land allocated to local and foreign investors is 1.2 million ha with a regional distribution of 535,000 ha (Gambela), 380,000 ha (Oromia), 191,500 ha (Benishangul-Gumuz), 60,500 ha (SNNP), 20,000 ha (Afar), and 18,000 ha (Amhara).However, according to recent information from the Ministry of Agriculture [36] (i.e., since October, 2013), in total, there are about 238 foreign agricultural investment projects from about 32 origins/countries (including the Ethiopian diasporas) which took a total of 736,228 ha of land in the different regions of Ethiopia.There is a variation in stated figures among different authors, and such discrepancies of figures are, however, common not only for Ethiopia but also for other host countries.Furthermore, there is large discrepancy in the figures between the amount of land stated on an investor's investment contractual agreement (on the document) and the actual amount of land delivered on the ground, where the latter is significantly lower than the former.According to our survey results, in most cases the amount is lower by up to 50%-60%.Edelman et al. (2013) [52] believe the reasons for such discrepancies are a result of poor planning in terms of land use and a lack of information about investment land supplies. The soaring demand for agricultural products and the uncertainty of international food markets has recently drawn the attention of governments and agribusiness firms to investments in productive agricultural land, mostly in the developing world [25].The estimated potential areas of agricultural investment for the cultivation of agricultural products in all regional states of the country are presented in the Figure 4.The country also has a huge potential for large-scale plantations to produce pulse, cotton and oil crops. Land Lease Contracts According to the revised land use directive which was developed in 2009 and became effective in 2011/2012, there was no maximum limit set on the amount of land that an investor could take.The maximum threshold of land that can be given to an investor is set based on the type of land development (Table 3), the capacity of the project (its capital and skilled man power), level of employment creation, and fertility level of the land.While there is no limit set on the amount of land that an investor can take, investors cannot buy as much land as they want due to a maximum limit on the type of investment land. 
In Ethiopia, as opposed to other countries (e.g., Canada, etc. [4,15,53-55]), investors cannot lease as much land as they want. There is a limit both in terms of land size and land lease period, and land is leased to investors depending on the area of investment and other terms (Tables 3 and 4). As stated by EIA (2012) [26], the Ministry of Agriculture has been given the responsibility of providing technical support to private investors in agriculture. According to the MoA (2009) [56], land parcels of 5000 ha and above located in a single place are administered by the Ministry of Agriculture, while parcels below 5000 ha located in different places (pocket lands) are administered by the appropriate regional office. The Benishangul-Gumuz regional government has also set a maximum land threshold for different areas of investment (Table 3). The regional government's proposition of delivering 200-500 ha of land to a single investor is ambitious, vague, and imprudent, and some of its regulatory points are contrary to what is stated by the Federal Government: it did not indicate a land lease period limit for investment land of 200 ha and less, and its thresholds are expressed in terms of both land size and years. Specifically in the horticultural sector, the land lease period further varies according to the type of land (i.e., cultivated or non-cultivated). Moreover, the regional government limited the land lease holding period to 25-35 years (Table 4; source: regional land use regulation [57]).

Although the regional government set 20-35 years as the maximum lease period for investment lands in the region, there are companies in the same region with contracts of 50 years. These companies hold land on a large-scale basis, and the contractual agreement was made not between the regional government and the company but between the Federal Government and the company. This is because the regions have "delegated their authority upwards" to the Federal Government (i.e., the Ministry of Agriculture) for leasing adjoining farm land areas of above 5000 ha, so as to expedite the development of large farm lands for export and industrial crops.

Bilateral Land Lease Contracts: Lessee-Lessor Rights and Obligations

Commercial land lease agreements in Ethiopia are generally bilateral, meaning that they are made between the investor, termed in contracts the "lessee", and the responsible government body, mainly the Ministry of Agriculture (MoA), called the "lessor". The following describes the rights and obligations of each party.

Lessee

The lessee is vested with rights to develop the land for the cultivation of crops and plantations as agreed upon with the lessor. The lessee can build relevant infrastructure and facilities that help to enable its investment operations, upon consultation with and a permit from the concerned bodies. Moreover, the lessee can administer or develop the leased land by itself or through a legally delegated agency or person. The lessee has full rights to use mechanization or other methods it considers proper for developing the land. As explained by Stebek (2011) [13], the lessee retains the right to obtain additional land based on its performance on the ground. Upon presentation of convincing reasons, or for better options, and by giving the lessor at least six months' notice, the lessee can cancel the contract.
The lessee is obliged to take care of and conserve the leased land and its natural resources with specific obligations to: (i) conserve trees which have not been cleared for land preparation, (ii) apply appropriate farming methods to avoid soil erosion, (iii) adhere to all laws and proclamations related to the conservation of natural resources, and (iv) conduct environmental impact assessment (EIA) followed by submission of the EIA report within three months (the EIA obligation was added to the contractual agreements recently and used for those land lease agreements made since 2010/2011 following the environmental damages incurred by most of the land lease agreements made between 2007/2008-2009/2010), (v) submit an action plan in advance about the use of leased land together with the contract agreement document to the Ministry of Agriculture at the time of agreement (although what is practically found is the contract agreement and no submitted action plan!), (vi) pay the agreed down-payments, and (vii) start operation on the land within six months from the date of the execution of the land lease agreement.The lessee is expected to develop at least one-third of the land within a one-year period, and develop all the land within three years from the date of the execution of the land lease agreement.For some companies this obligation is adjusted to "develop at least 10% of the leased plot of land within the first year period from the date of the execution of the land lease agreement, and should develop the whole leased land within a five-year period".Yet, no justifiable reason is found for having two different statements in this part. Upon terminating the land lease contract or revocation of investment licenses, the lessee should clean the land from all his assets and hand it over to the lessor within one year and provide investment activity reports and correct data upon request from the Ministry of Agriculture.Furthermore, the lessee should pay the land lease rent every year at the rate stated in the agreements.The lessee should not make any unauthorized use of leased land without written consent from the lessor.Until three-fourths of the land is developed, the lessee cannot transfer the land or properties developed on the land to any other individual or company.In addition, an organization or company which leased the land in its name cannot reallocate the land to its individual members or shareholders, and to do so results in an automatic revocation of a contract.Upon developing three-fourths of the land, the lessee can transfer the land or properties developed on the land to any other individual or company only with a permit from the lessor. Lessor The lessor holds exclusive rights to monitor the activities of the lessee in accordance with the mutually-agreed contract without hindering any of the activities and operations of the lessee.The lessor also has the right to amend the land rent rate (decrease or increase) in consultation with the lessee.Finally, after 2010/2011, a new statement was added under the lessor's right: "With a convincing reason and for using the land for a better function ( . . . ) the lessor can revoke the land use contract" (Article 5:5 of the contractual template [58]).This implies that, according to key informant officers in the Ministry of Agriculture, the lessor can revoke the leased land at any time and has created a sense of insecurity among potential investors and preclusion among investors which had already leased land. 
The following are the obligations of the lessor: (i) to supply investment land which is free of any constraint on the ground to the lessee within ten days from the time of contractual agreement; however, this does not happen in practice as many of the investors received the land between six and 15 months after conclusion of the contract and faced many local impediments (for instance land assumed as free by the government was already in use by local people), overlapping problems with other investors, and unsuitable land for agricultural practices; (ii) to provide investment privileges and incentives in accordance with available directives promulgated by the government; (iii) to ensure the lessee that there are no impediments (legal or other) in relation to land preparation; (iv) to secure access to the lessee for soil testing facilities or map databases of regional or federal government research centers; (v) to guarantee peace and security (in collaboration with other governmental bodies) around the investment areas free of cost from the side of the investee; and (vi) if the investee fails to start developing land within the agreed time or fails to develop land in accordance with agreements entered or causes any damage to local natural resources or fails to pay lease fees, the lessor may be obliged to extend the time for such compliance or obliged to terminate the contract. Generally, the land lease contractual agreement focuses on maintaining the interests of the two parties: the lessee (the commercial companies which acquire land) and the lessor (the governmental body-either the Ministry of Agriculture or the corresponding delegated body in the regions, i.e., the regional Bureau of Agriculture and Rural Development).There is no single statement in any of the contractual documents or templates which requests the participation of any other stakeholders (e.g., local communities) when signing the land lease contracts.In addition to the rights and obligations of the parties in the contract, the economic, social, and environmental significance of land lease agreements should be explored, for instance: How feasible are the contracts economically?What are the land lease prices across the districts of the Benishangul-Gumuz region?What is the status of large-scale agricultural investment projects in the region and what are the factors which determine the progress at farm level in terms of the amount of cultivated land?Additionally, the social and environmental aspects of the contracts should be explored so as to have a better understanding of the contracts and their practical implementation on the ground.The following section deals with the economic, social, and environmental perspectives of large-scale agricultural investment in the study sites. 
Farm Land Lease Price The expectation that countries have when using land to market to and attract foreign direct investment is clear; they wish to attract foreign capital, economically use "free" land and improve the internal state revenue through business (or income) tax.Yet, there is no objective land valuation in Ethiopia both at the federal and regional levels, although there are a number of methods and techniques that could be used to measure the benefits of farmland, such as hedonic pricing and contingent valuation [59][60][61][62].Given that, the price quoted for different types of land is not based on its true values and amenities.The federal government proposes the price of land simply on the basis of distance from the capital city to the location of the leased land.To put it in a nutshell, the stated price of land does not reflect the real value of land and the prices are very low compared to local land rental prices.The country is not benefitting from the actual benefits of land because the value of land is not properly determined, which results in extremely low land lease prices for large-scale commercial farmlands compared to the local informal land market values.Furthermore, there is no formal economic benefit-sharing mechanism with local people. It is also hoped that investors would construct infrastructures and transfer technology so as to benefit their host country, yet none of these are formal requirements expected from investors.According to the interviews made with the lessees, the lower land lease prices are considered as a compensation for the state's low infrastructure, bureaucratic land acquisition process, and challenging business operating environment.Although the government of Ethiopia has a number of consecutive reforms to create a smooth investment and business operating environment, according to a recent World Bank report: "Globally, Ethiopia stands at 166 in the ranking of 189 economies on the ease of starting a business" [63].Nevertheless, this ranking is a point of debate as the number of business enterprises formed in the country within a few years has increased by six fold in the period 2002-2012.As noted by GOE (2012) [64] and EIA (2013) [16] there are also a number of investment incentive packages, improved infrastructure, and attractive investment environments now offered to investors.However, the economic benefits of Ethiopian farmlands can be boosted further by pursuing the improvement of the business operating environment together with proper land use valuation and integrated land use planning.Considering the land lease pricing system, since 2008 the Benishangul-Gumuz regional government developed and enforced its own land lease rent prices for each district (Table 5). 
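Hedonic pricing, cited above as one of the methods that could ground an objective land valuation, regresses observed rents on measurable parcel attributes. The sketch below only illustrates that idea using statsmodels; the attributes, coefficients, and synthetic data are assumptions made for illustration, not a valuation model proposed by the study.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 60  # hypothetical number of observed parcels

# Synthetic parcel attributes, placeholders for what a real valuation survey would record.
parcels = pd.DataFrame({
    "distance_to_road_km": rng.uniform(1, 60, n),
    "soil_fertility_index": rng.uniform(0, 1, n),
    "irrigable": rng.integers(0, 2, n),
})
# Simulated observed rent (birr/ha/year); in a real hedonic model this comes from market data.
parcels["rent"] = (140
                   - 0.8 * parcels["distance_to_road_km"]
                   + 40 * parcels["soil_fertility_index"]
                   + 25 * parcels["irrigable"]
                   + rng.normal(0, 10, n))

X = sm.add_constant(parcels[["distance_to_road_km", "soil_fertility_index", "irrigable"]])
hedonic = sm.OLS(parcels["rent"], X).fit()
print(hedonic.params)  # implicit price of each attribute
```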
The Status of Large-Scale Land Deals

While the nature of the land (cultivated or non-cultivated) is considered a determinant for setting land lease periods, it is not considered in determining land lease prices. For various reasons, the average amount of land cultivated or developed by investors is significantly lower than the average amount of land leased to them. The main reasons for this difference are the lack of adequate information about the nature and suitability of the land which investors agreed to develop, and the challenges of farmland overlapping and farm border disputes. Further reasons include the challenging processes related to the import of agricultural inputs, discrepancies between the capital many investors promised to invest and what they actually invested, and fear of contract cancellation by the government (Figure 5 and Table 6).

A paired-sample t-test was applied to compare the amount of land taken in lease form with the amount of land developed (i.e., land on which operations have started, as operation is measured in terms of "developed" land size, or which has fully entered crop production). According to the regional investment bureau, "developed land" is leased land which is under commercial farmers or "investors" and on which land preparation has at least started. The average land size developed (or cultivated) by investors (158.32 ha) is significantly (α ≤ 0.05) lower than the average land size which investors agreed to develop (461.45 ha). There is also greater variation in the amount of land delivered to investors (standard deviation of 774.73) than in the amount of land developed so far (standard deviation of 232.60; α ≤ 0.05) (Table 7). The test output was: H0: mean(diff) = 0, degrees of freedom = 85; Ha: mean(diff) < 0, Pr(T < t) = 1.0000; Ha: mean(diff) ≠ 0, Pr(|T| > |t|) = 0.0000; Ha: mean(diff) > 0, Pr(T > t) = 0.0000. As a result, the average land size developed by investors is significantly lower than the average land size they agreed to develop. This implies that investors lack the capacity to cultivate the land they took, or are discouraged from cultivating as agreed in the contract because of the limited infrastructure where the farmlands are located. It also shows that, in reality, investors are sometimes not very realistic in estimating their own capabilities.

The Destination of Products

A study by Rulli and D'Odorico (2014) [25] shows that the food produced on acquired land is typically exported to other regions, while the target countries exhibit high levels of malnourishment. If the food produced were used for domestic consumption, the crops harvested on the acquired land could help ensure food security for the local populations. There is also an ongoing policy debate between export promotion and the stabilization of local food markets. As stated in the Ethiopian Investment Guide of 2012, companies which export half or three-quarters of their production are entitled to comparably more incentives (exemption from income tax [25]) than investors which supply their production to the domestic market. This export policy remains in place even when domestic food market inflation occurs. While encouraging exports through explicitly stated incentives is good for Ethiopia as a developing country, ensuring an appropriate balance between export promotion and stabilizing the domestic food supply market is imperative.
Social Aspect In order to create social standards for land deals that make a positive contribution to local development it is necessary to respect the existing land use rights of local people, ensure food security, transparency, good governance, and community consultation and participation.Article 92(3) of the constitution of Ethiopia states that local people have the right to be consulted fully and express their view in the planning and execution of policies, projects, and programs that affect them directly [49].With respect to the participation of indigenous communities, before land is supplied for commercial investors, there are community level consultative discussions.There are also a number of minutes and signed documents in district offices showing the consents of local communities through their representatives.However, nothing is formally stated on the land lease contractual agreements regarding the rights and options for local communities.District authorities mainly have the difficult task of handling claims, conflicts, and grievances voiced by local communities in relation to the pressure from commercial farming companies.The two leading neighboring districts where many of the transnational companies obtained land in the form of long-term land lease contracts in the Benishangul-Gumuz region are the Guba and Dangure districts, where local communities have been relocated or resettled away from the land they lived on for years. Key informants and local administrative officials justify this relocation in different ways.According to the key informants, the eviction is considered as part of the preparation to supply "their land" to investors.The local administrators call the displacement a "villagization" program which was made to supply improved social infrastructure such as roads, schools, clinics, water supplies, electricity, etc., though the on the ground practice has not seen any of those so far.According to the data extracted from the study areas, a total of 2396 households were dislocated in a three-year period (Tables 8 and 9).The resettlement creates pressure upon the recipient villagers and the environment.The "performance" of the villagization program enforced by district administration offices and regional states has affected 13% of the households in Guba and 16% of those in Dangure.If the recipient communities in the villagization program are taken into account, 27% of households in Guba and 30% of the households in Dangure are affected by the villagization programs.Furthermore, part of the presumptions behind the commercial supply of farmlands is that they will create employment opportunities and ensure national food security.According to Ostermeier et al. 
(2015) [65], employment creation is a key factor amongst potential community benefits and is particularly interesting for two reasons: First, the creation of jobs is one of the most common commitments investors make to local communities when acquiring land.Second, generating employment is a key component for poverty alleviation.Hence, national governments often welcome and even foster large investments in their countries.Although wage employment is considered a positive effect of foreign investment, Dessalegn (2011) [66] argues that the international land deals barely benefit rural people; rather it is considered as a threat to the local population's livelihoods.Based on government records, nearly 89,000 citizens (in the whole Benishangul-Gumuz region) are expected to benefit from land acquisition in the form of social gains such as employment opportunities for local people.While, until now, throughout the region, employment opportunities were created for 4094 people, of whom 848 are permanent and 3246 temporary.When companies operate at full scale to develop the whole land they leased, further employment opportunities are created.Furthermore, according to the data extracted from the study areas, a total of 2396 households were dislocated.According to the regional and federal government officers, the phenomenon is not considered as "dislocation" but "resettlement which is intended to bring local communities together so as to provide better infrastructure and social services to local communities which used to lead life along scattered hamlets and villages".However, the fact on the ground is that the land which was occupied for years by the dislocated communities was leased for investors, following the state driven community resettlement programs.The temporary employment opportunities benefited citizens, mainly for the influx of labourers from neighbouring highlands, though some of the migrated labourers shifted to encroach on "free" forested land and could "upgrade" themselves to the status of "investors".Therefore, the threat to local communities is not only from the commercial farmers who are recognized by the state but also from those labourers employed in commercial farm lands who illegally invade "free" forest lands for their own business.As the forest is a source of many food and non-food resources, the "free" forest lands are important economic buffer zones for local communities, especially during periods of stress on their livelihoods.Hence, local communities frequently see land deals as threats to their livelihoods, potentially leading to loss of land and household income.There is a high possibility of further employment opportunities that will be available in parallel to the farmland's operational progress and the growth of companies in the region.For further understanding of the leased and developed land effects on the level of local employment generation, regression analysis was applied. 
As shown in Table 10, five factors, namely "size of land leased", "permanent employees", "temporary employees", "distance from road", and "level of education", were entered into the equation, and together they explain about 73% of the variation in the dependent variable (size of cultivated land) (adjusted R² = 0.73). The regression model is statistically significant, indicating that, overall, the applied model can significantly predict the dependent variable (F(5,80) = 47.38; p ≤ 0.01). The beta coefficients show that a one standard deviation increase in "size of land leased", "permanent employees", "temporary employees", "distance from road", and "level of education" changes the standard deviation of the dependent variable by 0.841, 0.002, 0.016, −0.138, and 0.038, respectively. Among the variables, the total size of land leased has a significant effect on the size of land developed and cultivated by an investor (B = 0.25; p ≤ 0.01). Similarly, farm distance from roads has a significant effect on the size of land developed or cultivated by an investor (B = −2.13; p ≤ 0.05). This implies that improving rural infrastructure (particularly roads) is indispensable for enhancing the farm-level performance of commercial farms. Other factors, such as the amount of labor employed and the investor's level of education, have insignificant effects on the size of land developed or cultivated by an investor. The land that has been developed by investors is a subset of the acquired land; however, the two are not always related. In fact, we also observed different situations: sometimes, despite an increasing size of leased land, the investor did not have enough capital (technology, labor, and the like) to increase the cultivated land as well. This shows that investors are sometimes not realistic when estimating their own capabilities. For example, an Indian company in the Benishangul-Gumuz region was supposed to clear 50,000 ha of forest within 10 years (5000 ha per year on average), yet after five years it had cleared only 3000 ha.

The output corresponds with the field observation that much of the temporary labour force is used to clear the land (land preparation) and is commonly supervised by permanent employees: the more permanent employees a farm has, the more temporary labour it engages. Neither the amount of land taken nor the amount of land developed has yet had a statistically significant effect on the level of temporary employment. The most probable reason is that a major proportion of the land has not been developed yet, as temporary employees are used for weeding or cultivation jobs once investors enter full production. The amount of land developed is therefore expected to have a meaningful effect on the level of (temporary) employment once companies enter full-scale production in the near future.
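A minimal sketch of the regression reported in Table 10, using ordinary least squares via statsmodels, is given below; the five predictor names mirror those above, but the synthetic data frame is only a placeholder for the 86 surveyed farms, and the coefficients it produces are not the study's estimates.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 86  # number of surveyed commercial farms in the study

# Synthetic placeholder data; the real analysis would use the survey records.
farms = pd.DataFrame({
    "leased_ha": rng.gamma(2.0, 250.0, n),
    "permanent_employees": rng.poisson(10, n),
    "temporary_employees": rng.poisson(40, n),
    "distance_from_road_km": rng.uniform(1, 60, n),
    "education_years": rng.integers(6, 18, n),
})
farms["cultivated_ha"] = (0.3 * farms["leased_ha"]
                          - 2.0 * farms["distance_from_road_km"]
                          + rng.normal(0, 50, n)).clip(lower=0)

model = smf.ols(
    "cultivated_ha ~ leased_ha + permanent_employees + temporary_employees"
    " + distance_from_road_km + education_years",
    data=farms,
).fit()
print(model.summary())  # coefficients, t-values, F-statistic, adjusted R^2
```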
Friis and Reenberg (2010) [67] in their study show that local food insecurity and increasing poverty can be considered as some of the consequences of farmland acquisition.D'Odorico and Rulli (2014) [68] argue that even those large-scale land acquisitions carried out with the informed consent of local communities can jeopardize the food security and livelihoods of selling communities.With regard to the food security effects of farmland acquisition, there are key concerns among local communities and stakeholders in the country.Local Gumuz communities are losing access to non-timber forest products and their traditional sources of food, the natural forest areas, which they have been accustomed to using for generations.There is a dramatic shift in terms of land use and consequently the livelihood of local people; a shift from forest-dependent means of living to customary farming practices without adequate support and training in farming practices.Furthermore, the socialization of the resettled groups of households (in some cases with different ethnicity) with the recipient ones is not easy.Therefore, it is necessary to understand the wider context and the overall predicaments of local people in relation to the local resource access for local communities. Environmental Aspect As stipulated in Article 92(2) of the national constitution of Ethiopia, the implementation of development programs and projects should not have damaging effects on the natural environment.Furthermore, according to the country's environmental impact assessment law (proclamation no.299/2002), it is mandatory to properly accomplish an assessment of environmental impacts of a development project before its implementation [69].However, in practice, no environmental impact assessment reports are prepared to be cross-checked in various databases and through relevant key informants and confirmed by investors. Environmental damage associated with large-scale land investment might directly occur as a result of forest degradation.A study by Davis et al. (2015) [70] in Cambodia shows that nearly half of the area where concessions were granted between 2000 and 2012 was forested in 2000.Accordingly, they concluded that land acquisitions can act as a powerful driver of deforestation.Furthermore, as noted by Dessalegn (2011) [66], large-scale land investment causes a huge loss in biodiversity.According to Cotula et al. (2009) [71] and Benjaminsen et al. (2011) [72], biodiversity should be monitored as environmental aspects of large-scale land investment.Also, the impacts on biodiversity need to be monitored (e.g., monoculture farming, which may even lead to pest or disease problems) [73].Additionally, as discussed by The Oakland Institute (2011) [74], the influx of huge numbers of workers into an area raises environmental stress and results in increased deforestation, a decrease in the fish population and wildlife, and general negative effects on ecological systems.In this regard, the environmental cost of the villagization program in the study region has clearly contributed to additional deforestation and degradation of forest resources.According to Rulli and D'Odorico (2013) [75], the acquisition of land is associated with an appropriation of land-based resources, particularly water, which is crucial to agricultural production.Also, a study by Rulli et al. 
(2013) [76] shows that land grabbing is associated with a virtual grabbing of a substantial amount of freshwater resources, including water supplied by both rainfall and irrigation.Their study showed that the high instances of water grabbing are of particular concern because their effects can be felt both at the local and downstream levels, which can contribute to the possible emergence of water stress, poor water quality, and social conflicts [76].In line with this, there is no binding contractual agreement which can prevent companies from implementing irrigation farming, particularly those companies which cultivate along the banks of the Beles River, a river which contributes a significantly higher water supply to the GERD, following the Blue Nile River.If this happens without conservation of land, it will become one of the key issues that may cause dissatisfaction and further suspicion among other Nile Basin states which are under the Nile Basin Initiative cooperative framework, particularly, in the lower riparian states of Sudan and Egypt.Therefore, companies engaging in irrigation farming could cause attitudes to tense in the Basin and extensive public concern among stakeholders. According to Davis et al. (2015) [70], the other environmental impacts of farmland acquisition include soil loss and compaction, elevated runoff and GHG emissions from fertilizers, and increased competition for water resources.In addition, the acquisition of land may ultimately alter the resilience of countries and communities to climate change. Advancing deserts are now on the move almost everywhere [77].It is reported that in the Sahara Desert, 1.5 million hectares of land are becoming barren every year [78] and expanding in every direction [77].The forest resources of the Benishangul-Gumuz region could have been conserved as national buffer zones against the expanding the Sahara desert into western SSA. Parallel to land clearance for commercial farming, forest wildfire is the most prevalent challenges observed during the field visits made in the region.The main causes for forest fire in the region include commercial farmers who burn their wood biomass so as to clear land for tillage, local people who practice wild honey production, and natural wildfire occurring due to the dry-hot seasons.There is much deforestation on the ground by the investors in the Benishangul-Gumuz region but no action has been taken to counteract it for years. Land conversion from forests to "farm lands" has contributed to deforestation of the natural forest resource bases of the region as well.It is very common to observe huge woody masses in every commercial farm, and ongoing land-clearing and preparation (deforestation) (Figure 6).Following deforestation, many of the commercial farms have started growing commercial crops, such as bio-fuel trees and edible crops, resulting in huge tracts of land converted from forest to commercial crop farms.As a result, there is a huge loss in biodiversity (both flora and fauna).Moreover, the environmental cost of the villagization program in the study region is clearly visible and has contributed to additional deforestation and degradation of forest resources. 
The other pressing issue in the region is what is locally called "The Dam-in-Between", i.e., the Grand Ethiopian Renaissance Dam (GERD), construction of which began in 2011. Upon completion, its reservoir will have a capacity of 63 × 10⁹ m³, covering a total area of 1800 km² [79]. According to ERTA (2011) [80], the dam will be the largest in Africa and the eighth largest hydroelectric power dam in the world. However, the area where the dam is located is also where the government of Ethiopia supplied huge parcels of commercial farmland on a long-term land lease basis. The reservoir will be threatened by sediment deposition if there is no natural buffer zone. A similar challenge was observed by Haregeweyn et al. (2006) [81] in the northern part of the country, where sediment deposition in reservoirs is a serious off-site consequence of soil erosion. To create a natural buffer zone, land and forest areas in the nearer upper watershed should have been conserved properly, but the land deals on the ground have pushed these areas in the opposite direction. Furthermore, as argued in some studies [82,83], restoration of forest ecosystem services will be important both for sustainable agricultural production and for the protection of aquatic ecosystems.

Conclusions

As mentioned above, we adapted the Integrated Land Use Management and Responsible Agricultural Investment framework to describe key land-related institutional and governance frameworks and to examine the effectiveness of the land lease process in terms of the economic, social, and environmental expectations from agricultural outsourcing in Ethiopia. Based on the results obtained, by the end of 2012, 2.11 million ha of the total 11.5 million ha of potential investment land had already been delivered to investors, of which 600,254 ha is the share of the western Ethiopian lowland region of Benishangul-Gumuz, followed by the Oromiya and Gambella regional states with 458,292 and 399,491 ha, respectively. In the Benishangul-Gumuz region, there is a large discrepancy between the amount of land stated in an investor's contractual agreement (on the document) and the actual amount of land delivered on the ground, where the latter is significantly lower than the former, in many cases by 50%-60%. While there is no limit set on the total amount of land that an investor could take, investors cannot lease as much land as they want due to maximum limits on the type of investment land and area of investment. Land parcels of 5000 ha and above located in a single place are leased by the Ministry of Agriculture, and parcels below 5000 ha located in different places are leased by the appropriate regional office. There is a limit both in terms of land size and land lease period. The federal government sets its threshold in terms of land size, while the regional government sets its threshold in terms of both land size and the length of the lease period. However, the regional government's proposition did not indicate a land lease period limit for investment land of 200 ha and less. Furthermore, some of the regulatory points are contrary to what is stated by the Federal Government. The country is not capturing the actual benefits of land because the value of land is not properly determined, which results in extremely low land lease prices for large-scale commercial farmlands even compared to local informal land market values. Generally, the price quoted for different types of land is not based on its true values, and this jeopardizes the
livelihoods of selling communities.While it is imperative to encourage and attract agricultural investment in the economy of developing countries of Africa, it is equally important to meet the desirable social, economic, and environmental standards of the sector.Lessons learned from the Benishangul-Gumuz region highlight that supplying huge tracts of farm lands for large-scale agricultural investment requires integrated land use planning, appropriate land valuation, functional land governance and monitoring frameworks, and the capacity to implement the various social and environmental laws.Land should be seen not only from the economic benefits of large-scale agricultural investment, but also from the angle of other marketable services, such as payments for environmental services.According to Resosudarmo et al. (2014) [84], sustainable land use alternatives other than commercial farming, such as reducing emissions from deforestation and forest degradation, and fostering conservation and enhancement of forest carbon stocks (REDD+) should be considered, especially in those ecologically fragile areas where the threat of desertification is surging [85].However, a study by Davis et al. (2015) [86] shows that through these carbon credit mechanisms (e.g., reducing emissions from deforestation, forest degradation, and REDD+), land-intensive policies may further heighten the demand for land.It has been suggested that it is time to move beyond the approach of internalizing externalities through payments for ecosystem services.According to Fairhead et al. (2012) [87], while understanding and critiquing the processes that result in the neo-liberalization of nature are important, they are clearly insufficient.Alternatives must emerge, rooted in relational, interconnected, animated understandings and experiences of landscape, ecology, and human-ecological relations, responding to the unruly politics and ecologies of the real world.Generally, large scale land acquisitions should not be criticized just because the price quoted for different types of land is not based on its true values, but more in general because it jeopardizes the livelihoods of local selling communities [68].Countries should not ambitiously supply large-scale farmlands without adequate preparation, land use planning, proper land valuation, and capable monitoring framework.Based on the results, there is no single statement in any of the contractual documents or templates which requests the participation of any other stakeholders (e.g., local communities) when signing the land lease contracts.Moreover, the compensations are not mentioned in the contracts either.Adequate consultations should have been primarily made with local communities, aiming for their active participation when formulating the contract and all agreements.Accordingly, commercial land acquisitions should pass through "inclusive" deals that integrate the biophysical environment, stakeholders, governance, and institutions [35].Following the acquisition of land by transnational companies, there is a dramatic shift in terms of land use and the livelihoods of local people: a shift from forest-dependent means of living to customary farming practices without adequate support and training in farming practices. 
Environmental damage associated with large-scale land investment in Ethiopia occurs directly as a result of forest degradation and land conversion, which cause biodiversity loss and soil erosion [66]. Integration and harmonization of large-scale commercial farming with other local development projects is crucially important for countries which supply large tracts of farm land to investors. For instance, there is large concern in the region about the supply of large-scale commercial farmlands around the Grand Ethiopian Renaissance Dam (GERD) and the long-term land use contracts made between the government of Ethiopia and commercial companies [88]. Accordingly, future studies should address the long-term effect on the functionality and sustainability of the dam's reservoir caused by the large-scale mechanized commercial farms operating in the upper watersheds of the region. Introducing cadastral systems and fit-for-purpose land management approaches can contribute to resolving the prevailing challenges and predicaments of large-scale private commercial farming in the region [28,29]. Large-scale agricultural investment in the Benishangul-Gumuz region in general, and the Dangure and Guba districts in particular, stands at the crossroads of protecting the reservoir of the hydroelectric dam or continuing the supply of large-scale farm lands, which has resulted in significant deforestation and land conversion. Additionally, the forest resources of the region could have been conserved as national buffer zones against the expanding Sahara Desert, as the critical functions of forestlands in deterring the expansion of the Sahara Desert into western SSA are indispensable. More lessons can be learned from commercial farmland acquisition in Ethiopia for future studies on issues such as the factors that influence the size of cultivated land (e.g., type of crops and type of investment land), the determinants of employment on commercial farmlands in land-supplying developing countries in general and in the study area in particular, household-level welfare effects, land use and land cover changes, and the nexus between large-scale commercial farming and the mega hydro dam and its reservoir. Lastly, improving rural infrastructure, particularly roads, is essential to boost the farm-level performance of commercial farms.

Appendix: Chronology of Key Land-Related Legal and Institutional Instruments

Investment proclamation (widening the participation of foreign investors in addition to domestic ones): Stated four forms of investment and set minimum capital requirements for foreign investors (100,000 USD for a single investment; 60,000 USD jointly with domestic investors), the allocation of land, and further rights and privileges for different forms and types of investors [69].

2004: Reorganization of government organs of Ethiopia, proclamation no. 380/2004. Restructured the powers and duties of the Ministry of Agriculture and Rural Development, which was mandated to draft land use policy and land administration guidelines and to oversee the conservation and use of forests and related resources such as wildlife [48].

2005: Federal Rural Land Administration and Land Use Proclamation, proclamation no. 456/2005. Targeted at increasing land tenure security, enhancing farm land productivity, and circumventing the expectation of land redistribution among citizens. Farmers hold a perpetual use right on their farm holdings, and this use right is to be strengthened through the issuance of landholding and land use certificates and registration, followed by a cadastre. As a federal framework for rural land administration and land use, each regional state is mandated to arrange its own legal framework to register land in the region [40]. Security of land tenure versus agricultural investment remains a point of argument which requires further investigation; although the relationship between tenure security and agricultural investment varies, tenure security has a significant effect on farmers' investment in certain parts of Ethiopia [89].

2005: Expropriation of landholdings for public purposes and payment of compensation, proclamation no. 455/2005. Defined the key principles to be considered in determining compensation for a person whose landholding has been expropriated for development purposes, and stated which state bodies have the mandate to determine the responsibility to pay compensation for land. It forms part of the constitutional requirements (articles 51(5) and 40(8)) to enact laws concerning the utilization of land. District (woreda) or urban administrations are given the mandate to expropriate rural or urban landholdings for public objectives [40].

2007: Payment for compensation for property situated on landholdings expropriated for public purposes, Council of Ministers regulation no. 135/2007. The amount of compensation for property situated on land to be expropriated is to be determined on the basis of current market prices. Provisions are set concerning compensation for buildings, fences, non-perennial and perennial crops, trees, protected grass, permanent improvements on rural land, relocated property, mining licenses, and burial grounds. Formulas for calculating the amount of compensation for the stated properties are also set [49].

2009: Benishangul-Gumuz Region Rural Investment Land Use Regulation, Regional Council's regulation no. 29/2009. Explains investment land supply procedures, investment landholding, the lease system and duration of land use, forest protection, land evaluation, land use contracts, land lease prices in the different districts (woredas) of the region, and the rights and obligations of investors [57].

2010: Definition of powers and duties of executive organs, proclamation no. 691/2010. Established twenty ministries, one of which was the Ministry of Agriculture; the Ministry of Agriculture and Rural Development (MoARD) was dissolved and replaced by the Ministry of Agriculture (MoA), to which its former powers and duties were transferred. The MoA is mandated to ensure the conservation of biodiversity and "the administration of agricultural investment lands entrusted to the federal government on the basis of powers of delegation obtained from regional states" [90].

2010: Agricultural Transformation Council and Agency establishment, Council of Ministers regulation no. 198/2010. Leads the identification, design, and effective implementation of solutions to the challenges of agricultural development, for instance the identification of soil fertility problems and their solutions [71].

Investment proclamation (amendment): Stated provisions to enhance investment not only in agriculture but also in the manufacturing sector, and improved some laws stated in the previous investment proclamation. Areas of investment for domestic investors, foreign investors, and joint investments are delineated, and amendments to the minimum capital requirements for foreign investors are set [64].

Investment incentives: Specified various types of incentives for investors depending on criteria such as the type, location, and performance or progress of the investment, including income tax exemptions and exemptions from customs duty for two to nine years. Investors who invest in the Afar, Benishangul-Gumuz, Gambela, and Somali Regions are entitled to a 30% income tax reduction; a similar reduction applies to companies investing in the Guji and Borena zones of Oromia Region and in many areas of the South-Omo, Segen, Bench-Maji, Sheka, Dawro, and Kefa Zones and some woredas of the Southern Nations, Nationalities, and Peoples' Region [64].

Figure 5. The status of large-scale agricultural investment projects in the Benishangul-Gumuz region. Source: Ministry of Agriculture (Addis Ababa), Benishangul-Gumuz Regional Investment Bureau (Asosa), and own survey.

Figure 6. Deforestation is part of the land preparation process by companies in the study area (Photo: first author).

Table 1. Data sources and methods of data collection.
Table 2. Land size of agricultural investment in Ethiopia.
Table 3. Maximum threshold of land size for an investment project in Ethiopia [56].
Table 4. Type of investment land and land lease duration.
Table 5. Farm land lease price across districts of the region.
Table 6. Description of the agricultural investments in Ethiopia and Benishangul-Gumuz.
Table 7. Comparison of land size between land leased and cultivated land (t-test).
Table 8. Relocated and recipient households in the Guba district. Source: own computation at Guba District office, Mankush. * A "kebele" is the smallest administrative unit in the Ethiopian governance structure; it consists of a minimum of five hundred households or a population of 3500-4000 persons (1200-3200 persons in Benishangul-Gumuz, which is sparsely populated).
Table 9. Relocated and recipient households in the Dangure district.
Table 10. Regression analysis of the main factors influencing the size of cultivated land.
Discovery, Identification, and Insecticidal Activity of an Aspergillus flavus Strain Isolated from a Saline–Alkali Soil Sample Aphids are one of the most destructive pests in agricultural production. In addition, aphids are able to easily develop resistance to chemical insecticides due to their rapid reproduction and short generation periods. To explore an effective and environmentally friendly aphid control strategy, we isolated and examined a fungus with aphid-parasitizing activity. The strain (YJNfs21.11) was identified as Aspergillus flavus by ITS, 28S, and BenA gene sequence analysis. Scanning electron microscopy and transmission electron microscopy revealed that the infection hyphae of ‘YJNfs21.11’ colonized and penetrated the aphid epidermal layer and subsequently colonized the body cavity. Field experiments showed that ‘YJNfs21.11’ and its fermentation products exerted considerable control on aphids, with a corrected efficacy of 96.87%. The lipase, protease, and chitinase secreted by fungi help aphid cuticle degradation, thus assisting spores in completing the infection process. Additionally, changes were observed in the mobility and physical signs of aphids, with death occurring within 60 h of infection. Our results demonstrate that A. flavus ‘YJNfs21.11’ exhibits considerable control on Aphis gossypii Glover and Hyalopterus arundimis Fabricius, making it a suitable biological control agent. Introduction Aphids are highly diverse, with 10 families and 4400 species discovered to date [1].Unfortunately, aphids are also one of the most destructive crop pests, reducing both quality and yield in diverse plants, including wheat [2], melons [3,4], peaches [5,6], and cotton [7], among others.Currently, chemical pesticide application is considered the most efficacious method of control.However, chemical pesticides alter the molecular physiology of target insects, resulting in altered enzyme activity, genetic mutations, and pesticide resistance [8].In addition, chemical pesticides have been linked to both environmental and food safety risks. To address these issues, the importance of entomopathogenic fungi as alternative pest control agents is increasing.For example, certain fungi have been found to effectively control aphids [9].These include Beauveria bassiana [10][11][12], the Metarhizium [13,14] and Verticillium [15,16] genera, as well as fungi belonging to the Entomophthorales order [17].These mycopesticides mainly use propagules such as conidia, blastospores, or hyphae.These propagules have the advantages of directly killing the target pest as well as secondary infection via horizontal transmission of spores from cadavers with mycosis.Several of these fungi have been used to develop wettable powder-based biopesticides for use against foliar sap-sucking pests [18]. Based on initial reports, the cosmopolitan saprophytic fungus Aspergillus flavus may be a promising aphid biocontrol agent [19,20].Laboratory bioassays indicate that A. flavus is pathogenic to aphids at doses of 1.23 × 10 3 spores/mL (LC50) and 1.34 × 10 7 spores/mL (LC90) [21].In another study involving cabbage and wheat, 33% of cabbage aphids and 37% of wheat aphids were found to be susceptible to fungal biocontrol, with A. flavus among the most effective pathogens [22].However, the mechanism by which A. flavus exerts control over aphids, and the active substances involved, remains largely unknown. In this study, we isolated A. 
flavus from a sample of Chinese saline-alkali soil.The insecticidal activity of the strain was verified with lab-based and field-based bioassays.Subsequently, microscopic techniques were used to analyze histopathological changes in infected aphids and the influence of enzyme activity on biocontrol efficacy was explored.The results presented here will be valuable for the realization of sustainable aphid control strategies. Experimental Procedures 2.1. Isolation and Identification of the Strain In October 2021, we isolated a strain in saline-alkali soil sample from Ningxia Autonomous Region, China.The soil sample was first cleared of debris and filtered using a 100-mesh screen, then washed and rinsed three times with sterile water [23].Subsequently, the soil was diluted 10×, 10 2 ×, 10 3 ×, 10 4 ×, 10 5 ×, 10 6 ×, 10 7 ×, and 10 8 × with sterile water and inoculated (100 µL) onto potato dextrose agar (PDA) using the spread plate technique [24,25].Each gradient was repeated three times and was cultured for 3-5 d in a constant temperature (28 • C) incubator.Purification was repeated until a single colony was obtained [26]. Preparation of Spore Suspension To prepare the spore suspension, incubation plates were washed with sterile water on a sterile operating table.The spores were collected in a conical flask with glass beads, shaken thoroughly, and scattered.A single spore suspension was obtained via filtration with a cotton-stuffed syringe.The concentration was adjusted to 1 × 10 7 spores/mL using a hemocytometer.Finally, 0.1% tween-20 was added and the suspension was refrigerated (4 • C) for subsequent analyses [27,28]. Preparation of Fermentation Solution The strain was fermented under aerobic conditions.The fermentation medium (35 g soluble starch, 15 g sucrose, 12.5 g yeast extract, 7.5 g soybean cake powder, 1.0 g KH 2 PO 4 , 1.1 g anhydrous MgSO 4 , 1.0 g NaCl) was prepared in 1 L of distilled water and distributed into ten 250 mL conical flasks.The flasks were sealed with eight layers of gauze and then autoclaved for 20 min.The spore suspension was injected into the fermentation medium at a concentration of 5%.Following inoculation, the flasks were again sealed with eight layers of gauze and cultured for 3 d in a shaker (165 r/min) at 32 • C. Next, the fermentation solution was homogenized (JJ-2, Hangzhou, China) to prevent mycelium from clogging the nozzle.Finally, 0.1% tween-20 was added.We found that sealing with gauze was more beneficial to the growth of A. flavus.This may be due to the breathability of the gauze.In order to prevent bacterial contamination, we used gauze to seal the conical bottles and then coated the gauze in kraft paper to maintain sterility and prevent water vapor pollution.In addition, after inoculation we used ultraviolet radiation to sterilize the replacement gauze. Preparation of Mycelium Reagents The fermented broth was filtered with eight layers of gauze and eluted three times with water.Next, the mycelium was weighed, homogenized, and diluted 100×.Finally, 0.1% tween-20 was added and the suspension was refrigerated (4 • C) for subsequent analyses. 
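Adjusting the spore suspension to 1 × 10^7 spores/mL from a haemocytometer count, as described above, is a routine dilution calculation. The sketch below assumes an improved Neubauer chamber in which one large corner square corresponds to 0.1 µL (a factor of 10^4 per mL); the counts, dilution factor, and volumes are illustrative, not values reported by the authors.

```python
def spores_per_ml(mean_count_per_large_square: float, dilution_factor: float = 1.0) -> float:
    """Improved Neubauer chamber: one large square holds 0.1 uL, so x1e4 gives spores/mL."""
    return mean_count_per_large_square * dilution_factor * 1e4

def dilution_to_target(stock_conc: float, target_conc: float, final_volume_ml: float) -> tuple[float, float]:
    """Volumes (stock mL, diluent mL) needed to reach target_conc in final_volume_ml (C1*V1 = C2*V2)."""
    stock_ml = target_conc * final_volume_ml / stock_conc
    return stock_ml, final_volume_ml - stock_ml

# Illustrative numbers: mean of 55 spores per large square, suspension pre-diluted 100-fold.
stock = spores_per_ml(55, dilution_factor=100)            # 5.5e7 spores/mL
stock_ml, water_ml = dilution_to_target(stock, 1e7, 50)   # target 1e7 spores/mL in 50 mL
print(f"Stock: {stock:.2e} spores/mL; mix {stock_ml:.1f} mL stock with {water_ml:.1f} mL diluent")
```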
Preparation of Methanol Extract Reagent After the fermentation solution was filtered with eight layers of gauze, the mycelium was weighed.Next, the mycelium was mixed with methanol at a ratio of 1:10 and ultrasonicated for 30 min.This process was repeated twice, followed by dilution with 0.1% tween-20 to ensure that the organic solvent content was ≤1%.The resulting methanol extract reagent was refrigerated (4 • C) for subsequent analyses. Preparation of Fermentation Filtrate After filtration with 8 layers of gauze, the fermentation solution was centrifuged (12,000 r/min) for 5 min and the supernatant was filtered using a 0.22 µm microporous filtration membrane.Finally, 0.1% tween-20 was added to the filtered supernatant and the suspension was refrigerated (4 • C) for subsequent analyses. Evaluation of Biocontrol Efficacy Here, we used naturally occurring A. gossypii and H. arundimis in order to better test the aphicidal activity of the strain.Naturally occurring aphid populations are more resilient and resistant to biotic and abiotic stressors than lab-reared aphids.In addition, due to exposure to environmental toxins and pesticides, naturally occurring aphids are much more resistant to biotic and abiotic stressors, and reproduce more quickly, than lab-reared aphids. In the lab-based bioassay [29][30][31], we tested the efficacy of this strain against H. arundimis, which thrive on peach trees and are often difficult to control because of their thick waxy layers.Screening experiments were performed to measure the efficacy of the strain on aphid nymphs.All treatments were carried out in 90 mm Petri dishes.Peach leaves exhibiting vigorous growth and hosting large numbers of aphids were excised, and the petioles were wrapped in cotton moistened with water.The experimental leaves were immersed in both the original and diluted (2×, 10×, and 50×) fermentation broths for approximately 3 s [32], with 0.1% tween-20 used as a negative control, followed by blotting with absorbent paper to remove excess liquid.The aphids were counted and then transferred to sterile Petri dishes containing a 9 cm diameter section of filter paper.Three parallel experiments were carried out for each group.Dishes were placed in a culture chamber with a temperature of 25 (±1) • C and a relative humidity of ≥85% [33].Aphid mortality rates were recorded at 16 h, 24 h, and 48 h after treatment.Subsequently, the population reduction rate and corrected efficacy were calculated according to the equations below. In the field-based bioassay, we tested the aphicidal efficacy of this strain against A. 
gossypii, which are a major pest of fruit and vegetable crops such as melons.Both the original and diluted (2×, 10×, and 50×) fermentation broths were sprayed evenly on the leaves of vigorously growing muskmelon, with 0.1% tween-20 used as a negative control.Five to eight muskmelon plants were used for each treatment.Three parallel experiments were carried out for each group.The test area (1 m 2 ) was randomly arranged in blocks, and no other chemical agents were applied to the field during the test [34].The number of aphids was recorded before application, and the number of dead aphids was recorded on the 1st, 3rd, and 5th day after application.Subsequently, the population reduction rate and corrected efficacy were calculated according to the equations below.Corrected efficacy refers to a statistical method for correcting calculations of control efficacy over a specific treatment area to account for changes in insect body condition.The day of the experiment was sunny with a suitable temperature.Subsequent to this strain exposure, the feeding ability, activity [35], and symptoms of infected and control A. gossypii were observed continuously with a stereomicroscope (LEICA M165 FC, LEICA, Wetzlar, Germany).At 24 h, 36 h, 48 h, and 60 h post infection, A. gossypii were fixed with 2.5% glutaraldehyde at 4 • C.Then, the levels of this strain adhesion, germination, and invasion were observed using scanning electron microscopy.In addition, at 24 h, 36 h, 48 h, and 60 h post infection, the susceptible A. gossypii were fixed for 3 h with paraformaldehyde at 4 • C. The fixed specimens were infiltrated and embedded in Epon 812, then sliced (80 µm thickness) using an ultrathin slicer (EMUC7, Leica Instrument Co. LTD, Vienna, Austria), fixed on a copper mesh, stained with uranium acetate and lead citrate, and observed using transmission electron microscopy (TECNAI G2 SPIRIT BIO, FEI Corporation, Hillsboro, OR, USA). Evaluation of the Effect of the Strain Infection on Aphid Body Wall Extracellular Enzyme Degradation 2.5.1.Formulation of Culture Medium from Aphid Cuticles H. arundimis were carefully removed from peach tree branches with paintbrushes and placed in beakers, and then subsequently rinsed repeatedly with tap water to remove impurities.The internal organs of the aphids were removed using a dissecting microscope (SZ51, Beijing, China), and the wax shells and body walls of the aphids were dried in an oven at 65 • C. Next, the dried body walls were ground into powder and sieved (40 mesh).In total, 5 g of aphid body wall powder was added to the base medium (1.5 g KH 2 PO 4 , 0.6 g MgSO 4 , 0.5 g KCl, 1 mg FeSO 4 •7H 2 O, 1 mg ZnSO 4 •7H 2 O, 1 L distilled water).A total of 40 mL of the mixture was allocated to each 150 mL conical vial and autoclaved (120 • C for 30 min).This process ensured that the liquid medium contained aphid body walls and wax shells as the only carbon sources. Extraction of Crude Enzyme Solution Each conical vial was inoculated with 10 mL of spore suspension and placed in a thermostatic shaking table (28 • C, 150 r/min) for 8 d.Three 0.5 mL samples were collected each day and centrifuged (10,000 r/min for 20 min, 4 • C).The supernatant was collected as the crude enzyme solution and stored in an ultralow temperature refrigerator at −80 • C for subsequent analyses. 
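Both bioassays report a "population reduction rate" and a "corrected efficacy", but the equations referred to in the text did not survive extraction. The sketch below implements the formulas conventionally used in such trials (simple percent reduction, Abbott's correction for lab assays, Henderson-Tilton for field plots with pre-treatment counts); whether the authors used exactly these forms is an assumption, and the counts in the example are illustrative.

```python
# Conventional plant-protection efficacy formulas; assumed, not verbatim from the paper.

def population_reduction(pre_count, post_count):
    """Percent decline of the aphid population in one plot."""
    return (pre_count - post_count) / pre_count * 100.0

def abbott_corrected_efficacy(mortality_treated, mortality_control):
    """Abbott's formula: treated mortality corrected by control mortality (%)."""
    return (mortality_treated - mortality_control) / (100.0 - mortality_control) * 100.0

def henderson_tilton(treat_pre, treat_post, ctrl_pre, ctrl_post):
    """Henderson-Tilton corrected efficacy (%) for plots with pre-counts."""
    return (1.0 - (ctrl_pre * treat_post) / (treat_pre * ctrl_post)) * 100.0

if __name__ == "__main__":
    print(population_reduction(240, 12))          # treated plot, illustrative counts
    print(abbott_corrected_efficacy(95.0, 4.0))   # lab-style correction
    print(henderson_tilton(240, 12, 230, 225))    # field-style correction with pre-counts
```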
Determination of Lipase Activity Briefly, 3 mg of p-nitrophenyl palmitate (p-NPP) was dissolved in 1 mL of isopropyl alcohol, and the mixture was subsequently dissolved in 9 mL of Tris-HCl (0.05 mol/L, pH 8.0) containing 20.7 mg sodium deoxycholate and 10 mg gum arabic to prepare the matrix solution.During the reaction, 200 µL of the substrate solution was preheated in a Microorganisms 2023, 11, 2788 5 of 18 37 • C water bath for 10 min, and then 20 µL of enzyme solution was added and stirred uniformly.Next, 300 µL of trichloroacetic acid was added after water bath heating (37 • C, 20 min) and stirred well, then left to stand at room temperature for 5 min to terminate the reaction.Finally, 320 µL of NaOH (0.05 mol/L) was added.For the control group, trichloroacetic acid was added first, followed by the enzyme solution.OD values (410 nm) were measured using a microplate reader (Bio Tek Synergy H1, Winooski, VT, USA).The enzyme activity was calculated according to the p-nitrophenol standard curve.A lipase activity unit was defined as the amount of enzyme catalyzing the decomposition of fat to produce 1 µg of p-nitroaniline per minute. Determination of Protease Activity A 1% (w/v) casein solution was prepared with Tris-HCl buffer (0.05 mol/L, pH 8.5).In total, 1 mL of this solution was mixed with 1 mL of the prepared enzyme solution in a centrifuge tube and heated in a water bath (37 • C for 30 min).The reaction was terminated with 3 mL trichloroacetic acid (0.4 mol/L).For the control group, 1 mL of enzyme solution and 3 mL of trichloroacetic acid (0.4 mol/L) were successively added to the centrifuge tube, and then 1 mL of casein solution was added after mixing.Next, 1 mL of the filtrate was added to 5 mL of 0.55 mol/mL Na 2 CO 3 and mixed well.Finally, 1 mL of Folin reagent was added and the solution was mixed well and reacted for 30 min.OD values (680 nm) were measured using a microplate reader.The enzyme activity was calculated according to the tyrosine standard curve.A protease activity unit was defined as the amount of enzyme catalyzing the decomposition of protein to produce 1 µg of tyrosine per minute. Determination of Chitinase Activity Both 0.1 mL of enzyme solution and 0.1 mL of colloidal chitin were mixed in a centrifuge tube, water-bathed (37 • C for 4 h), and then centrifuged (8000 r/min for 5 min) to terminate the enzymatic hydrolysis reaction.A total of 50 µL of the supernatant was mixed with 20 µL of potassium tetraborate solution, shaken thoroughly, reacted in a boiling water bath for 5 min, and then quickly cooled to room temperature with tap water.Next, 300 µL of 10% dimethylaminobenzaldehyde reagent was added and the mixture was waterbathed (37 • C for 20 min) and subsequently cooled to room temperature with tap water.OD values (585 nm) were measured using a microplate reader.The enzyme activity was calculated according to the N-acetylglucosamine standard curve.A chitinase activity unit was defined as the amount of enzyme catalyzing the decomposition of chitin to produce 1 µg of N-acetylglucosamine per minute. Statistical Analyses In order to eliminate certain analytical errors, exact values were utilized for statistical analyses.SPSS 26.0 was used to perform single-factor analysis of variance, and the results were expressed as the mean ± standard deviation.Duncan's method was used for multiple comparisons.Statistical significance was determined at the p < 0.05 level. 
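Each assay above converts an OD reading into enzyme activity via a standard curve and the stated unit definition (1 µg of product released per minute). A minimal sketch of that conversion is given below; the curve slope and intercept and the specific volumes are placeholders rather than values reported in the paper.

```python
# Sketch of the activity calculation: OD -> product mass via a linear standard
# curve, then normalised by reaction time and enzyme volume. Slope, intercept
# and volumes are illustrative assumptions.

def product_ug(od, slope, intercept):
    """Interpolate product mass (ug) from a linear standard curve OD = slope*m + intercept."""
    return (od - intercept) / slope

def activity_u_per_ml(od, slope, intercept, reaction_min, enzyme_ml):
    """One unit releases 1 ug of product per minute under assay conditions."""
    ug = product_ug(od, slope, intercept)
    return ug / reaction_min / enzyme_ml

if __name__ == "__main__":
    # e.g. a lipase-style assay: 20 uL enzyme, 20 min incubation (volumes from the text)
    print(activity_u_per_ml(od=0.45, slope=0.012, intercept=0.02,
                            reaction_min=20, enzyme_ml=0.020))
```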
Isolation and Identification of the Strain After 24 h of culture on PDA, the strain formed loose, flat, regular-edged colonies 2-3 cm in diameter (Figure 1A). The colonies were initially white with short, fluffy hyphae growing outward (Figure 1B). The center of the colonies later turned green, with a large number of green spores growing (Figure 1C,D). The nutritional hyphae exhibited septa, and a portion of the aerial hyphae formed a long, rough conidiophore giving rise to a nearly spherical apical sac (Figure 1E). The surface gave rise to several small peduncles bearing clusters of coarse-surfaced spherical conidia (Figure 1F), and rough-surfaced spherical conidia were also generated on the surface of the small peduncles (Figure 1G-I). Through an ITS gene sequence analysis and a BLAST search (NCBI), the strain was identified as Aspergillus flavus (GenBank: OP071264), with 99.33% sequence identity. We then further analyzed the LSU (28S) and BenA gene sequences and compared them with the sequences reported in the BLAST database (NCBI). We speculated that this might be a new strain, and we temporarily named it 'YJNfs21.11' for the convenience of subsequent studies. A sample specimen was deposited in the China General Microbiological Culture Collection Center (CGMCC) under deposit number 40260; it will serve as a starting point for further research, such as whole-genome sequencing analysis. The phylogenetic tree based on the concatenated ITS, LSU (28S), and BenA sequences of strain 'YJNfs21.11' is shown in Figure 2. Biocontrol Efficacy of A. flavus For laboratory experiments, the corrected control efficacy of the 100× diluted spore suspension reached 85.69% after 48 h (Table 1). These results suggest that spores were the most effective infective agents. The biocontrol efficacy of each solution, before and after application, is shown in Figure 3a. Field experiments revealed that A.
flavus 'YJNfs21.11'and its fermentation products exerted considerable control on aphids, with a corrected control efficacy of 96.87% after 5 days of treatment with the undiluted fermentation liquid (Table 2).The biocontrol efficacy of each solution, before and after application, is shown in Figure 3b.Observation under a stereomicroscope revealed no significant changes in the mobility or physical signs of aphids between 24 h and 36 h after infection (Figure 4A,B).However, aphid mobility decreased considerably after 48 h of infection, with aphids exhibiting a darkened body color and black intersegmental folds (Figure 4C).After 60 h of infection, white hyphae were observed on the heads, abdomens, and tails of aphids (Figure 4D), indicating that they had died [36].Furthermore, their bodies had shrunk and stiffened.Aphids infected via spore transmission can form new sources of infection that continue to invade other aphids [37]. Adhesion and Germination of A. flavus 'YJNfs21.11' on the Surface of Aphids Scanning electron microscopy revealed that A. flavus 'YJNfs21.11'primarily invaded aphid bodies by penetrating the epidermis.After 24 h of infection, the conidia were observed to be attached to the head vertices, antennae, and bases of the mouthparts.The conidia further attached to the spinous projections and intersegmental folds on the thorax and abdomen (Figure 5A,B).After 48 h of infection, the conidia gradually germinated to form germ tubes specialized in appressoria and penetration pegs at the invasion site (Figure 5C,D).The conidia penetrated the epidermis using mechanical force, causing cracks, blackening the aphid body surface, and deforming the epidermis (Figure 5E).During the invasion process, the epidermal tissue was destroyed and cracks and cavities appeared, likely because the fungus secreted enzymes on the surface of the insect (Figure 5F).Many spores penetrated the body wall through these cracks, thereby completing the invasion process (Figure 5G).Through examination of magnified photographs, we observed that the mycelium had many conidia attached to it (Supplementary Materials).We also examined photographs of aphid bristles and, through comparison, we identified the structure as mycelium rather than bristles.The mucilage produced by these spores promoted spore adhesion, germination, and aphid invasion (Figure 5H).Some of the hyphae grew and extended around the setae (Figure 5E).Fungal growth in the peripheral space of the anterior epidermis is generally lateral and can rupture the epidermis, thereby facilitating invasion [38].At the invasion site, the body surface appeared concave, and the body wall appeared darker in color.In summary, fungal invasion destroyed the epidermal layer and resulted in cracks forming on the surface of the aphid body, allowing A. 
flavus 'YJNfs21.11' to enter (Figure 5I). Tissue sections and transmission electron microscopy revealed that the overall aphid body structure was intact after 24 h of infection (Figure 6A,B). From the outside to the inside, the aphid body wall consists of the upper epidermis, pro-epidermis, and dermis (Figure 6C). Muscles and fat are distributed in the hemocoel between the digestive tract and the body wall. The muscles are attached to the inner surface of the body wall and to the surface of the digestive tract in strips or blocks, and body fat is linearly distributed in the hemocoel (Figure 6D). After 48 h of infection, the spores had invaded the aphid body, resulting in the loosening of fat structures, the shredding of muscle tissues, and the infection of nerve ganglia (Figure 6E,F). In addition, the nerve cell bodies appeared slightly blackened and some had dissolved (Figure 6G,H). In the late stages of infection, fat was consumed by the spores and the tissue became unstructured. Effect of Strain YJNfs21.11 on Extracellular Enzyme Activity during Aphid Body Wall Degradation During the degradation of the H. arundimis body wall, the activities of lipase, protease, and chitinase tended to first increase and then decrease over time (Figure 7). Both lipase activity (2.212 U/mL) and chitinase activity (0.032 U/mL) were highest on the 3rd day. Protease activity peaked last, reaching its maximum (0.906 U/mL) on the 4th day. During days 3-6, the body wall cracked, the aphid body stiffened and atrophied, and many tissues and organs collapsed, resulting in a decline in lipase activity. In terms of absorbance, the content of the lipase-specific cleaved substrate was significantly higher than that of chitinase or protease. Taken together, these results indicate that lipase played an important role in the degradation of the aphid body wall. Relationship between Extracellular Enzyme Activity and A.
flavus Pathogenesis Linear regression was performed between enzymatic activity and corrected mortality rates on days 1, 3, and 5 of infection (Table 3). The regression curve of corrected efficacy and lipase activity was y = 36.254x − 1.7864, with a correlation coefficient (R²) of 0.7191. The regression curve of corrected efficacy and protease activity was y = 98.036x + 12.999, with R² = 0.8707. The regression curve of corrected efficacy and chitinase activity was y = 2433x − 0.92, with R² = 0.6681. In conclusion, the activities of lipase, protease, and chitinase were positively correlated with aphid mortality. These results suggest that, during the late stage of infection, the cracks appearing in the aphid body wall were the result of enzymatic breakdown. In this study, the effects of extracellular enzymes on aphid body wall degradation and their contribution to virulence were investigated, which provides a theoretical basis for the application of A. flavus in the biological control of aphids. Discussion Certain fungi penetrate the host body through a combination of mechanical pressure and enzymatic degradation [39,40]. After germination, spores form germ tubes and appressoria under suitable conditions [41,42]. Each appressorium produces only one penetration peg, which exerts a selective effect on nutrients before invading the body, resulting in physical pressure and forcing the penetration peg to invade the nutrient-rich aphid body [43,44]. Using transmission electron microscopy, we observed considerable degradation of the aphid epidermis around the infection site [45]. In fact, conidial germ tubes are known to secrete extracellular enzymes that decompose or soften the insect body wall [46] and aid in the invasion of the insect body [47]. These enzymes, such as esterases, proteases, and chitinases, can also provide nutrients required for fungal growth and reproduction [48,49]. During the late stages of infection, a portion of the upper aphid epidermis disappeared, indicating that A. flavus spores may secrete extracellular lipases, proteases, and chitinases during invasion. This was later confirmed when lipase activity was found to reach its maximum on the 3rd day. To date, the majority of research on fungal biocontrol agents has focused on simple insecticidal assays [50,51]. These simple experiments do not address the process of fungal infection or the mechanism by which fungi exert control over their insect hosts, and generally do not study the enzymatic digestion process in detail. In this study, the mechanism by which A. flavus 'YJNfs21.11' infects aphids was studied in depth, including the enzymatic dynamics associated with the infection process. Because spores were identified as the infective agent, our study focused on spores. However, we cannot rule out that either the mycelia or other bioactive metabolites may be involved in the process, and we suggest further study. Aspergillus flavus is distributed all over the world, especially in warm and moist fields [52]. It can produce a variety of secondary metabolites [53], among which the aflatoxins, including AFB1, AFB2, AFG1, and AFG2 [55], have been most studied because of their strong carcinogenicity [54]. A. flavus strains AF13 and K54A are notorious for producing aflatoxins [56]. However, not all A.
flavus produce aflatoxin [57][58][59][60]; for example, NRRL 21882 [61] and SU-16 [62,63] are non-toxin-producing strains that have been used for biological control of plant diseases and for brewing yellow rice wine, respectively [64]. In this study, the fermentation products of A. flavus 'YJNfs21.11' were analyzed with LC-MS (AB SCIEX, Singapore), and no aflatoxins were detected (Supplementary Materials). We also did not detect aflatoxin synthesis-related genes by PCR (Supplementary Materials). A. flavus 'YJNfs21.11' was isolated and extracted from soil, and its metabolites are natural products that can be degraded and reused by microorganisms. The field and laboratory bioassays were carried out within the biological half-lives [65]. Therefore, A. flavus 'YJNfs21.11' is safe for the environment and people. In addition, strategies for degrading the aflatoxins produced by other aflatoxin-producing strains are now available. One major determinant of aflatoxin production in A. flavus is temperature, and oxidative stress may also be a prerequisite for aflatoxin production [66,67]. The most promising strategy currently being used to reduce preharvest contamination of crops with aflatoxin is to introduce non-aflatoxigenic (biocontrol) A. flavus into the crop environment [68,69]. In conclusion, A. flavus is worth developing further as an aphid biocontrol agent. Figure 5. Attachment and germination of A. flavus 'YJNfs21.11' on aphid body surface. Note: (A) conidia (Co) attach to the aphid near acanthoid processes (AC); (B) growth and extension of germ tubes to form mycelium (Hy); (C) mycelium (Hy) attaches to the surface of the aphid near scaly hairs (SH); (D) portion of mycelium (Hy) attached to aphid legs; (E) mycelium (Hy) searches for invasion sites near setae (Se); (F) the mycelial tip (Hy) at the abdominal internode fold is specialized into an invasive peg (Peg); (G) mycelium (Hy) invades the abdominal aphid surface; (H) mycelium (Hy) and sporophore (Cp) invade aphid tail; (I) cracks and voids appear in the aphid body wall and mycelium was observed inside the aphid body. 3.3.3. Invasion of the Aphid Internal Organs by A. flavus 'YJNfs21.11' Figure 6. Invasion of aphid fatty layer, muscle, and sheath cell layer by A. flavus 'YJNfs21.11'. Note: (A) relatively complete aphid body structure as observed under ultrathin section; (B) sporophores (Cp) were observed in the fatty layer; (C) from outside to inside, the aphid epidermis consists of spinous process (AC), upper epidermis (Ep), proto-epidermis (Pro), epidermis (Cu), and dermis (Epi); (D) sporophore (Cp) observed in aphid body; (E) conidia (Co) destroy the fat structure (FB) in vivo; (F) muscle (Mu) is loosely divided and small intestine (SB) is relatively intact; (G) conidia (Co) enter the sheath cell layer (Pe) through the nerve perineurium (NL); (H) the nerve cell body (NC) is dissolved and blackened, although the structure of the nerve perineurium (NL) is relatively intact. The experiment was carried out in the key open dryland test field of Northwest A&F University, China. The experimental field contained loamy soil of medium fertility. Note: All data are mean ± standard deviation. Different lowercase letters within the same column indicate significant differences between treatments (p < 0.05). Table 3. Mortality and corrected efficacy of spore suspension against H. arundimis.
2023-11-20T16:15:13.857Z
2023-11-01T00:00:00.000
{ "year": 2023, "sha1": "fa9b13e15537b965a001faa65e72e0fcd50390a9", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2076-2607/11/11/2788/pdf?version=1700213447", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "915128821eabbf7e9dea70cd479878ba9ddd22cc", "s2fieldsofstudy": [ "Environmental Science", "Biology", "Agricultural and Food Sciences" ], "extfieldsofstudy": [] }
230541478
pes2o/s2orc
v3-fos-license
Output Characteristics of Graphene Field Effect Transistors The use of graphene, which has high mobility of charge carriers, high thermal conductivity and a number of other positive properties, is promising for the creation of new semiconductor devices with good output characteristics. The aim was to simulate the output characteristics of field effect transistors containing graphene using the Monte-Carlo method and the Poisson equation. Two semiconductor structures in which a single layer (or monolayer) of graphene is placed on a substrate formed from 6H-SiC silicon carbide material are considered. The peculiarity of the first of them is that the contact areas of drain and source were completely located on the graphene layer, the length of which along the longitudinal coordinate was equal to the length of the substrate. The second structure differed in that the length of the graphene layer was shortened and the drain and source areas were partly located on the graphene layer and partly on the substrate. The main output characteristics of field-effect transistors based on the two semiconductor structures considered were obtained by modeling. The modeling was performed using the statistical Monte Carlo method. To perform the simulation, a computational algorithm was developed and a program of numerical simulation using the Monte-Carlo method in three-dimensional space using the Poisson equation was compiled and debugged. The results of the studies show that the development of field-effect transistors using graphene layers can improve the output characteristics – to increase the output current and transconductance, as well as the limit frequency of semiconductor structures in high frequency ranges. Introduction The study of output characteristics for semiconductor compounds containing graphene is an urgent task related to the development of fast and powerful devices of microwave and microwave ranges. Great prospects are connected with the use of graphene in compounds with silicon carbide, boron nitride and other materials in the composition of developed new semiconductor devices [1][2][3]. Studies have shown that the material of silicon carbide SiC can be used to form a layer of graphene on its surface and the implementation of new structures of fieldeffect transistors. Detailed analysis of the output characteristics in the SiC-based devices obtained in such a way is either absent or performed using a simplified hydrodynamic model [3]. The most correct method for the analysis of physical processes in semiconductor structures is considered to be the Monte-Carlo statistical method, which allows to take into account the scattering dynamics of charge carriers in a semiconductor, to obtain their distribution dependencies for stationary and nonstationary processes and, finally, to determine the output characteristics of devices. Based on the Monte-Carlo method of statistical modeling, the studies of output characteristics of semiconductor structures containing a single layer of graphene placed on a silicon carbide substrate were carried out. For these purposes, a computational algorithm was developed, a program of numerical simulation by the Monte-Carlo method in threedimensional space using the Poisson equation was compiled and debugged. 
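The simulation couples a Monte-Carlo transport kernel to a Poisson solver that supplies the self-consistent electric field. As a rough illustration of the field-solver half of that loop, the sketch below iterates a Jacobi relaxation of the Poisson equation on a small uniform grid; the grid size, charge distribution, boundary conditions, and the SiC permittivity value are assumptions for demonstration, not the actual device discretization.

```python
# Minimal sketch of the kind of Poisson step that feeds the electric field to a
# Monte-Carlo transport loop: Jacobi iteration of laplacian(phi) = -rho/eps on
# a small uniform grid with Dirichlet phi = 0 on the boundary. All values are
# placeholders; the simulator in the paper works on the 3D device of Figure 1.

import numpy as np

def solve_poisson_2d(rho, eps, h, n_iter=2000):
    """Return the potential phi (V) for charge density rho (C/m^3) on mesh step h (m)."""
    phi = np.zeros_like(rho)
    for _ in range(n_iter):
        phi[1:-1, 1:-1] = 0.25 * (
            phi[2:, 1:-1] + phi[:-2, 1:-1] + phi[1:-1, 2:] + phi[1:-1, :-2]
            + h * h * rho[1:-1, 1:-1] / eps
        )
    return phi

def electric_field(phi, h):
    """E = -grad(phi), evaluated with centred differences."""
    ey, ex = np.gradient(-phi, h)
    return ex, ey

if __name__ == "__main__":
    h = 1e-8                              # 10 nm mesh step (illustrative)
    rho = np.zeros((50, 50))
    rho[20:30, 20:30] = 1.6e-19 * 1e23    # a small pocket of charge, C/m^3
    phi = solve_poisson_2d(rho, eps=8.85e-12 * 9.7, h=h)   # eps_r ~ 9.7 assumed for SiC
    ex, ey = electric_field(phi, h)
    print(phi.max(), ex.max())
```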
Constructions of semiconductor structures using graphene and features of modeling their output characteristics Let us consider the use of a single layer (or monolayer) of graphene in a semiconductor structure that uses silicon carbide material type 6H-SiC as a substrate. Figure 1 shows one of the possible designs for a semiconductor structure using a single layer of graphene. For all the dependencies obtained as a result of modeling, except those specified separately, the following dimensions of the structure were taken: length along the x coordinate (DX parameter) -1.0•10 -6 m, height -0.34•10 -8 m (DS parameter, y coordinate), width (z coordinate) -8•10 -6 m ( Figure 1). The values of the DA (drain length) and DB (source length) parameters were taken as the same and equal to 0.2•10 -6 m. The gate length (DZ parameter) was also assumed to be 0.2•10 -6 m. The thickness of the graphene layer was assumed to be 0.34•10 -9 m. Calculations were made for the temperature value T = 300 K. The electron drift channel was formed along the x coordinate by applying constant bias voltages, which were applied to the two contact areas -the drain and the gate. For 6H-SiC material, the values of its electrophysical parameters were selected from the data presented in [4,5] at a concentration of electrons equal to 1•10 17 cm -3 . The number of simulated particles for the entire structure with a layer of graphene and a SiC substrate was assumed to be 10000. Input and output of electrons from the structure according to the Monte Carlo procedure [6,7] was carried out from the contact regions 3 of the source and 6 of the drain ( Figure 1), but the regions 3 and 6, as well as region 5, were not considered in the modeling procedures. The design of the transistor presented in Figure 2 is generally similar to that presented in Figure 1, but the length of the graphene layer is shortened, and the value of parameters DK and DM ( Figure 2) is 2•10 -8 m. Thus the layer of graphene is located between contact areas of a source and a drain (elements 3 and 6, accordingly, in Figure 1) so that these areas are partially placed on a semiconductor substrate from silicon carbide (element 2 in Figure 2), and partially on a layer of graphene (element 1 in Figure 2). At modeling of electron transfer processes in silicon carbide material of 6H-SiC type, the model consisting of M 1 -L-M 2 valleys of conductivity zone Devices andMethods of Measurements 2020, vol. 11, no. 4, pp. 298-304 V.N. Mishchenka Приборы и методы измерений 2020. -Т. 11, № 4. -С. 298-304 V.N. Mishchenka having the lowest energy was used. The value of the gap between the M 1 and M 2 valleys was assumed to be 0.18 eV [5,6]. For the region of the structure containing graphene, the mechanisms of scattering of electrons on optical phonons, on impurities, on acoustic phonons were taken into account [8,9]; electron-electronic scattering was also additionally considered and its analysis is presented in [11]. In the developed program using the Monte-Carlo method for the analysis of electron drift in the region consisting of 6H-SiC material, the scattering mechanisms on optical phonons, on impurities, on acoustic phonons, as well as the inter-valley scattering between nonequivalent valleys [10]. 
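The scattering model described above (acoustic, optical, impurity, and inter-valley mechanisms, with 10,000 simulated particles) is applied inside a standard ensemble Monte-Carlo loop: draw a free-flight time from the total rate, drift the wave vector in the local field, then pick a scattering mechanism stochastically. The sketch below shows that single-particle kernel in schematic form; the rate values, the field, and the self-scattering ceiling are placeholders, since the actual rates are tabulated from the cited 6H-SiC and graphene models.

```python
# Schematic single-particle kernel of an ensemble Monte-Carlo transport step.
# Rates and field are illustrative; in the real simulator they come from the
# tabulated 6H-SiC / graphene scattering models and the Poisson solver.

import math
import random

Q = 1.602e-19          # elementary charge, C
HBAR = 1.055e-34       # reduced Planck constant, J*s

def free_flight(gamma_max):
    """Draw a free-flight time from the total (self-scattering padded) rate, 1/s."""
    return -math.log(random.random()) / gamma_max

def drift(k, field, dt):
    """Update the wave vector during the flight (electron convention, k in 1/m)."""
    return tuple(ki - Q * fi * dt / HBAR for ki, fi in zip(k, field))

def select_mechanism(rates, gamma_max):
    """Pick a scattering mechanism (or self-scattering) by rejection."""
    r = random.random() * gamma_max
    running = 0.0
    for name, rate in rates.items():
        running += rate
        if r < running:
            return name
    return "self-scattering"

if __name__ == "__main__":
    rates = {"acoustic": 1e12, "optical": 5e12, "impurity": 2e12}  # 1/s, illustrative
    gamma_max = 1e13
    k = (1e8, 0.0, 0.0)
    dt = free_flight(gamma_max)
    k = drift(k, field=(1e7, 0.0, 0.0), dt=dt)
    print(dt, k, select_mechanism(rates, gamma_max))
```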
To investigate the electron transfer process in graphene, the linear dependence of the energy E of the electrons on the wave vector k was used, which holds in the range of energies usually considered [8,9]: E = ħ υ_F (k_x² + k_y² + k_z²)^(1/2), where k_x, k_y, k_z are the components of the wave vector (wave numbers) along the coordinates x, y, z, respectively; υ_F is the Fermi velocity in graphene, the value of which is usually taken as 1.0·10^8 cm/s; and ħ is the reduced Planck constant. Figure 3 presents the drain output current characteristics: curve 1 was obtained for the structure shown in Figure 1, using a single layer of graphene; curve 2 was obtained for the structure shown in Figure 2, using a single graphene layer of the same thickness as for curve 1. For all curves in this figure, the DC drain voltage U_o was equal to 1.5 V. Figure 4 shows the dependencies of the drain output current I and of the transconductance of the output characteristic g_m on the constant gate voltage U_g for a structure without a single layer of graphene, with all other parameters the same as in the structure shown in Figure 1; here the graphene layer shown in Figure 1 was replaced by a layer of silicon carbide of similar size, so that a homogeneous layer of this material is formed. Simulation results The analysis of curves 1 and 2 in Figure 3 and curve 1 in Figure 4 shows that using a layer of graphene placed between the drain and the source increases the output current several times for the same size of semiconductor structure and the same direct voltages at the drain and the gate. Figure 5 shows the dependencies (curves 1 and 2) of the transconductance of the output characteristic g_m on the direct gate voltage U_g, obtained from the output current characteristics presented in Figure 3 by curves 1 and 2, respectively. The analysis of these dependencies shows that the maximum transconductance g_m reaches approximately 0.075 mS at a gate voltage of approximately 0 V for structure 1 shown in Figure 1. For structure 2, shown in Figure 2, the transconductance g_m has lower values compared to structure 1, reaching a maximum of approximately 0.031 mS at a gate voltage of approximately 0 V. The analysis of dependencies 1 and 2 in Figure 5 and curve 2 in Figure 4 shows that the use of a graphene layer increases the transconductance of the output characteristic several times, and thus increases the gain of the structure. The dependencies shown in Figure 6 (output drain current as a function of the drain voltage) are obtained at a gate voltage U_g equal to minus 0.15 V. Curve 1 is obtained for the design shown in Figure 1, using a single layer of graphene. Curve 2 is obtained for the structure shown in Figure 2, which has a shorter graphene layer than the structure shown in Figure 1. Curve 3 is obtained for the transistor structure described above, which does not contain a layer of graphene.
The analysis of curves 1 and 2 shows that the output drain current has a feature not observed in conventional transistor designs without a layer of graphene: the current is not equal to zero at a drain voltage equal to zero. The closed state of the transistor whose design is shown in Figure 1 is observed at a drain voltage of approximately minus 0.29 V, and that of the transistor shown in Figure 2 at a drain voltage of approximately minus 0.015 V. The theoretical possibility of operating the semiconductor structure shown in Figure 1, which has a total structure length of 1·10^-6 m and a gate length of 0.2·10^-6 m, as an amplifier for signals in the EHF range was studied. Curve 1 in Figure 7 shows the output current dependence of this semiconductor structure obtained by simulation. In this case, an external harmonic signal with a frequency of 200 GHz and amplitude U_s = 0.15 V (curve 2 in Figure 7) was applied to the gate of the semiconductor structure at a constant gate voltage U_g equal to minus 0.15 V. Curve 3 in Figure 7 shows the output current versus time for a semiconductor structure built using 6H-SiC material but without a layer of graphene. The constant voltage applied to the drain was 1.5 V, while the constant gate voltage was minus 0.15 V for both calculation options. Analysis of Figure 7 (curve 1) shows that the semiconductor structure with a layer of graphene is capable of transmitting and amplifying an input signal at 200 GHz in the EHF range for the structure dimensions considered here (total length 1·10^-6 m, gate length 0.2·10^-6 m). A conventional semiconductor structure based on silicon carbide but without graphene is unable to transmit and amplify input signals at 200 GHz because of the low velocity and mobility of charge carriers in silicon carbide (Figure 7, curve 3). Thus, the introduction of a layer of graphene into a transistor design with a silicon carbide substrate allows a significant expansion of the amplifier's frequency range into the EHF band. Figure 8 shows the real part of the complex spectral amplitude, obtained by a direct Fourier transform of the drain current values. This transformation was applied to the output current time series represented by curve 1 in Figure 7 (a minimal sketch of this spectral analysis is given after the Conclusion). The analysis of Figure 8 shows that the spectrum of the output signal contains a component at 200 GHz. The analysis of Figures 7 and 8 shows that the semiconductor structure with a layer of graphene is able to transmit and amplify the input signal in the EHF range when an appropriate gate length is selected. Conclusion Modeling of the output characteristics of field-effect transistors containing a single graphene layer was performed using the Monte-Carlo method and the solution of the Poisson equation. The simulation results show that the use of graphene in semiconductor compounds opens up new opportunities for improving high-frequency field-effect transistors due to the high charge transfer rate, good scalability prospects, high thermal conductivity, and a number of other advantages.
For graphene semiconductor structures it is possible to achieve a higher average speed of electrons than in similar silicon transistors and transistors, which are based on other known semiconductor materials. Due to the use of graphene with such characteristics of charge carriers transfer it is possible to achieve high current densities in the open state and high values of transconductance, which provides good functioning characteristics and high operating frequency of field effect transistors. The results obtained allow predicting the wide use of transistors using graphene layers for amplifiers and other devices with high output characteristics. The developed designs of graphene field-effect transistors can find wide application in radio-electronic, radar and radio-navigation systems due to expected significant improvement of output characteristics of semiconductor devices designed for operation in the microwave and microwave frequency bands.
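The spectral analysis behind Figure 8 (the real part of the Fourier transform of the simulated drain-current record, with a peak expected at the 200 GHz drive frequency) can be sketched as follows. The waveform used here is a synthetic stand-in for the Monte-Carlo output current (curve 1 of Figure 7), and the sampling step is an assumed value.

```python
# Minimal sketch of the Figure-8-style spectral analysis of the drain current.
# The time series below is synthetic; in the paper it is the simulated current.

import numpy as np

def current_spectrum(i_t, dt):
    """Return (frequencies in Hz, real part of the complex drain-current spectrum)."""
    spectrum = np.fft.rfft(i_t)
    freqs = np.fft.rfftfreq(len(i_t), d=dt)
    return freqs, spectrum.real

if __name__ == "__main__":
    dt = 1e-14                                        # 10 fs sampling step (assumed)
    t = np.arange(0, 50e-12, dt)                      # 50 ps record
    i_t = 1.0 + 0.3 * np.cos(2 * np.pi * 200e9 * t)   # stand-in drain current, a.u.
    freqs, re_spec = current_spectrum(i_t, dt)
    peak = freqs[np.argmax(np.abs(re_spec[1:])) + 1]  # skip the DC bin
    print(f"Dominant AC component near {peak / 1e9:.0f} GHz")
```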
2020-12-24T09:09:42.373Z
2020-12-17T00:00:00.000
{ "year": 2020, "sha1": "f84b07f2f9ec023f143bca9c65c7ae46c9daa722", "oa_license": "CCBY", "oa_url": "https://pimi.bntu.by/jour/article/download/683/570", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "bfa4f6437fedf5dc1fe65b7b5cfab5621e473d44", "s2fieldsofstudy": [ "Engineering", "Physics", "Materials Science" ], "extfieldsofstudy": [ "Materials Science" ] }
118583695
pes2o/s2orc
v3-fos-license
Supergravity in (2+1) dimensions from (3+1)-dimensional Supergravity In the context of the formalism proposed by Stelle-West and Grignani-Nardelli, it is shown that Chern-Simons supergravity can be consistently obtained as a dimensional reduction of (3+1)-dimensional supergravity, when written as a gauge theory of the Poincare group. The dimensional reductions are consistent with the gauge symmetries, mapping (3+1)-dimensional Poincare supergroup gauge transformations onto (2+1)-dimensional Poincare supergroup ones. I. INTRODUCTION Supergravity in (2 + 1) [1,2] and in (3 + 1) [3,4] dimensions can be formulated as a gauge theory of the Poincarè superalgebra. The first-order formalism permits one to write the three dimensional supergravity as a Chern-Simons theory [5], for which (2 + 1)-dimensional supergravity is a good theoretical laboratory for the construction of a quantum theory [6]. Then it is interesting to find a link between supergravities in (2 + 1) and in (3 + 1) dimensions. The action for supergravity in (2 + 1)-dimensions S = ε abc R ab e c + 4ψDψ , with ψ a two component Majorana spinor, is invariant under Lorentz rotations, Poincarè translations and supersymmetry transformations. The dreibein e a µ , the spin connection ω ab µ and the gravitino ψ a µ transform as components of a connection for the super Poincarè group. This means that the supersymmetry algebra implied by the corresponding supersymmetry transformations is the super Poincarè algebra. (3 + 1)-dimensional supergravity invariant under the Poincarè supergroup is based on the supersymmetric extension of the Stelle-West-Grignani-Nardelli formalism (SW GN ) [3,7,10]. The fundamental idea of the formalism is founded on the definition [3,7,9] of the vierbein V A and the gravitino Ψ, which involves the Goldstone fields ξ A , χ. In the supersymmetric extension of the (SW GN )-formalism: (i) the vierbein V A is not identified with the component e A of the gauge potential along the translation generators, but is given by (ii) the gravitino field is not identified with the component ψ of the gauge potential along the supersymmetry generator, but is given by Ψ = ψ + Dχ * Electronic address: pasalgad@udec.cl † Electronic address: fizaurie@udec.cl ‡ Electronic address: edurodriguez@udec.cl The purpose of the present work is to find the supersymmetric extension of the successful formalism of refs. [10,11]. This means that, in the context of the procedure of refs. [10,11], (3+1)-dimensional supergravity can be dimensionally reduced to Chern-Simons supergravity. This procedure can be used because both supergravity in (2 + 1) [5] and supergravity in (3 + 1)-dimensions [3,4] can be formulated as theories genuinely invariant under the Poincarè supergroup. The paper is organized as follows: In sec.II, we shall review some aspects of the Supersymmetric extension of the Stelle-West formalism and of supergravity as a gauge theory of the Poincarè supergroup. The dimensional reduction is carried out in sec.III where the principal features of the dimensional reduction process are presented. Section IV concludes the work with brief comment. II. SUPERGRAVITY INVARIANT UNDER THE POINCARÈ GROUP In this section we shall review some aspects of the Supersymmetric extension of the Stelle-West formalism and of supergravity as a gauge theory of the Poincarè group. The main point of this section is to display the differences in the invariances of the supergravity action when different definitions of vierbein are used. A. 
Non-Linear realizations The non-linear realizations can be studied by the general method developed in ref. [12,13]. Following these references, we consider a Lie (super)group G and a subgroup H. Let us call {V i } n−d i=1 the generators of H. We assume that the remaining generators {A l } d l=1 are chosen so that they form a representation of H. In other words, the commutator [V i , A l ] should be a linear combination of A l alone. A group element g ∈ G can be represented (uniquely) in the form where h is an element of H. The ξ l parametrize the coset space G/H. We do not specify here the parametrization of h. One can define the effect of a group element g 0 on the coset space by or where If g 0 − 1 is infinitesimal, (4) implies The right-hand side of (8) is a generator of H. Let us first consider the case in which g 0 = h 0 ∈ H. Then (4) gives Since the A l form a representation of H, this implies The transformation from ξ to ξ ′ given by (9) is linear. On the other hand, consider now . In this case (4) becomes This is a non-linear inhomogeneous transformation on ξ. The infinitesimal form (8) becomes The left-hand side of this equation can be evaluated, using the algebra of the group. Since the results must be a generator of H, one must set equal to zero the coefficient of A l . In this way one finds an equation from which δξ i can be calculated. The construction of a Lagrangian invariant under coordinate-dependent group transformations requires the introduction of a set of gauge fields a = a i associated respectively with the generators V i and A l . Hence ρ + a is the usual linear connection for the gauge group G, and the corresponding covariant derivative is given by: and its transformation law under g ∈ G is where f is a constant which, as it turns out, gives the strength of the universal coupling of the gauge fields to all other fields. We now consider the Lie algebra valued differential form [12] The transformation laws for the forms p(ξ, dξ) and v(ξ, dξ) are easily obtained. In fact, using (11),(12) one finds [14] p The equation (17) shows that the differential forms p(ξ, dξ) are transformed linearly by a group element of the form (11). The transformation law is the same as by an element of H, except that now this group element h 1 (ξ 0 , ξ) is a function of the variable ξ. Therefore any expression constructed with p(ξ, dξ) which is invariant under the subgroup H will be automatically invariant under the entire group G, the elements of H operating linearly on ξ, the remaining elements non-linearly. B. Supersymmetric Stelle-West Formalism The basic idea of the Stelle-West formalism is founded on the non-linear realizations in anti de Sitter space [7]. The supersymmetric extension of this formalism [4] is based in the non-linear realizations of supersymmetry in anti de Sitter space [14]. The formalism consider as G the graded Lie algebra having as generators Q α , P A and M AB . It has as a subalgebra H that of the de Sitter group SO(3, 2) with generators P A and M AB . This, in turn, has as subalgebra L that of the Lorentz group SO(3, 1) with generators M ab . An element of G can be uniquely represented in the form where h ∈ H and l ∈ L. On can define the effect of a group element g 0 on the coset space G/H by Clearly h 1 = h 1 (g 0 , χ) and l 1 = l 1 (g 0 , χ, ξ). We consider now the following cases: Both χ and ξ transform linearly. 
If, on the other hand, we know only that g 0 = h 0 ∈ H, in particular, if is a pseudo-translation, (22) gives while (23) gives In this case χ transforms linearly, but the transformation law (33) of ξ under pseudo-translations is inhomogeneous and non-linear. Infinitesimally is a supersymmetry transformation, one must use (22) and (23) as they stand. Observe, however, that (23) has the same form as (33) except for the fact that h 1 is a function of χ while h 0 is not. Therefore, the transformation law for ξ under a supersymmetry transformation has the same form as that under a de Sitter transformation but, with parameters which depend in a well defined way on χ. An explicit form for the transformation law of ξ a under an infinitesimal AdS boost can be obtained from (34). The result is where z = m (ξ a ξ a ) = mξ. The transformation of ξ A under an infinitesimal Lorentz transformation l 0 = e i 2 κ AB JAB is and, under local supersymmetry transformation (35), ξ A transforms as Using (25) with g 0 − 1 = εQ, one finds that Working in first order formalism, the gauge fields vierbein e A , spin connection ω AB and gravitino ψ are treated as independent. The key observation is that (e A , ω AB , ψ), considered as a single entity, constitute a multiplet in the adjoint representation of the AdS supergroup. That is, we can write: where A is the gauge field of the AdS supergroup, P A , J AB , Q α being the generators of the AdS boosts. Then, based on these, we can define the corresponding non-linear connections (V a , W ab , Ψ) from (16): (42) If G = OSp(1, 4) and H = SO(3, 2), the gauge fields V A form a square 4 × 4 matrix which is invertible and can be identified with the vierbein fields. In the same way we have that W AB is a connection and that Ψ can be identified with the Rarita-Schwinger field. From (42) one can obtain the fieldsV A , W AB , Ψ in terms of the fields e A , ω AB , ψ. The results are given in equations (81), (83) and (84) of ref. [4]. The corresponding transformation laws for V a , W ab , Ψ can be obtained from (17), (18). In fact, one can verify that, under the AdS supergroup, the non-linear connections transform as: The nonlinearity of the transformation with respect to the elements of G/H means that the labels associated with the parts of the algebra of G which generate G/H are no longer available as symmetry indices. In other words, the symmetry has been spontaneously broken from G to H. An irreducible representation of G will, in general, have several irreducible pieces with respect to H. Since, in constructing invariant actions, one only needs index saturation with respect to the subgroup H, as far as the invariance is concerned it is possible to select a subset of nonlinear fields with respect to G, which form irreducible multiplets with respect to H. C. Supergravity invariant under the AdS group Within the supersymmetric extension of the Stelle-West-formalism, the action for supergravity with cosmological [15] constant can be rewritten as which is invariant under the supersymmetric extension of the AdS group. From such equations we can see that the vierbein V a and the gravitino field transform homogeneously according to the representation of the AdS superalgebra but, with the nonlinear group element h 1 ∈ H. The corresponding equations of motion are obtained by varying the action with respect to ξ a , χ, e a , ω ab , ψ. The field equations corresponding to the variation of the action with respect to ξ a and χ are not independent equations. 
Following the same procedure of Ref. [16], we find that equations of motion for supergravity genuinely invariant under Super AdS are: (ii)the transformation laws of ξ A under an infinitesimal Poincarè translation, under an infinitesimal Lorentz transformation, and under a local supersymmetry transformation are given respectively by where ρ A , κ AB = −κ BA and ε are the infinitesimal parameters corresponding to Poincarè translations, Lorentz rotations and supersymmetry respectively (iii) the transformation laws of χ under an infinitesimal Poincarè translation, under an infinitesimal Lorentz transformation, and under a local supersymmetry transformation are given respectively by In this limit G is the Poincarè supergroup and H = SO(3, 1); and the fields vierbein V A , the connection W AB , and the Rarita-Schwinger field Ψ are given by where now The corresponding components of the curvature two-form are now A. Supergravity in (3 + 1) The limit m → 0 of the action 46 is obviously the action for N = 1 Supergravity in (3 + 1)-dimensions: which is genuinely invariant under the Poincarè group. In fact, d = 3 + 1, N = 1 supergravity is based on the Poincarè supergroup, whose generators P A , J AB , Q α satisfy the Lie-superalgebra (53). Using this algebra and the general form for gauge transformations on A we obtain that e A , ω AB , and ψ, under local Lorentz rotations, transform as (69) under local Poincarè translations, transform as and under local supersymmetry transformations, as This means that the vierbein V A transforms, under the Poincarè supergroup, as The space-time supertorsion ∧ T A is given by where It is direct to verify that the action (66) IV. COMMENTS We have shown that the successful formalism proposed in refs. [10] and [11] can be extended to the supersymmetric case. That is, (3 + 1)-dimensional supergravity can be dimensionally reduced to supergravity in (2 + 1)dimensions following the method of refs. [10] and [11]. Finally we can say that supergravity genuinely invariant under the Poincarè supergroup [3], [4] is a natural context to connect, preserving the invariance under the Poincarè supergroup, such a theory with (2 + 1)dimensional supergravity. It is interesting to note that all the terms containing ξ a , χ disappear from the action and that e a , ψ can be interpreted as the space-time dreibein and gravitino, and yet the theory is invariant under the Poincarè supergroup, contrary to what happens in (3 + 1)-dimensions. The absence of the ξ a , χ variables of (91) and the interpretation of e a , ω ab and ψ as gauge fields makes of (91) an action that can be conceived as a Chern-Simons three form.
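Most of the display equations in this record were lost in extraction. As a minimal reconstruction, the (2+1)-dimensional action quoted in the introduction and the gravitino redefinition of the Stelle-West-Grignani-Nardelli construction can be written as below; the wedge products, the conjugate-spinor bar, and the Goldstone completion of the vierbein (written only schematically) are assumptions consistent with the surrounding text rather than verbatim recoveries.

```latex
% Reconstructed from the garbled inline formulas in this record (assumed forms).
\begin{align}
  S_{2+1} &= \int \varepsilon_{abc}\, R^{ab}\wedge e^{c} \;+\; 4\,\bar{\psi}\, D\psi , \\
  \Psi &= \psi + D\chi , \qquad
  V^{A} \simeq e^{A} + D\xi^{A} + \bigl(\text{terms bilinear in } \chi,\ \psi\bigr).
\end{align}
```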
2019-04-14T02:51:37.492Z
2003-06-24T00:00:00.000
{ "year": 2003, "sha1": "5afd9cc825a28ca38f111047fe737669cd856119", "oa_license": null, "oa_url": "http://arxiv.org/pdf/hep-ph/0403218", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "969305279fa58a0e14a56890faba6b3b9519ca17", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
238408199
pes2o/s2orc
v3-fos-license
NEWRON: A New Generalization of the Artificial Neuron to Enhance the Interpretability of Neural Networks In this work, we formulate NEWRON: a generalization of the McCulloch-Pitts neuron structure. This new framework aims to explore additional desirable properties of artificial neurons. We show that some specializations of NEWRON allow the network to be interpretable with no change in their expressiveness. By just inspecting the models produced by our NEWRON-based networks, we can understand the rules governing the task. Extensive experiments show that the quality of the generated models is better than traditional interpretable models and in line or better than standard neural networks. Introduction Neural Networks (NNs) have now become the de facto standard in most Artificial Intelligence (AI) applications. The world of Machine Learning has moved towards Deep Learning, i.e., a class of NN models that exploit the use of multiple layers in the network to obtain the highest performance. Research in this field has focused on methods to increase the performance of NNs, in particular on which activation functions (Apicella et al. 2021) or optimization method (Sun et al. 2019) would be best. Higher performances come at a price: (Arrieta et al. 2020) show that there is a trade-off between interpretability and accuracy of models. Explainable Artificial Intelligence (XAI) is a rapidly growing research area producing methods to interpret the output of AI models in order to improve their robustness and safety (see e.g. (Ghorbani, Abid, and Zou 2019) and (Bhatt, Ravikumar et al. 2019)). Deep Neural Networks (DNNs) offer the highest performance at the price of the lowest possible interpretability. It is an open challenge to attain such high performance without giving up on model interpretability. The simplest solution would be to use a less complex model that is natively interpretable, e.g., decision trees or linear models, but those models are usually less effective than NNs. We ask the following question: can we design a novel neural network structure that makes the whole model interpretable without sacrificing effectiveness? NNs are black-box models: we can only observe their input and output values with no clear understanding of how those two values are correlated according to the model's parameters. Although a single neuron in the NN performs a relatively simple linear combination of the inputs, there is no clear and straightforward link between the parameters estimated during the training and the functioning of the network, mainly because of the stacking of multiple layers and non-linearities. In this work, we propose a generalization of the standard neuron used in neural networks that can also represent new configurations of the artificial neuron. Thus, we discuss a specific example that allows us to interpret the functioning of the network itself. We focus our efforts on tabular data since we investigate how NEWRON works only in the case of fully connected NNs. It is more straightforward to produce human-readable rules for this kind of data. We also remark that our goal is not to improve the performance of NNs, but rather to create interpretable versions of NNs that perform as well as other interpretable models (e.g., linear/logistic regression, decision trees, etc.) and similarly to standard NNs, when trained on the same data. Motivating Example Consider a simple dataset: MONK's 1 . 
Each sample consists of 6 attributes, which take integer values between 1 and 4, and a class label determined by a decision rule over the 6 attributes. For example, in MONK-2 the rule that defines the class of each sample is the following: "exactly two" out of the six attributes are equal to 1. It is impossible to intuitively recover such rules from the parameter setting of a traditional, fully connected NN. We shall see in the following that our main idea is that of inverting the activation and the aggregation: in NEWRON the nonlinearity directly operates on the input of the neuron. The nonlinearity acts as a thresholding function on the input, making it directly interpretable as a (fuzzy) logical rule by inspecting its parameters. Consider the following network, represented in Figure 1: 2 hidden layers, the first with 1 neuron, the second with 2 neurons, and 1 output neuron. The x_i's are the inputs of the model, y is the output. We present the form of a typical architecture composed of NEWRONs in Figure 1, and show how we can interpret the parameters obtained from a trained network. The rectangles represent the plot of a function that divides the input domain into two intervals, separated by the number below the rectangle, taking values 1 and 0. Figure 1: An example of a network for the MONK-2 dataset. x_i are the inputs, y is the output. The red and blue rectangles represent the plot of functions, with input range on the x-axis and output on the y-axis. The green rectangles contain the aggregation function. The numbers in bold represent the thresholds for the step functions. The functions that process the input give output 1 only if the input is less than 1.1; given that inputs are integers and take values only in {1, 2, 3, 4}, this means "if x_i = 1". The sum of the outputs of all these functions, depicted in the green rectangle, then represents how many of those rules are satisfied. The second layer has two neurons: the first outputs 1 if it receives an input greater than 1.9, i.e. if at least 2 of the rules x_i = 1 hold, while the second outputs 1 if it receives an input less than 2.1, i.e. if 2 or fewer of the rules x_i = 1 hold. Notice that the two neurons are activated simultaneously only if x_i = 1 is true for exactly two attributes. In the last layer, the functions in the blue rectangles receive values in {0, 1} and do not apply any transformation, keeping the activation rules unchanged. The sum of the outputs of these functions is then passed to the function in the red rectangle, which outputs 1 only if its input is greater than 1.9. Since the sum is restricted to {0, 1, 2}, this happens only when it receives 2 as input, which occurs only if both central neurons are activated. As we have seen, this holds exactly when 2 of the rules x_i = 1 are valid. Contributions The main contributions of this work are the following: • We propose NEWRON, a generalization of the McCulloch-Pitts neuron structure. Related Work One line of prior work introduces a neuron that performs both a sum and a product of the inputs in parallel, applies a possibly different activation function to the two results, and then sums the two outcomes. Despite promising results, given the use of fewer parameters, better performance, and reduced training time compared to standard MLPs and RNNs, the proposed neuron, rather than being a generalization, is a kind of union between two standard neurons, one of which uses the product, instead of the sum, as aggregation function.
A second line of work, starting from the notion that the traditional neuron performs a first-order Taylor approximation, proposes a neuron using a second-order Taylor approximation. Although this improves the capacity of a single neuron, the authors do not demonstrate any gains in terms of training time or convergence. Indeed, this can be considered a particular case of higher-order neural units. A further work proposes an alternative neural network structure, based on a Binning Layer, which divides the single input features into several bins, and a Kronecker Product Layer, which takes into account all the possible combinations between bins. The parameters estimated during training can be interpreted to translate the network into a decision tree through a clever design of the equations defining the network. Although interpretable, the main issue of this work is its scalability: the Kronecker Product Layer has an exponential complexity that makes training time unfeasible when the number of features grows. The NEWRON Structure A neuron, in the classical and more general case, is represented by the equation f(Σ_{i=1}^{n} w_i·x_i + b), where b is called the bias, the w_i are the weights, and the x_i are the inputs. f represents the activation function of the neuron. Usually, we use the sigmoid, hyperbolic tangent, or ReLU functions. We first generalize the above equation, introducing NEWRON as follows: f(G(h_1(x_1), h_2(x_2), ..., h_n(x_n))). Figure 3: Structure of NEWRON, the generalized artificial neuron. The blue rectangles represent the processing function sections, the green rectangles contain the aggregation function, and the red rectangles represent the activation part. The same colors are also used in Figure 2. Each input is first passed through a function h_i, which we will call the processing function, where the dependence on i indicates different parameters for each input. G, instead, represents a generic aggregation function. Using NEWRON notation, the standard artificial neuron would consist of the following: h_i(x_i) = w_i·x_i, G = Σ, and f_b(z) = f(z + b); G does not have any parameters, while b parametrizes the activation function. Inverted Artificial Neuron (IAN) We present 3 novel structures characterized by an inversion of the aggregation and activation functions. We name this architectural pattern: Inverted Artificial Neuron (IAN). In all the cases we consider the sum as the aggregation function and do not use any activation function: G = Σ, and f(z) = z. Heaviside IAN The first case we consider uses the unit step function as its processing nonlinearity. This function, also called the Heaviside function, is expressed by the following equation: H(z) = 1 if z ≥ 0, and H(z) = 0 otherwise. According to (1) we can define the processing function as h_i(x_i) = H(w_i(x_i − b_i)), where w_i and b_i are trainable parameters. Sigmoid IAN We cannot train the Heaviside function using gradient descent, and it represents a decision rule that in some cases is too restrictive and not "fuzzy" enough to deal with constraints that are not clear-cut. A natural evolution of the unit step function is therefore the sigmoid function σ(x) = 1/(1 + e^{−x}). This function ranges in the interval (0, 1), is constrained by a pair of horizontal asymptotes, is monotonic, and has exactly one inflection point. The sigmoid function can be used as a processing function with the following parametrization: h_i(x_i) = σ(w_i(x_i − b_i)). Product of tanh IAN Another option we consider as a processing function is the multiplication of hyperbolic tangents (tanh); for simplicity, we will use the term "tanh-prod". The tanh function, tanh(x) = (e^{2x} − 1)/(e^{2x} + 1), is on its own very similar to the sigmoid. An interesting architecture is one using M tanh simultaneously.
Each tanh applies its own weights to each individual input. While the sigmoid is monotonic with only one inflection point, roughly dividing the input space into two sections, the multiplication of tanh, being non-monotonic, allows us to divide the input space into several intervals. The multiplication would remain in (−1, 1), but can easily be rescaled to (0, 1). We can therefore write the processing function in the case of the tanh multiplication as h_i(x_i) = (Π_{m=1}^{M} tanh(w_{i,m}(x_i − b_{i,m})) + 1)/2. Note how, in this case, the weights depend on both the input i and the m-th function. Such a neuron will therefore have M times more parameters than the Heaviside and sigmoid cases. Output layer The output layer would produce values ranging in the interval (0, N) ({0, 1, ..., N} for the Heaviside case), where N represents the number of neurons in the penultimate layer. This is because the last neuron sums N processing functions restricted to the interval (0, 1) ({0, 1} for the Heaviside case). To allow the last layer to have a wider output range, and thus make our network able to reproduce a wider range of functions, we modify the last-layer processing function h* as follows: h*_i(x_i) = α_i·h_i(x_i), where the α_i are trainable parameters. In the same way as for a traditional neural network, it is important, in the output layer, to choose an adequate activation function: we need to match the range of the output of the network with the range of the target variable. In particular, in the case of output in (0, 1), we use a sigmoid. In the case of a classification problem with more than 2 classes, a softmax function (s(z_j) = e^{z_j} / Σ_l e^{z_l}) is used to output probabilities. Note(s) The writing w(x − b) is theoretically identical to the writing w*·x + b*, where simply w* = w and b* = −b·w. This notation allows us to interpret the weights directly. From b, we already know the inflection point of the sigmoid; while looking at w, we immediately understand its direction. Interpretability (Arrieta et al. 2020) presented a well-structured overview of concepts and definitions in the context of Explainable Artificial Intelligence (XAI). They make a distinction among the various terms that are mistakenly used as synonyms for interpretability. According to them: • Interpretability: is seen as a passive feature of the model and represents the ability of a human to understand the underlying functioning of a decision model, focusing more on the cause-effect relationship between input and output. • Transparency: very similar to interpretability, as it represents the ability of a model to have a certain degree of interpretability. There are three categories of transparency, representing the domains in which a model is interpretable. Simulatable models can be emulated even by a human. Decomposable models must be explainable in their individual parts. For algorithmically transparent models, the user can understand the entire process followed by an algorithm to generate the model parameters and how the model produces an output from the input. • Explainability: can be seen as an active feature of a model, encompassing all actions that can detail the inner workings of a model. The explanation represents a kind of interface between a human and the model and must at the same time represent well the functioning of the model and be understandable by humans. In this paper, we show decomposable models that, in some cases, are also algorithmically transparent.
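To make the three variants concrete, the following is a minimal NumPy sketch of the processing functions and of the sum-only IAN aggregation described above. It is our own illustration rather than the authors' code; it assumes the h(x) = H(w(x − b)), σ(w(x − b)), and rescaled tanh-product forms given above, and the toy thresholds in the usage example are arbitrary.

```python
import numpy as np

def heaviside_processing(x, w, b):
    # Heaviside IAN: h(x) = H(w * (x - b)), with H(z) = 1 for z >= 0, else 0.
    return np.heaviside(w * (x - b), 1.0)

def sigmoid_processing(x, w, b):
    # Sigmoid IAN: h(x) = sigma(w * (x - b)), a fuzzy version of the step rule.
    return 1.0 / (1.0 + np.exp(-w * (x - b)))

def tanh_prod_processing(x, w, b):
    # tanh-prod IAN with M factors per input, rescaled from (-1, 1) to (0, 1):
    # h(x) = (prod_m tanh(w_m * (x - b_m)) + 1) / 2; w and b have shape (M,).
    return (np.prod(np.tanh(np.asarray(w) * (x - np.asarray(b)))) + 1.0) / 2.0

def ian_neuron(inputs, processing, params):
    # IAN aggregation: each input goes through its own processing function,
    # the results are summed, and no activation is applied (f(z) = z).
    return sum(processing(x_i, *p) for x_i, p in zip(inputs, params))

# Toy usage: two inputs, each with the step rule "x_i < 1.1" (w < 0, b = 1.1),
# mirroring the "x_i = 1" rules of the MONK-2 example.
inputs = [1.0, 3.0]
params = [(-1.0, 1.1), (-1.0, 1.1)]
print(ian_neuron(inputs, heaviside_processing, params))  # 1.0: one rule satisfied
```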
Heaviside The interpretability of an architecture composed of Heaviside IANs has to be analyzed by discussing its four main sections separately. First layer -Processing function A single processing function h(x) = H(w(x − b)) divides the space of each variable x in two half-lines starting from b, one of which has a value of 1 and one of which has a value of 0, depending on the sign of w. Aggregation Using sum as the aggregation function, the output takes values in {0, 1, ..., n}; where 0 corresponds to a deactivation for each input, and n represents an activation for all inputs, and the intermediate integer values {1, 2, ...k, ..., n − 1} represent activation for k of inputs. (5) where we simplified the notation using h i = h (x i ). The Heavisides of the layers after the first one receive values in {0, 1, ..., n}, where n represents the number of inputs of the previous layer. In the case where 0 ≤ b ≤ n and w > 0, the Heaviside will output 1 only if the input received is greater than or equal to b, therefore only if at least b of the rules R i of the previous layer are true, which corresponds to a rule of the type b − of − {R 1 , R 2 , ..., R n }. In the opposite case, where 0 ≤ b ≤ n and w < 0, Heaviside will output 1 only if the input received is less than or equal to b, so only if no more than b of the rules of the previous layer are true. This too can be translated to an Mof-N rule, inverting all rules R j and setting M as n − b i : Last layer -Aggregation In the last layer we have to account for the α factors used to weigh the contribution of each input: We have an activation rule for each of the n Heavisides forcing us to calculate all the 2 n possible cases. The contribution of each input is exactly α i . So, the output corresponds to the sum of the α i 's for each subset of inputs considered. Sigmoid In the case of sigmoid IAN, b i represents the inflection point of the function, while the sign of w i tells us in which direction the sigmoid is oriented; if positive, it is monotonically increasing from 0 to 1, while if negative, it is monotonically decreasing from 1 to 0. The value of w i indicates how fast it transitions from 0 to 1, and if it tends to infinity, the sigmoid tends to the unit step function. Sigmoid Interpretation The sigmoid can be interpreted as a fuzzy rule of the type where the absolute value of w i indicates how sharp the rule is. The case w i = 0 will always give value 0.5, so that the input does not have any influence on the output. If w i is very large, the sigmoid tends to the unit step function. If, on the other hand, w i takes values for which the sigmoid in the domain of x i resembles a linear function, what we can say is that there is a direct linear relationship (or inverse if w i < 0) with the input. The fuzzy rule can be approximated by its stricter version x i > b i , interpreting fall under the methodology seen for Heaviside. However, this would result in an approximation of the operation of the network. It is more challenging to devise clear decision rules when we add more layers. Imagine, as an example, a second layer with this processing function: where y is the aggregation performed in the previous layer of the outputs of its processing functions, its value roughly indicates how many of the inputs are active. In the second layer, consider as an example a value of w * > 0. To have an activation, this means that we might need k inputs greater than or equal to b * /k. 
Although this does not deterministically indicate how many inputs we need to be true, we know how the output changes when one of the inputs changes. The last case to consider takes into account the maximum and minimum values that the sigmoid assumes in the domain of x. If they are close to each other, that happens when w is very small, the function is close to a constant bearing no connection with the input. Product of tanh The multiplication of tanh has more expressive power, being able to represent both what is represented with the sigmoid, as well as intervals and quadratic relations. tanh-prod Interpretation In this case, it is not possible to devise as quickly as in the previous case decision rules. Indeed, it is still possible to observe the trend of the function and draw some conclusions. When the product of the two tanh resembles a sigmoid, we can follow the interpretation of the sigmoid case. In other cases, areas with quadratic relations can occur, i.e., bells whose peak indicates a more robust activation or deactivation for specific values. Summary of Interpretation The advantage of this method lies in the fact that it is possible to analyze each input separately in each neuron, thus easily graph each processing function. Then, based on the shape taken by the processing function, we can understand how the input affects the output of a neuron. The Heaviside is the most interpretable of our models, allowing a direct generation of decision rules. Sigmoid and tanh-prod cases depend on the parameter w. When it is close to 0, the activation is constant regardless of the input. When w is large enough, the processing function is approximately a piecewise constant function taking only values 0 and 1. In all the other cases, the processing function approximates a linear or bell-shaped function. Even if we can not derive exact decision rules directly from the model, in these cases, we can infer a linear or quadratic relation between input and output. Each layer aggregates the interpretations of the previous layers. For example, the processing function of a second layer neuron gives a precise activation when its input is greater than a certain threshold, i.e., the bias b of the processing function. The output of the neuron of the first layer must exceed this threshold, and this happens if its processing functions give in output values whose sum exceeds this threshold. A separate case is the last layer, where the α parameters weigh each of the interpretations generated up to the last layer. We can interpret a traditional individual neuron as a linear regressor. However, when we add more layers, they cannot be interpreted. Our structure, instead, remains interpretable even as the number of layers increases. Universality A fundamental property of neural networks is that of universal approximation. Under certain conditions, multilayer feed-forward neural networks can approximate any function in a given function space. In (Cybenko 1989) it is proved that a neural network with a hidden layer and using a continuous sigmoidal activation function is dense in C(I n ), i.e., the space of continuous functions in the unit hypercube in R n . (Hornik, Stinchcombe, and White 1989) generalized to the larger class of all sigmoidal functions. 
To make the statement of theorems clearer we recall that the structure of a two-layer network with IAN neurons and a generic processing function h is When the processing function is the Heaviside function we proved that the network can approximate any continuous function on I n , Lebesgue measurable functions on I n and functions in L p (R n , µ) for 1 ≤ p < ∞, with µ being a Radon measure. More precisely, the following theorems hold; we detail the proofs of the theorems in the appendix. Theorem 5.1. When the processing function is the Heaviside function the finite sums of the form (8) are dense in L p (R n , µ) for 1 ≤ p < ∞, with µ being a Radon measure on (R n , B(R n )) (B denote the Borel σ-algebra). Theorem 5.2. When the processing function is the Heaviside function the finite sum of the form (8) are m-dense in M n . Where M n is the set of Lebesgue measurable functions on the n-dimensional hypercube I n . Theorem 5.3. Given g ∈ C(I n ) and given > 0 there is a sum ψ(x) of the form (8) with Heaviside as processing function such that When the processing function is the sigmoid function or tanh-prod, we proved that the finite sums of the form (8) are dense in the space of continuous functions defined on the unit n-dimensional hypercube. Theorem 5.4. When the processing function is a continuous sigmoidal function the finite sums of the form (8) are dense in C(I n ). Theorem 5.5. Let ψ(x) be the family of networks defined by the equation (8) when the processing function is given by (4). This family of functions is dense in C(I n ). Experiments Datasets We selected a collection of datasets from the UCI Machine Learning Repository. We only consider classification models in our experiments. However, it is straightforward to apply NEWRONarchitectures to regression problems. The description of the datasets is available at the UCI Machine Learning Repository website or the Kaggle website. We also used 4 synthetic datasets of our creation, composed of 1000 samples with 2 variables generated as random uniforms between −1 and 1 and an equation dividing the space into 2 classes. The 4 equations used are bisector, xor, parabola, and circle. We give more details about the datasets in the appendix. Methods We run a hyperparameter search to optimize the IAN neural network structure, i.e., depth and number of neurons per layer, for each dataset. We tested IAN with all three different processing functions. In the tanh-prod case, we set M = 2. Concerning the training of traditional neural networks, we tested the same structures used for NEWRON, i.e., the same number of layers and neurons. Finally, we also ran a hyperparameter search to find the best combinations in the case of Logistic Regression (LR), Decision Trees (DT), and Gradient Boosting Decision Trees (GBDT). We include all the technical details on the methods in the appendix. Table 1 presents on each row the datasets used while on the columns the various models. Each cell contains the 95% confidence interval for the accuracy of the model that obtains the best performance. Results Results obtained with the new IAN neurons are better than those obtained by DTs and LRs (interpretable) models. Moreover, IAN's results are on par, sometimes better than, results of traditional NNs and GBDT classifiers. These last two methods, though, are not transparent. Amongst the Heaviside, sigmoid, and tanh-prod cases, we can see that the first one obtains the worst results. 
The reason may be that it is more challenging to train, despite being the most interpretable among the three cases. tanh-prod instead performs slightly better than sigmoid, being more flexible. Sigmoid, being more straightforward to interpret than tanh-prod, could be a good choice, at the expense of a slight decrease in accuracy that remains, however, similar to that of a traditional neural network. Circle dataset example In order to first validate our ideas, we show what we obtained by applying a single neuron using the multiplication of 2 tanh to our custom dataset circle. Figure 4: x_1 and x_2 are the inputs of the network and y is the output. The processing and activation functions are plotted with input on the x-axis and output on the y-axis. Coordinates of the inflection points are indicated above the plots. In Figure 4 we can see how the multiplication of tanh has converged to two bells centred in 0, while α_1 and α_2 have gone to 30. According to the IAN interpretation method, input values below 30 correspond to an activation-function output of 0, while the output is 1 for values above 38. In the middle range, the prediction is more uncertain. Combining this with the previous observation, we can conclude that we need the sum of the two values output by the two processing functions to be greater than 38 to have a prediction of class 1. Therefore, if one of the two inputs is 0 (output 30), it is enough for the other input to be close enough to 0 that its processing function contributes more than 8, so that the sum exceeds 38 and class 1 is predicted. Current limitations The extraction of proper rules from the network can be challenging: in the Heaviside case they might be too long, while in the sigmoid and tanh-prod cases their simplicity depends on the final parameter values. Nevertheless, methods of regularization during training or additional Rule Extraction methods may help to simplify interpretability. We defer the study of regularization to future work. Also, we have not compared NEWRON against state-of-the-art Deep Learning models for tabular data, as our main goal was to show that our formulation is more suitable than traditional neurons when compared against "traditional" interpretable models. Comparisons with more advanced solutions for tabular data will be the subject of future work. Conclusions and Future Work We have introduced the concept of a generalized neuron and proposed three different specializations, along with the corresponding method to interpret the behavior of the network. Also, in cases where we cannot devise exact rules from the network (e.g., in the sigmoid and tanh-prod cases), the structure of the neuron and its parameters allow the visualization of its behavior. Indeed, for every input we apply the nonlinearity before the aggregation, reducing it to a one-dimensional space and allowing the analysis of each input separately. Through universal approximation theorems, we have proved that the new structure retains the same expressive power as a standard neural network. In future studies we will investigate in more detail the expressiveness of IAN-based models with respect to the number of layers or neurons, in arbitrarily deep but width-limited networks and in arbitrarily wide but depth-limited networks. Experiments conducted on both real and synthetic datasets illustrate how our framework can outperform traditional interpretable models, Decision Trees, and Logistic Regression, and achieve similar or superior performance to standard neural networks. In the future, we will investigate the influence of hyperparameters (network depth, number of neurons, processing functions) and initialization on the model quality.
Also, we will refine the analysis of the tanh-prod case as the number of tanh increases. In addition, we will investigate IAN with additional processing functions, such as ReLU and SeLU. Finally, we will extend this method to other neural models, such as Recurrent, Convolutional and Graph Neural Networks. Supplementary Materials A Universality Theorems This is the appendix to the Universality section in the main article. In this section, we shall prove the mathematical results concerning the universal approximation properties of our IAN model. In particular, we restrict ourselves to some specific cases. We consider the cases where the processing function is the Heaviside function, a continuous sigmoidal function ,or the rescaled product of hyperbolic tangents. Heaviside IAN Theorem 5.1. The finite sums of the form with N ∈ N and w ij , w j , α j , b ij , b j ∈ R are dense in L p (R n , µ) for 1 ≤ p < ∞, with µ a Radon measure on (R n , B(R n )) (B denote the Borel σ-algebra). In other words given, g ∈ L p (R n , µ) and > 0 there is a sum ψ(x) of the above form for which To prove that a neural network defined as in equation (9) is a universal approximator in L p , for 1 ≤ p < ∞ we exploit that step functions are dense in L p and that our network can generate step functions. Proposition 1. Let R be the set of the rectangles in R n of the form We denote by F the vector space on R generated by 1 R , R ∈ R i.e. F is dense in L p (R n , µ) for 1 ≤ p < ∞, with µ a Radon measure on (R n , B(R n )). Proof. To prove that a neural network described as in equation (9) can generate step functions we proceed in two steps. First, we show how we can obtain the indicator functions of orthants from the first layer of the network. Then we show how, starting from these, we can obtain the step functions. An orthant is the analogue in n-dimensional Euclidean space of a quadrant in R 2 or an octant in R 3 . We denote by translated orthant an orthant with origin in a point different from the origin of the Euclidean space O. Let A be a point in the n-dimensional Euclidean space, and let us consider the intersection of n mutually orthogonal half-spaces intersecting in A. By independent selections of half-space signs with respect to A (i.e. to the right or left of A) 2 n orthants are formed. Now we shall see how to obtain translated orthant with origin in in a point A of coordinates (a 1 , a 2 , ..., a n ) from the first layer of the network i.e. For this purpose we can take w i = 1 ∀i ∈ {1, ..., n}. The output of , ..., n} and depends on how many of the n Heaviside functions are activated. We obtain the translated orthant with origin in A by choosing b i = a i ∀i ∈ {1, ..., n}. In fact, The i-th Heaviside is active in the half-space x i ≥ a i delimited by the hyperplane x i = a i that is orthogonal to the i-th axis. Therefore, the Euclidian space R n is divided in 2 n regions according to which value the function n i=1 H(x i − a i ) takes in each region. See Figure 5 for an example in R 2 . There is only one region in which the output of the sum is n, which corresponds to the orthant in which the condition x i ≥ a i ∀i = 1, ..., n holds. We denote it as positive othant (the red colored orthant in the example shown in Figure 5). Going back to equation (9), let us now consider the Heaviside function applied after the sum. As before, we can choose w j = 1. If we take b j > n − 1, the value of the output is 0 for each of the 2 n orthants except for the positive orthant. 
This way, we get the indicator function of the positive orthant. The indicator function of a rectangle in R can be obtained as a linear combination of the indicator function of the positive orthants centered in the vertices of the rectangle. See Figure 6 for an example of the procedure in R 2 . In general, the procedure involves considering a linear combination of indicator functions of positive orthants centered in the vertices of the rectangle in such a way that op-posite values are assigned to the orthants corresponding to adjacent vertices. For example, suppose we want to obtain the indicator function of the right-closed left-open square [0, 1) 2 in R 2 (see the illustration in Figure 6). Denoting by 1 (x P ,y P ) the indicator function of the positive orthant centered in the point of coordinates (x P , y P ), we can write: The numbers in the orthants shows the sum of the indicator functions that are active in that orthant. For instance if x = (x 1 , x 2 ) belongs to the blue part of the plane, i.e. it is true that 0 < x 1 < 1 and Now suppose we want the linear combination of the indicator functions of K rectangles with coefficents α 1 , ...α K . With suitably chosen coefficients the indicator function of a rectangle can be written as The linear combination of the indicator functions of K rectangles with coefficents α 1 , ...α K can be derived as The summation (11) can be written as a single sum, defining a sequence β j = (−1) j α m with m = j 2 n for j = 1, ..., 2 n K. Thus (11) becomes N =2 n K j=1 β j H j that is an equation of form (9). We have therefore shown that for every step function ρ in F there are N ∈ N and α j , w ij , b ij , b j , w j ∈ R such that the sum in equation (9) is equal to ρ. Proof of Theorem 5.1. The theorem follows immediately from Lemma 2 and Proposition 1. Remark 1. In Lemma 2 we proved that a network defined as in equation (9) can represent functions belonging to set F defined as in equation (10). Note that if the input is bounded, we can obtain indicator functions of other kinds of sets. For example, suppose x ∈ [0, 1] n . If we choose w ij = 1 and b ij < 0 ∀i, j and if we choose the weights of the second layer so that they don't operate any transformation, we can obtain the indicator function of [0, 1] n . By a suitable choice of parameters, (9) may also become the indicator functions of any hyperplane x i = 0 or x i = 1 for i ∈ {1, .., n}. Furthermore we can obtain any rectangle of dimension n−1 that belongs to an hyperplane of the form x i = 1 or x i = 0. We have proven in Lemma 2 that a network formulated as in equation (9) can represent step functions. By this property and by Proposition 3 we shall show that it can approximate Lebesgue measurable functions on any finite space, for example the unit n-dimensional cube [0, 1] n . We denote by I n the closed n-dimensional cube [0, 1] n . We denote by M n the set of measurable functions with respect to Lebesgue measure m, on I n , with the metric d m defined as follows. Let be f, g ∈ M n , We remark that d m -convergence is equivalent to convergence in measure (see Lemma 2.1 in (Hornik, Stinchcombe, and White 1989)). Theorem 5.2. The finite sums of the form (9) with N ∈ N and w ij , w j , α j , b ij , b j ∈ R are d m -dense in M n . M n is the set of Lebesgue measurable functions on I n . This means that, given g measurable with respect to the Lebesgue measure m on I n , and given an > 0, there is a sum ψ of the form (9) such that d m (ψ, g) < . Proposition 3. Suppose f is measurable on R n . 
Then there exists a sequence of step functions {ρ k } ∞ k=1 that converges pointwise to f (x) for almost every x. Proof. See Theorem 4.3 p. 32 in (Stein and Shakarchi 2005). Proof of Theorem 5.2. Given any measurable function, by Proposition 3 there exists a sequence of step functions that converge to it pointwise. By Lemma 2 we have that equation (9) can generate step functions. Now m(I n ) = 1 and for a finite measure space pointwise convergence implies convergence in measure, this concludes the prof. Remark 2. Notice that for Theorem 5.2 we don't need the fact that I n , is a closed set. For this theorem in fact it is sufficient that it is a bounded set (so that its Lebesgue measure is finite). The compactness of I n will be necessary for the next theorem. Proof. Let g be a continuous function from I n to R, by the compactness of I n follows that g is also uniformly continuous (see Theorem 4.19 p. 91 in (Rudin 1976)). In other words, for any > 0, there exists δ > 0 such that for every x, x ∈ [0, 1] n such that ||x − x || ∞ < δ it is true that |g(x) − g(x )| < . To prove the statement of Theorem 5.3, let > 0 be given, and let δ > 0 be chosen according to the definition of uniform continuity. As we have already seen in Lemma 2 the neural network described in (9) Suppose that for all j ∈ {1, ..., N } we choose x j ∈ R j , and we set α j = g(x j ). If x ∈ [0, 1] n there is j so that x ∈ R j , hence x satisfies ||x − x j || ∞ ≤ δ, and consequentially |g(x) − g(x j )| ≤ . Therefore the step function h = Theorem 5.4. Let σ be a continuos sigmoidal function. Then the finite sums of the form: with w ij , α j , b ij , b j , w j ∈ R and N ∈ N are dense in C(I n ). In other words, given a g ∈ C(I n ) and given > 0 there is a sum ψ(x) of the form (12) such that Proof. Since σ is a continuous function, it follows that the set U of functions of the form (12) with α j , w ij , b ij , w j , b j ∈ R and N ∈ N is a linear subspace of C(I n ). We claim that the closure of U is all of C(I n ). Assume that U is not dense in C(I n ), let S be the closure of U , S = C(I n ). By the Hahn-Banach theorem (see p. 104 of (Rudin 1987) ) there is a bounded linear functional on C(I n ), call it L, with the property that L = 0 but L(S) = L(U ) = 0. By the Riesz Representation Theorem (see p. 40 of (Rudin 1987)), the bounded linear functional L, is of the form for some signed regular Borel measures µ such that µ(K) < ∞ for every compact set K ⊂ I n (i.e. µ is a Radon measure). Hence, We shall prove that (13) implies µ = 0, which contradicts the hypothesis L = 0. Using the definition of U , equation (13) can also be written as Note that for any w, x, b ∈ R we have that the continuous functions converge pointwise to the unit step function as λ goes to infinity, i.e. By hypothesis is true that for all λ 1 , λ 2 in R It follows that for all λ 2 : Now applying the Dominated Convergence Theorem (see Theorem 11.32 p 321 of (Rudin 1976)) and the fact that σ is continuous: Again, by Dominated Convergence Theorem we have: Hence we have obtained that ∀α j , w ij , b ij , w j , b j ∈ R and ∀N ∈ N The function γ is very similar to the Heaviside function H, the only difference is that H(0) = 1 while γ(0) = σ(φ). Let R i denote an open rectangle, ∂ a R i its left boundary (i.e. the boundary of a left-closed right-open rectangle) and ∂ b R i its right boundary (i.e. the boundary of a right-closed leftopen rectangle). 
Repeating the construction seen in Lemma 2 to obtain rectangles, with the difference that here γ takes value σ(φ) on the boundaries, we get that Every open subset A of I n , can be written as a countable union of disjoint partly open cubes (see Theorem 1.11 p.8 of (Wheeden and Zygmund 2015)). Thus, from the fact that the measure is σ-additive we have that for every open subset A of I n , µ(A) = 0. Furthermore µ(I n ) = 0. To obtain I n from it is sufficient to choose the parameters so that w ij (x i − b ij ) > 0 ∀x i ∈ [0, 1] and so that w j , b j maintains the condition on the input. From the regularity of the measure, it follows that µ is the null measure. tanh-prod IAN Theorem 5.5. The finite sums of the form with w jl , w ijk , α j , b jl , b ijk ∈ R and M j , N, m i ∈ N, are dense in C(I n ). In other words given g ∈ C(I n ) and given > 0 there is a sum ψ(x) defined as above such that Since tanh is a continuous function, it follows that the family of functions defined by equation (14) is a linear subspace of C(I n ). To prove that it is dense in C(I n ) we will use the same argument we used for the continuous sigmoidal functions. This is, called U the set of functions of the form (14), we assume that U is not dense in C(I n ). Thus, by the Hahn-Banach theorem there exists a not null bounded linear functional on C(I n ) with the property that it is zero on the closure of U . By the Riesz Representation Theorem, the bounded linear functional can be represented by a Radon measures. Then using the definition of U we will see that this measure must be the zero measure, hence the functional associated with it is null contradicting the hypothesis. We define To proceed with the proof as in the case of the proof for continuous sigmoidal functions, we need only to understand to what converges the function when λ 1 and λ 2 tend to infinity, and h iλ indicates the processing function related to input i. Once we have shown that for some choice of the parameters they converge pointwise to step function we can use the same argument we used in the proof of Theorem 5.4. The first step is therefore to study the limit of equation (16). Let us focus of the multiplication of tanh in the first layer, given by equation (15). The pointwise limit of h λ (x) for λ → ∞ depends on the sign of the limit of the product of tanh, that in turn depends on the sign of w k (x − b k ) for k ∈ {1, ..., m}. Remark 3. We remark that for x ∈ [0, 1], from the limit of equation (15) we can obtain the indicator functions of set of the form x > b or x < b for any b ∈ R. We just have to choose the parameters in such a way that only one of the tanh in the multiplication is relevant. Let us define there is only one i ∈ {1, ..., m} so that its weight are significant it holds that taking into account that σ(2φ) = 1 2 (tanh(φ) + 1). Proof of Theorem 5.5. Considering Remark 3, the proof of Theorem 5.5 is analogous to that of Theorem 5.4. B Experimental settings All code was written in Python Programing Language. In particular, the following libraries were used for the algorithms: tensorflow for neural networks, scikit-learn for Logistic Regression, Decision Trees and Gradient Boosting Decision Trees. A small exploration was made to determine the best structure of the neural network for each dataset. We used a breadth-first search algorithm defined as follows. We started with a network with just one neuron, we trained it and evaluated its performance. 
At each step, we can double the number of neurons in each layer except the output one or increase the depth of the network by adding a layer with one neuron. For each new configuration, we build a new structure based on it, initialize it and train it. If the difference between the accuracy achieved by the new structure and that of the previous step is lower than 1%, then a patience parameter is reduced by 1. The patience parameter is initialized as 5 and is passed down from a parent node to its spawned children, so that each node has its own instance of it. When patience reach 0, that configuration will not spawn new ones. Before the neural network initialization, a random seed was set in order to reproduce the same results. As for the initialization of IAN, the weights w are initialised using the glorot uniform. For the biases b of the first layer a uniform between the minimum and the maximum of each feature was used, while for the following layers a uniform between the minimum and the maximum possible output from the neurons of the previous layer was used. For the network training, Adam with a learning rate equal to 0.1 was used as optimization algorithm. The loss used is the binary or categorical crossentropy, depending on the number of classes in the dataset. In the calculation of the loss, the weight of each class is also taken into account, which is inversely proportional to the number of samples of that class in the training set. The maximum number of epochs for training has been fixed at 10000. To stop the training, an early stopping method was used based on the loss calculated on the train. The patience of early stopping is 250 epochs, with the variation that in these epochs the loss must decrease by at least 0.01. Not using a validation dataset may have led to overfitting of some structures, so in future work we may evaluate the performance when using early stopping based on a validation loss. The batch size was fixed at 128 and the training was performed on CPU or GPU depending on which was faster considering the amount of data. The Heaviside was trained as if its derivative was the same as the sigmoid. For Decision Trees (DT) and Gradient Boosting Decision Trees (GBDT), an optimisation of the hyperparameters was carried out, in particular for min samples split (between 2 and 40) and min samples leaf (between 1 and 20). For GBDT, 1000 estimators were used, while for DT the class weight parameter was set. For the rest of the parameters, we kept the default values of the python sklearn library. C Datasets 19 out of 23 datasets are publicly available, either on the UCI Machine Learning Repository website or on the Kaggle website. Here we present a full list of the datasets used, correlated with their shortened and full-lenght name, and the corresponding webpage where the description and data can be found. The 4 synthetic datasets of our own creation are composed of 1000 samples with 2 variables generated as random uniforms between −1 and 1 and an equation dividing the space into 2 classes. The 4 equations used are: These datasets are also represented in Figure 7. D Examples Heart dataset -Heaviside IAN The Statlog Heart dataset is composed of 270 samples and 13 variables of medical relevance. The dependent variable is whether or not the patient suffers from heart disease. In Figure 8 you can find the network based on Heaviside IAN trained on the heart dataset. Only the inputs with a relevant contribution to the output are shown. 
From now on, we will indicate with R_{k,j,i} the rule related to the processing function corresponding to the i-th input, of the j-th neuron, of the k-th layer. From the first neuron of the first layer we can easily retrieve the following rules: R_{1,1,1} = x_1 ≤ 54.29, R_{1,1,3} = x_3 ≤ 3.44, R_{1,1,4} = x_4 ≤ 123.99, R_{1,1,5} = x_5 ≥ 369.01, R_{1,1,9} = x_9 ≤ 0.48, R_{1,1,10} = x_10 ≤ 1.22, R_{1,1,11} = x_11 ≤ 1.44, R_{1,1,12} = x_12 ≤ 0.52, R_{1,1,13} = x_13 ≤ 6.26. Table 2: Publicly available datasets, with the short name used in our work, their full-length name and the webpage where data and description can be found. The UCI MLR URL is the following: https://archive.ics.uci.edu/ml/datasets/ The second neuron of the first layer is not shown for lack of space, but its obtained rules are R_{1,2,2} = x_2 ≥ 0.79, R_{1,2,3} = x_3 ≥ 3.59, R_{1,2,4} = x_4 ≥ 99.95, R_{1,2,5} = x_5 ≥ 253.97, R_{1,2,8} = x_8 ≤ 97.48, R_{1,2,9} = x_9 ≤ 0.04, R_{1,2,10} = x_10 ≥ 2.56, R_{1,2,11} = x_11 ≥ 1.53, R_{1,2,12} = x_12 ≥ 0.52, R_{1,2,13} = x_13 ≥ 5.47. Moreover, input x_7 always gives 1, so this must be taken into consideration in the next layer. Moving on to the second layer, we can see in the first neuron that the second input is irrelevant, since the Heaviside is constant. The first processing function activates if it receives an input that is greater than or equal to 2.99. Given that the input can only be an integer, we need at least 3 of the rules obtained for the first neuron of the first layer to be true: R_{2,1,1} = 3-of-{R_{1,1,i}}. Following the same line of reasoning, in the second neuron of the second layer we see that we get R_{2,2,1} = 5-of-{¬R_{1,1,i}} and R_{2,2,2} = 5-of-{R_{1,2,i}} (5 and not 6 because of the x_7 processing function). In the last layer, the first processing function has an activation of around 2.5 if it receives an input that is less than 1.17. This can happen only if R_{2,1,1} does not activate, so we can say: R_{3,1,1} = ¬R_{2,1,1} = 7-of-{¬R_{1,1,i}}. The second processing function gives a value of around −2.5 only if it gets an input less than 0.99, so only if the second neuron of the second layer does not activate. This means that R_{2,2,1} and R_{2,2,2} must both be false at the same time, so we get R_{3,1,2} = ¬R_{2,2,1} ∧ ¬R_{2,2,2} = 5-of-{R_{1,1,i}} ∧ 6-of-{¬R_{1,2,i}}. Now there are 4 cases for the sum, i.e. the combinations of the 2 activations: {0 + 0, 2.5 + 0, 0 − 2.5, 2.5 − 2.5} = {−2.5, 0, 2.5}. Given that both have around the same value for the α parameter, the set is reduced to two cases. Looking at the processing function, we can see that it is increasing with respect to the input, so since α_1 is positive, we can say that rule R_{3,1,1} is correlated to class 1, while rule R_{3,1,2}, having a negative α_2, has an opposite correlation. Looking at its values, we can see that for both 0 and 2.5 as inputs, the activation function gives an output greater than 0.5. If we consider this as a threshold, we can say that only for an input of −2.5 do we get class 0 as the prediction. This happens only if R_{3,1,2} is true and R_{3,1,1} is false. Summarizing, we get R_0 = R_{3,1,2} ∧ ¬R_{3,1,1} = 5-of-{R_{1,1,i}} ∧ 6-of-{¬R_{1,2,i}} ∧ 3-of-{R_{1,1,i}} = 5-of-{R_{1,1,i}} ∧ 6-of-{¬R_{1,2,i}}, so that we can say "if R_0 then the predicted class is 0, otherwise it is 1".
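To make the extracted logic executable, the sketch below (our own illustration, not code from the paper) encodes the first-layer rules listed above and the final decision rule R_0 as plain Python; the attribute indices follow the ordering of the 13 Statlog Heart features used above.

```python
# Final decision rule R_0 extracted above, as a plain Python function.
# x is a dict mapping the attribute index (1-13) to its value.
R_1_1 = [
    lambda x: x[1] <= 54.29,  lambda x: x[3] <= 3.44,
    lambda x: x[4] <= 123.99, lambda x: x[5] >= 369.01,
    lambda x: x[9] <= 0.48,   lambda x: x[10] <= 1.22,
    lambda x: x[11] <= 1.44,  lambda x: x[12] <= 0.52,
    lambda x: x[13] <= 6.26,
]
R_1_2 = [
    lambda x: x[2] >= 0.79,   lambda x: x[3] >= 3.59,
    lambda x: x[4] >= 99.95,  lambda x: x[5] >= 253.97,
    lambda x: x[8] <= 97.48,  lambda x: x[9] <= 0.04,
    lambda x: x[10] >= 2.56,  lambda x: x[11] >= 1.53,
    lambda x: x[12] >= 0.52,  lambda x: x[13] >= 5.47,
]

def predicted_class(x):
    # R_0 = 5-of-{R_1,1,i}  AND  6-of-{not R_1,2,i}; class 0 if R_0 holds.
    first = sum(r(x) for r in R_1_1) >= 5
    second = sum(not r(x) for r in R_1_2) >= 6
    return 0 if (first and second) else 1
```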
Although we are not competent to analyse the above results from a medical perspective, it is interesting to note for example that the variables x 1 and x 4 , representing age and resting blood pressure respectively, are positively correlated with the presence of a heart problem. Xor -sigmoid IAN Our custom xor dataset divides the 2D plane in quadrants, with the opposites having the same label. The network based on sigmoid IAN trained on xor dataset is represented in Figure 9. As we can see, all the processing functions of the first layer converged to nearly the same shape: a steep inverted sigmoid centered in 0. Therefore, we can say the rules obtained are R 1,1,1 = R 1,2,1 = x 1 ≤ 0 and R 1,1,2 = R 1,2,2 = x 2 ≤ 0. In the last layer, the first processing function has a value of about −15 for inputs in [0, 1], then it starts growing slowly to reach almost 0 for an input of 2. This tells us that it doesn't have an activation if both rules of the first neuron are true, so if x 1 ≤ 0 ∧ x 2 ≤ 0. On the other hand, the second processing function has no activation if its input greater than 1, that happens for example if we have a clear activation from at least one of the inputs in the second neuron of the first layer. So looking at it the opposite way, we need both those rules to be false . Since this rule describes the opposite to xor, for class 1 we get the exclusive or logical operation. Iris datasettanh-prod IAN A dataset widely used as a benchmark in the field of machine learning is the Iris dataset. This contains 150 samples, divided into 3 classes (setosa, versicolor and virginica) each representing a type of plant, while the 4 attributes represent in order sepal length and width and petal length and width. In Figure 10 you can see the final composition of the network generated with the tanh-prod2 IAN neuron. Considering the first neuron of the first layer, we see that it generates the following fuzzy rules: R 1,1,2 = x 2 > 3.08 (sepal width), R 1,1,3 = x 3 < 5.14 (petal length) and R 1,1,4 = x 4 < 1.74 (petal width). For the first attribute (sepal length) it does not generate a clear rule, but forms a bell shape, reaching a maximum of 0.5. This tells us that x 1 is less relevant than the other attributes, since, unlike the other processing functions, it does not reach 1. The second neuron has an inverse linear activation for the first attribute, starting at 0.7 and reaching almost 0. The second attribute also has a peculiar activation, with an inverse bell around 2.8 and a minimum value of 0.4. The third and fourth attributes have clearer activations, such as R 1,2,3 = x 3 < 2.51 and R 1,2,4 = x 4 < 1.45. The fact that petal length and width are the ones with the clearest activations and with those specific thresholds are in line with what has previously been identified on the Iris dataset by other algorithms. We denote by y k,j the output of the j-th neuron of the k-th layer. Moving on to the second layer, the first neuron generates the rules "if y 1,1 < 1.83" and "if y 1,2 < 2.66", while the second one generates "if y 2,1 > 2.08" and "if y 2,2 > 2.22". 
Combined with what we know about the previous layer, we can deduce the following: y 1,1 is less than 1.83 only if the sum of the input activation functions is less than 1.83, which only happens if no more than one of the last three rules is activated (0 + 1 + 0 < 1.83), while the first one, even taking its maximum value, is discriminative only when the input of one of the other rules is close to the decision threshold (0.5 + 1 + 0 + 0 < 1.83, while 0.5 + 1 + 0.5 + 0 > 1.83). For y 1,2 < 2.66, there are more cases. We can divide the second processing function of the second neuron of the first layer in two intervals: one for which x 2 < 3.2 and the other when x 2 ≥ 3.2. In the first interval, the processing function gives a value that is less than 0.66, greater in the second one. With this, we can say that y 1,2 < 2.66 even if R 1,2,3 and R 1,2,4 activates, if x 2 < 3.2 and x 1 is near its maximum. In the second neuron of the second layer, the first processing function is nearly the exact opposite to that of the other neuron; we need at least two of R 1,1,2 , R 1,1,3 or R 1,1,4 to be true, while R 1,1,1 still doesn't have much effect. The second processing function gives us y 1,2 > 2.22. Considering that the minimum for the processing function related to x 2 is 0.4, we may need both rules R 1,2,3 and R 1,2,4 to be true to exceed the threshold, or just one of them active and x 1 to take on a low value and x 2 to be a high value. For the last layer, remember that in this case since there are more than 2 classes, a softmax function is used to calculate the output probability, hence the arrows in the figure that join the layers of the last layer. For the first output neuron, in order to obtain a clear activation, we need the first input to be less than 0.46 and the second greater than 1.42. This is because the α i are 3 and −8, and the output activation function starts to have an activation for values greater than −2. This means that the first neuron of the second layer should hardly activate at all, while the other should activate almost completely. Considering the thresholds for y 1,1 and y 1,2 , we need the first to be greater than 2.08 and the other to be greater than 2.66. So R 3,1,1 = 2 − of − {x 2 > 3.08, x 3 < 5.14, x 4 < 1.74}. For R 3,1,2 is more tricky to get a clear decision rule, but we can say that we may need both R 1,2,3 and R 1,2,4 to be true and x 2 ≥ 3.2. If x 2 < 3.2, we need x 1 to not be near its maximum value. If just one of those two rules is true, we need x 2 < 3.2 and x 1 near 4, or x 2 > 3.2 but with a (nearly) direct correlation with x 1 , such that the more x 1 increases, the same does x 2 . In the second output neuron, the second processing function is negligible, while the first one forms a bell shape between 1 and 2. This means that it basically captures when y 2,1 has a value of approximately 1.5, so when the decision is not clear. This is what gives this neuron maximum activation. In the third and last output layer, since the first processing function has a negative α parameter and the activation function is increasing with respect to the input, we want it to output 0, and this requires maximum activation for the first neuron of the second layer. Regarding the second processing function, we want it to output 8, so we need nearly no activation from the second neuron of the second layer. So we need the first neuron of the first layer to output a value lower than 1.83 and the second neuron to output a value lower than 2.22. 
This means that no more than one rule R 1,1,i needs to be active and at most two rules of R 1,2,i need to be true. We can conclude by saying that both neurons of the first layer are positively correlated with class 1, while they are negatively correlated with class 3. This means that low values of x 3 and x 4 , or high values of x 2 increase the probability of a sample to belong to class 1, while x 1 has almost no effect. For class 2, what we can say is that it correlates with a non-maximum activation of both neurons of the first layer, meaning that it captures those cases in which the prediction of one of the other classes is uncertain.
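The per-input, per-neuron inspection used throughout these examples can be partly automated by sampling each trained processing function over its input range and labelling its shape, along the lines of the Summary of Interpretation section. The sketch below is our own illustration, not part of the paper's pipeline, and the thresholds in the heuristic are arbitrary choices.

```python
import numpy as np

def classify_shape(h, lo, hi, n=200, tol=0.05):
    """Heuristic reading of a single trained processing function h over the
    input range [lo, hi]: 'constant', 'step-like', 'monotone', or 'bell'."""
    xs = np.linspace(lo, hi, n)
    ys = np.array([h(x) for x in xs])
    span = ys.max() - ys.min()
    if span < tol:
        return 'constant'          # the input has (almost) no influence
    diffs = np.diff(ys)
    if np.all(diffs >= -1e-9) or np.all(diffs <= 1e-9):
        # Monotone: if the transition occupies only a small slice of the
        # range, read it as a crisp threshold rule (x > b or x < b).
        mid = np.sum((ys > ys.min() + 0.1 * span) & (ys < ys.max() - 0.1 * span))
        return 'step-like' if mid < 0.1 * n else 'monotone'
    return 'bell'                  # non-monotone: interval / quadratic-style rule

# Example: a sigmoid processing function with a sharp transition at b = 1.5.
sig = lambda x: 1.0 / (1.0 + np.exp(-20.0 * (x - 1.5)))
print(classify_shape(sig, 0.0, 4.0))   # -> 'step-like'
```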
In Situ Text Summarisation for Museum Visitors . This paper presents an experiment on in situ summarisation in a museum context. We implement a range of standard summarisation algorithms, and use them to generate summaries for individual exhibit areas in a museum, intended for in situ delivery to a museum visitor on a mobile device. Personalisation is relative to a visitor’s preference for summary length, the visitor’s relative interest in a given exhibit topic, as well as (optionally) the sum-mary history. We find that the best-performing summarisation strategy is the Centroid algo-rithm, and that content diversification and customisation of summary length have a significant impact on user ratings of summary quality. Introduction With the increasing saturation of mobile technology, museums and other cultural heritage institutions are increasingly looking to deliver content to visitors via their personal mobile device.This has led to a move away from a traditional mode of content delivery via static information on placards in the museum space, to interactive applications on mobile devices supporting path finding, social networking and personalised content delivery (Burnette et al., 2011;Filippini-Fantoni et al., 2011). This paper explores the feasibility and utility of in situ personalised content delivery in a museum context, focusing on document summarisation.Museums provide a compelling context for in situ summarisation, as visitors often wish to access key information relevant to their immediate surroundings, but want to avoid information overload.Personalised summarisation is an integral component of museum content delivery, as exhibits are typically associated with vast amounts of curated information, predominantly in textual form.This can range from simple tabular information such as the date of acquisition of an exhibit, to full-length research articles published by museum curators/researchers relating to the exhibit.Personalised summarisation offers the possibility to present the most salient facets of information to a visitor, according to their interests and preferences.The pragmatic choice of a mobile device such as a smart phone to deliver the content poses challenges in terms of the amount of content that can be effectively presented to the visitor (Yang and Wang, 2003;Otterbacher et al., 2006). Personalised summarisation should ideally be coupled with tracking/geolocating technology to be situation aware (Bohnert et al., 2008;Bohnert and Zukerman, 2009;Bickersteth and Ainsley, 2011) and take place interactively (Callaway et al., 2005).In principle, any evaluation should take place in situ as part of an actual museum visit.However, in this preliminary research, we present the results of a web-based user study targeted at members of Melbourne Museum (Melbourne, Australia), based on a fixed path through the museum.To partly overcome this limitation, we elicit the participants' interest in an exhibit topic, and generate a summary which takes into account this interest level.In this way, we examine the impact of interest level on summary length and different summarisation strategies, as a guide for future research. 
Our contributions are: (1) we deploy a range of extractive summarisation methods over a fixed path through a museum, focusing on generating summaries for individual exhibit areas, personalised in length according to visitor preferences and interest levels, and diversification of summary content; (2) we carry out a medium-scale user study over the generated summaries, to determine the relative utility of the summaries and the effectiveness of the various strategies trialled; and (3) we present the results of a web-based museum visitor questionnaire on opportunities for in situ personalisation in a museum environment. User Study The goal of this research is to investigate whether automatic text summarisation techniques can be harnessed for in situ personalised content delivery in a museum.We explore this in a two-part webbased survey, where we: (1) presented participants with a questionnaire regarding their interest in mobile device-based personalised content delivery in a museum; and (2) asked participants to rate automatically generated summaries for individual exhibit areas in the museum space.The survey was advertised to members of Melbourne Museum via the official e-newsletter.It was completed outside the museum, but the participant group can reasonably be expected to be very familiar with the museum.In total, we received 34 valid responses to the survey, which form the basis of the results in this paper. Questionnaire on Personalised Content Delivery Survey participants were provided with a brief description of the project, asked a few questions on demographics and their museum visiting patterns, and then asked specific questions about personalised content delivery. The broad findings of the poll are as follows: (1) over two thirds of participants are very likely or likely to actively interact with a mobile device in the museum for personalised content delivery, and would consent to being tracked for user modelling purposes; (2) two thirds or more of the participants are likely to make use of images, videos or sound bites on the mobile device, but participants are considerably less likely to use (museum-related) games or (external) websites; and (3) most participants are at least likely to use recommendations across a range of content types during a visit, excluding recommendations of games.These survey findings show that there is broad support and scope for personalised content delivery within the museum context. Participants were also provided with the facility to provide comments.Specific reasons cited for being interested in personalised content delivery (with or without tracking) were: the desire for personalised learning, difficulties in accessing placards in crowded exhibitions, and frustration with "unfathomable" static displays.Primary reasons for participants not being interested in mobile devices were their unsuitability for smaller children (especially in a group setting), the feeling that mobile devices would be a distraction, and also concerns over data privacy and security.There were also comments suggesting that a personalised post-visit follow-up on topics of interest was superior to in situ content delivery. In sum, participants were generally supportive of in situ personalised content delivery of various types, lending support to the basic premise of this research. 
Rating of Summaries The second stage of the web survey was to rate the quality of automatically generated summaries. Participants were first presented with three manually generated summaries of varying length (short ≈ 50 words; mid-length ≈ 75 words; long ≈ 100 words) for the Dinosaur Walk exhibit area in Melbourne Museum, and asked to state their preference for summary length relative to the pre-prepared summaries. This was intended to reduce the effect of summary length bias on summary preference. Next, participants were given the following instructions: Imagine yourself in front of a series of exhibits at Melbourne Museum. First you will be asked about your relative interest level in an exhibit. Assuming a non-zero response (i.e. you have some interest in the exhibit area), you will be presented with three descriptions of the exhibit, and asked to rate each one. Your rating should reflect your response to the quality of the content as well as the language used. The participants were then shown four exhibit areas in Melbourne Museum. For each exhibit area, they were given an indication of its location on a map, and a photo of the exhibit area. The rating of interest level in each exhibit area was on a scale of 0 ("Not interested at all") to 3 ("Extremely interested"). If a participant indicated a non-zero interest level in an exhibit area, they were provided with three summaries and asked to rate each on a scale of 1 ("Horrible") to 5 ("Excellent"); if the participant indicated no interest, they were taken directly to the next exhibit area without being shown any summaries. Summary lengths were tailored to both the indicated preference for summary length, and the relative interest level in the exhibit area (Section 3.4). All participants were shown the same four exhibit areas in the same order. The first two exhibit areas were from the same gallery ("Bugs Alive") with a strong thematic connection relating to insects. The next two exhibit areas were deliberately selected from galleries that have little connection with any of the other three exhibit areas (relating to deep sea life and horse racing, respectively). This was done to explore the impact of exhibit area "theming" (i.e., thematic relation) on content differentiation, as described in Section 3.2 (with the first pairing of exhibit areas being themed, and the second two pairings being "unthemed"). Automatic Summarisation The setting where we apply automatic summarisation is characterised by the following properties:
• Per-exhibit sequenced summaries: a stand-alone summary is generated for each exhibit area, for presentation to the visitor in situ, in sequence of the order in which the exhibit areas are visited; each summary is personalised to the summary length preference and interest level of the participant, and optionally based on "content diversification" over preceding summaries.
• Short summaries: the summaries need to be relatively short, in order to display them effectively on the small screen of a mobile device.
• Single primary document: the amount of text associated with a given exhibit area varies, but is generally of the order of slightly over 1000 words, in a single primary document authored by museum curators; this is complemented with a secondary document from Wikipedia which is anywhere from 600 to slightly over 3000 words in length.
The secondary Wikipedia document is determined by manual alignment for each exhibit area. It is intended to be the document of best fit within Wikipedia, which can vary from a Wikipedia page dedicated to the exact museum artefact to an article which is only thematically related (e.g., an article on gold mining, to represent a historical diorama of a particular gold mine). Our motivation in including these secondary documents is to reduce data sparseness in the sentence ranking, without allowing the content of the Wikipedia article to be included in the summary, as the article often diverges significantly from the context of presentation of the exhibit area. Our summarisation task is differentiated from conventional document summarisation in: (1) the segmentation into discrete exhibit areas, and generation of short individuated summaries per exhibit area for in situ delivery; (2) the relative sparsity of text (multi-document summarisation is typically based on at least 10 documents); and (3) the interaction between the physical movements of the visitor and summary generation, in terms of the sequentiality of the exhibit area summaries (mirroring the physical path through the museum), and potential interplay between the summaries generated for different exhibit areas. A primary interest in this research is the determination of the utility of established multi-document summarisation algorithms for our novel summarisation task. In the following sections, we outline the summarisation algorithms trialled in this research, describe how we personalise summary length, and outline a simple method for content diversification, to avoid repetition in the summaries for individual exhibit areas. Summarisation Algorithms For our experiments, we implemented five standard extractive summarisation algorithms from the literature (Radev et al., 2002):
• First-N sentence algorithm (FIRSTN): select the first N sentences of each document.
• Lead-based algorithm (LEAD): select the first N sentences of each paragraph.
• Centroid algorithm (CENTROID): cluster the document collection using a variant of TF•IDF, and rank sentences through a weighted sum of token weights based on the cluster centroid, a sentence positional weight, and similarity with the first sentence (Radev et al., 2000).
• LexRank algorithm (LEXRANK): cluster the document collection, and rank sentences using a variant of PageRank (Brin and Page, 1998) over the component words (Erkan and Radev, 2004).
• Manifold-ranking algorithm (MANIFOLD): score each sentence based on a manifold-ranking process, and rerank sentences based on a diversity penalty (Wan et al., 2007).
Each algorithm was run over the primary and secondary document for a given exhibit area. All sentences were ranked, but only sentences from the primary document (that authored by Melbourne Museum) were candidates for selection in the final summary. In this sense, the task we are performing is not, strictly speaking, multi-document summarisation so much as document condensation, using a secondary document (and optionally summary context) to bias the sentence selection. Any findings on the relative successes of the summarisation algorithms should be interpreted accordingly.
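To make the ranking step concrete, the following is a minimal Python sketch of a Centroid-style scorer in the spirit of Radev et al. (2000). The TF-IDF weighting, the naive sentence splitter and the equal weighting of the three score components are simplifying assumptions for illustration rather than the configuration used in this study; as in the setup described above, only sentences from the primary document are retained as candidates.

```python
# Minimal sketch of a Centroid-style sentence ranker (in the spirit of Radev et al., 2000).
# The weighting scheme and equal combination weights are illustrative assumptions.
import math
import re
from collections import Counter

def split_sentences(text):
    # Naive splitter; a deployed system would use a proper sentence tokenizer.
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

def tokens(sentence):
    return re.findall(r"[a-z]+", sentence.lower())

def centroid_rank(primary_doc, secondary_doc, top_k=30):
    prim_sents = split_sentences(primary_doc)
    all_sents = prim_sents + split_sentences(secondary_doc)

    # TF * IDF-style token weights over the combined collection,
    # treating each sentence as a pseudo-document.
    tf = Counter(t for s in all_sents for t in tokens(s))
    df = Counter(t for s in all_sents for t in set(tokens(s)))
    n = len(all_sents)
    weight = {t: tf[t] * math.log(n / df[t]) for t in tf if df[t] < n}

    # The centroid is the set of highest-weighted terms in the collection.
    centroid = dict(sorted(weight.items(), key=lambda kv: kv[1], reverse=True)[:top_k])
    first_tokens = set(tokens(prim_sents[0])) if prim_sents else set()

    scored = []
    for i, sent in enumerate(prim_sents):          # only primary sentences are candidates
        toks = set(tokens(sent))
        centroid_score = sum(centroid.get(t, 0.0) for t in toks)
        positional = 1.0 / (i + 1)                 # earlier sentences receive a boost
        first_overlap = len(toks & first_tokens) / (len(first_tokens) or 1)
        scored.append((centroid_score + positional + first_overlap, sent))
    return [sent for _, sent in sorted(scored, key=lambda x: x[0], reverse=True)]
```

Calling centroid_rank(primary_text, wikipedia_text) returns the primary-document sentences in descending score order, from which summaries of the required lengths can then be assembled.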
Content Diversification The task of generating in situ content for museum visitors on an exhibit-by-exhibit basis has an inherent segment granularity (summarise one exhibit area at a time) and chronological order, neither of which is found in generic multi-document summarisation tasks. Exhibit areas vary greatly in similarity (Grieser et al., 2011), and for closely-related exhibit areas (e.g., those found in the same gallery, as is the case with our "themed" exhibit area pairing), there is significant potential for generating overlapping content. Due to the strict chronological ordering, we can of course access content previously delivered to the visitor, and personalise the summary to ensure "content diversity", akin to result diversification in personalised web search (Radlinski and Dumais, 2006). That is, we can reduce redundancy in the information presented to the museum visitor by explicitly dispreferring sentences similar to those the visitor has already seen. We adopt the following rather simple approach: for preceding exhibit area pairs, we bias the sentence ranking by including the summary for the preceding exhibit area as an extra secondary document. Similarly to the Wikipedia document, sentences from this summary are included in the sentence ranking, but cannot be included in the final summary. This has the effect of demoting sentences for the current exhibit area which are very similar to those for the preceding exhibit area, hence leading to diversification. Pronoun Filtering Many sentences in the museum documents associated with a given exhibit area contain personal or possessive pronouns, which may not be resolvable out of context, or may resolve to an unintended antecedent. To avoid this, we considered the following strategies: (1) remove all sentences containing pronouns from the sentence ranking step, but include them in the clustering step (to avoid exacerbating the effects of data sparseness); and (2) allow sentences containing pronouns, but recursively include the preceding sentence from the original document if a selected sentence includes a pronoun. Personalisation and Summary Length In prior work, Berkovsky et al. (2008) found a strong correlation between summary length and interest level, i.e., the more interested a user is in a topic, the greater the likelihood they will prefer a longer summary. To explore this effect further, we first asked participants to state their overall preference for summary length (short, medium or long), prior to presenting them with any exhibit area summaries. For each exhibit area, we generated summaries of varying length for each of three interest levels (1, 2 or 3) and the three summary length preferences, as indicated in Table 1. For each algorithm (optionally in combination with content diversification and pronoun filtering), we fashioned a summary of each of the indicated lengths by monotonically adding sentences from the sentence ranking determined by the summarisation strategy, and selecting the summary which best approximated the required summary length.
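The length-personalisation step described above can be sketched as follows: candidate summaries are built by monotonically adding sentences in rank order, and the candidate whose length is closest to the personalised word budget is kept. The TARGET_WORDS table is a hypothetical stand-in for the paper's Table 1 (only the 50/25/75-word example given in the next section is grounded in the text), and the function names are illustrative.

```python
# Sketch of summary-length personalisation: add sentences monotonically in rank order
# and keep the candidate closest to the personalised word budget.
# TARGET_WORDS is a hypothetical stand-in for Table 1, not the actual values.
TARGET_WORDS = {
    ("short", 1): 25, ("short", 2): 40, ("short", 3): 55,
    ("mid", 1): 50, ("mid", 2): 65, ("mid", 3): 80,
    ("long", 1): 75, ("long", 2): 90, ("long", 3): 105,
}

def word_count(sents):
    return sum(len(s.split()) for s in sents)

def build_summary(ranked_sentences, length_pref, interest_level):
    """Return the prefix of the ranking whose length best approximates the target."""
    target = TARGET_WORDS[(length_pref, interest_level)]
    best, best_diff, chosen = [], float("inf"), []
    for sent in ranked_sentences:
        chosen = chosen + [sent]
        diff = abs(word_count(chosen) - target)
        if diff < best_diff:
            best, best_diff = list(chosen), diff
    return best
```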
When presenting a participant with the summaries for a given exhibit area, we show them a summary of the requested length, in addition to a shorter and longer summary, to determine the relative impact of variance in length on their summary ratings. For example, if a visitor's summary length preference were "mid-length" and they indicated an interest level of 1 ("Have a tiny bit of interest") in a given exhibit area, a summary of length 50 words would be selected; a summary of length 25 and a summary of length 75 words would then be added (for a total of three summaries per exhibit area). Our motivation for varying the summary lengths in this manner was to explore the interaction between summary length and perceived quality for different summarisation strategies, hoping to validate the findings of Berkovsky et al. (2008). Naturally, in a fully deployed in situ summarisation system, we would hope to dynamically learn the level of user interest (Bohnert and Zukerman, 2009) and customise the summarisation length accordingly. In our current research, we simply hope to establish the need for user interest prediction for the purposes of summary length personalisation. Summary Selection To recap, we generate summaries based on five summarisation algorithms, optional pronoun exclusion, and optional content diversification, for a total of 20 basic summarisation configurations over nine possible summary lengths. These are evaluated over four separate exhibit areas, for each of which we present three summaries of differing length; one pairing of the four exhibit areas is themed, and two are unthemed. All summaries were pre-generated to minimise time lag in the trial. Due to content diversification being conditioned on the previous summary, the total number of pre-generated summaries was 180 + 180² + 180³ + 180⁴ = 1,055,624,580. To expose each participant to as many summarisation configurations as possible, over their visit, we performed random selection without replacement over the 20 summarisation configurations. Additionally, for each exhibit area, we randomly varied the order in which the "correct" length summary vs. the short and long summaries were presented to the visitor. Clustering of Summarisation Configurations To determine the relative differentiation in content between the pre-generated summaries for each exhibit area, we performed a pairwise summary comparison using the ROUGE-2 metric (Lin and Hovy, 2003). For each pairing of the 20 summarisation configurations, we averaged across the different summary lengths and exhibit areas to generate an overall similarity. Based on these similarity values, we clustered the summarisation configurations using "oblivious" hierarchical agglomerative clustering over the three attributes of summarisation algorithm (binarised into the individual algorithms), pronoun filtering and content diversification. That is, we calculated the single attribute which leads to the (weighted) purest partitioning of the data at each level of the dendrogram in a bottom-up fashion.
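The pairwise comparison of configurations can be illustrated with the sketch below. It uses a simplified bigram-recall approximation of ROUGE-2 and off-the-shelf average-linkage clustering from SciPy, rather than the "oblivious" attribute-based clustering described above, so it should be read as an illustration of the comparison step, not a reimplementation of it.

```python
# Sketch of pairwise configuration comparison with a simplified (bigram-recall) ROUGE-2,
# followed by standard average-linkage clustering as an illustrative substitute for the
# "oblivious" attribute-based clustering used in the paper.
import numpy as np
from scipy.cluster.hierarchy import linkage
from scipy.spatial.distance import squareform

def bigrams(text):
    toks = text.lower().split()
    return set(zip(toks, toks[1:]))

def rouge2_recall(candidate, reference):
    ref = bigrams(reference)
    return len(bigrams(candidate) & ref) / (len(ref) or 1)

def cluster_configurations(summaries):
    """summaries: dict mapping configuration name -> generated summary text."""
    names = list(summaries)
    n = len(names)
    sim = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            # symmetrise by averaging the two directions of the recall measure
            sim[i, j] = 0.5 * (rouge2_recall(summaries[names[i]], summaries[names[j]])
                               + rouge2_recall(summaries[names[j]], summaries[names[i]]))
    dist = 1.0 - sim
    np.fill_diagonal(dist, 0.0)
    return names, linkage(squareform(dist, checks=False), method="average")
```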
Overall, the greatest differentiating factor was the choice of summarisation algorithm, followed by the inclusion/exclusion of sentences with pronouns, and finally, content diversification. Comparing the individual algorithms, LEAD was the most different to the other four algorithms, and CENTROID and FIRSTN generally produced very similar summaries (all irrespective of the pronoun and content differentiation settings). MANIFOLD without pronouns produced summaries similar to CENTROID, while MANIFOLD with pronouns was more differentiated. User Study As stated in Section 2, we received a total of 34 valid responses to the web survey. In terms of the summary ratings, this amounted to up to 12 rated summaries per participant, and a total of 357 summary ratings. This breaks down into two overlapping subsets: 62 summary ratings for the "themed" exhibit area pairing, and 326 ratings for the two "unthemed" exhibit area pairings. The different factors that potentially impact on the summary ratings are as follows:
• Actual summary length (act length ∈ {−1, 0, 1}): the selected summary length, based on the summary length preference, exhibit area interest level, and the shorter ("−1") and longer length ("1") variations over the selected length.
To investigate the interaction between the various factors and the summarisation ratings, as well as the interactions between the factors, we perform a factorial ANOVA (Tabachnick and Fidell, 2006) for each combination of factors over the participants' ratings. In this, we perform separate factorial ANOVAs for each of: (1) the overall set of exhibit areas ("All"); (2) the themed exhibit areas ("Themed"); and (3) the unthemed exhibit areas ("Unthemed"), to investigate whether thematic relations between exhibit areas have any effect on the relative impact of the factor combinations.
Table 2: Top-15 combinations of factors for "All" exhibit areas, based on factorial ANOVA, compared to the subset of "Themed" exhibit areas and the subset of "Unthemed" exhibit areas (boldfacing indicates a statistically significant combination of factors, p ≤ 0.05).
Table 2 shows the top-15 factor combinations for "All" exhibit areas (i.e., the factor combinations with the lowest p values). Not all feature combinations are represented in the "Themed" data, leading to gaps in the table (labelled as "NaN"). It is worth noting that there are no statistically significant (p ≤ 0.05) factor combinations for "Themed" and "Unthemed" outside the top-15 factor combinations for "All" presented in the table. Looking first at the individual factors, we see that indicated is the only factor that universally affects a participant's rating (for all of "All", "Themed" and "Unthemed"). This suggests that, despite the instructions provided to the participants, they found it hard to judge the intrinsic quality of the summaries independently of their relative interest level in the exhibit area. This effect was particularly notable for the CENTROID algorithm (Figure 1).
Content diversification (diversification) has a significant impact on "All" and "Unthemed" summaries, but no impact on "Themed" ones (p = 0.54 in isolation). Analysis of the data indicates that this is due to diversification negatively affecting unthemed exhibit area pairings (which account for the majority of the data in "All"), but having negligible impact on themed exhibit area pairings. Manual analysis of the summaries for the unthemed exhibit area pairs indicates that this negative impact may be due to diversification artificially removing lead sentences (because of the boilerplate structure of the curated documents); the effect was felt less for themed exhibit area pairs, as even if the lead sentence for the second exhibit was dropped, the similarity in content with what had already been presented to the participant meant that the summary was still coherent. It is disappointing that diversification has no impact on the visitors' assessment of themed exhibit summaries, despite the fact that it has a noticeable effect on the summaries themselves, by incorporating extra content (and removing content that overlaps with that provided for the preceding exhibit area, including the lead sentence). However, as participants rated the quality of the summaries, rather than the cumulative novel content delivered over the series of the summaries, this difference was not reflected in the ratings. Summary length preference (pref length) has a significant impact on "All" and "Unthemed", but not "Themed". Analysis of the data shows that for "Themed", the choice of algorithm has a highly variable effect on a participant's rating, particularly for participants who indicated they preferred longer summaries (with CENTROID performing strongly), while with the other two datasets, the effects were more consistent. Whether this is an effect of data sparseness for the "Themed" data or can be replicated over other pairings of themed exhibit areas is left for future work. None of pronoun, act length and algorithm had a significant impact on the participants' ratings. The fact that act length had no impact on results is surprising given the high impact of indicated, suggesting that the participants' ratings were biased heavily by their relative interest level and don't reflect subtle variations in summary length. It would be interesting to carry out follow-up experiments with greater differentiation in summary length, and a clearer separation between the rating of exhibit area interest level and intrinsic summary quality.
Table 3: Estimated marginal means for individual summarisation algorithms, with 95% confidence interval.
Table 3 shows the breakdown of estimated marginal means across algorithms (based on "All"). It is evident that CENTROID and MANIFOLD rate better on average than the other methods, but not statistically significantly (p > 0.05). Despite strong results in other contexts (Erkan and Radev, 2004), LEXRANK was, surprisingly, the worst performer. It is harder to tease out any strong trend from the combinations of factors which show up as having a significant impact on ratings, other perhaps than algorithm and indicated tending to have a significant impact when combined (including other factors). Also, the impact of pref length tends to be dampened when combined with other factors.
Reflecting back on our original question of whether automatic summarisation methods can be applied to personalised content delivery in a museum domain, the answer would appear to be yes, in that the best-performing methods produced summaries which were rated around 3.5 on average (with 3 indicating "Not bad" and 4 "Good"). Contrary to our clustering results from Section 4.1, the choice of algorithm was found to have no significant effect on summary rating, with indicated, pref length and diversification having a greater individual impact on summary quality. Finally, the correlation observed by Berkovsky et al. (2008) between interest level and preferred summary length was not strongly evident in our results. There has been work on personalized document search and summarisation in the medical domain for clinicians and patients, based on semantically-enhanced extraction of snippets and terminology standardisation (McKeown et al., 2001; McKeown et al., 2003). Here, however, the personalisation was at the level of two discrete user profiles, and not truly individualised. Radev et al. (2001) present the design of a search engine which supports recommendation, clustering, and personalised summarisation, but do not include any technical details or evaluation. Goren-Bar and Prete (2005) report on a preliminary situated experiment on content delivery in a museum, but found that the simple static text delivery method was preferred to the adaptive method. Berkovsky et al. (2008) present a method for generating summaries of different length, and demonstrate a correlation between the level of user interest in a topic area, and the preferred summary length. However, their method relies on hand-compiled summaries of expanding length, and no attempt was made to automate the summary generation. The content differentiation aspect of our work is somewhat related to the TAC 2008 Update Summarization Task (Dang and Owczarzak, 2008), where participants were provided with a set of 10 documents and a pre-prepared summary, in addition to a set of 10 update documents containing new information, from which a summary was to be generated. In the TAC 2008 task, the update summary was for the same topic as the original summary, and the task was specifically to highlight novel content. In our setup, the exhibit areas overlap in content to varying degrees (depending on theming), and the relative importance of differentiation is more subtle (partly because the museum visitor sees the summaries in discrete chunks, in the context of different exhibit areas). Conclusions We have presented a user study on personalised summarisation for a museum visit. We implemented a range of summarisation algorithms, and used them to generate summaries for individual exhibit areas in a museum, intended for in situ delivery to a museum visitor on a mobile device. Personalisation took the form of adjustment of summary length on the basis of the visitor's indicated interest level in a given exhibit area, as well as (optionally) diversification over previously delivered summaries. We found that museum visitors are largely supportive of personalised content delivery, and explored the impact of a range of summarisation parameters on visitors' ratings of summaries.
Figure 1: Graph of indicated vs. user rating for the different summarisation algorithms.
Table 1: Summary lengths in number of words.
2014-10-01T00:00:00.000Z
2011-12-01T00:00:00.000
{ "year": 2011, "sha1": "f294346c2578f99da621d48286131f85481cee01", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "ACL", "pdf_hash": "f294346c2578f99da621d48286131f85481cee01", "s2fieldsofstudy": [ "Art", "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
470139
pes2o/s2orc
v3-fos-license
A retrospective cohort study of U.S. service members returning from Afghanistan and Iraq: is physical health worsening over time? Background High rates of mental health disorders have been reported in veterans returning from deployment to Afghanistan (Operation Enduring Freedom: OEF) and Iraq (Operation Iraqi Freedom: OIF); however, less is known about physical health functioning and its temporal course post-deployment. Therefore, our goal is to study physical health functioning in OEF/OIF veterans after deployment. Methods We analyzed self-reported physical health functioning as physical component summary (PCS) scores on the Veterans version of the Short Form 36 health survey in 679 OEF/OIF veterans clinically evaluated at a post-deployment health clinic. Veterans were stratified into four groups based on time post-deployment: (1Yr) 0 – 365 days; (2Yr) 366 – 730 days; (3Yr) 731 – 1095 days; and (4Yr+) > 1095 days. To assess the possibility that our effect was specific to a treatment-seeking sample, we also analyzed PCS scores from a separate military community sample of 768 OEF/OIF veterans evaluated pre-deployment and up to one-year post-deployment. Results In veterans evaluated at our clinic, we observed significantly lower PCS scores as time post-deployment increased (p = 0.018) after adjusting for probable post-traumatic stress disorder (PTSD). We similarly observed in our community sample that PCS scores were lower both immediately after and one year after return from deployment (p < 0.001) relative to pre-deployment PCS. Further, PCS scores obtained 1-year post-deployment were significantly lower than scores obtained immediately post-deployment (p = 0.02). Conclusion In our clinical sample, the longer the duration between return from deployment and their visit to our clinic, the worse the Veteran’s physical health even after adjusting for PTSD. Additionally, a decline is also present in a military community sample of OEF/OIF veterans. These data suggest that, as time since deployment length increases, physical health may deteriorate for some veterans. Background Prior to deployment to Afghanistan and Iraq in support of Operations Enduring and Iraqi Freedom (OEF/OIF), US service members report baseline health functioning superior to that of the general US population [1]. Following deployment, however, veterans report they have poorer health [2,3]. In fact, the number of OEF/OIF veterans rating their overall health as fair or poor doubled six months after returning home as compared to their initial postdeployment assessment [2]. This is concerning because lower self-assessed functional health, particularly in the physical domain, has been associated with both greater health care utilization and mortality in veterans of prior conflicts [4][5][6] and in community samples [7,8]. Although several studies suggest that as time since deployment to OEF/OIF increases so too does the prevalence of poor mental health functioning among US service members, [2,9,10] less attention has been paid to how physical functioning changes over time [11][12][13]. Importantly, we do not know whether there is a similar trend toward reduced physical function over time since return from deployment from OEF/OIF. Such investigation is especially warranted given that veterans of prior conflicts generally have worse health functioning than the general U.S. population, [5,14] with the largest disparity observed in the physical domain [6]. 
Moreover, a worsening of physical functioning over time could lead to long term disability and have numerous public health implications (e.g., greater health care utilization and mortality) that would be especially problematic for this relatively young, working age population. Therefore, the purpose of this study is to examine physical health in OEF/OIF veterans as a function of time since deployment using data from OEF/OIF veterans seen in a post-deployment health clinic. We also examined data from a military community sample as a comparison group to enhance generalizability. This sample participated in a longitudinal, prospective cohort study of OEF/OIF veterans who were followed from predeployment to one year after return from deployment. Because of the strong link between Posttraumatic Stress Disorder (PTSD) and health as well as its high prevalence among OEF/OIF veterans [15,16], we included PTSD in our analysis predicting post-deployment health functioning. Additionally, using data from both clinical (i.e., treatment-seeking) and community samples afforded us the ability to address the concern that our findings were specific only to a treatment-seeking sample. Moreover, if we were to find similar effects across these two different samples, it would suggest a more generalizable impact of military service and thus better inform the public health community. Methods Study design, sample and data extraction (cross-sectional clinical sample) Data from 679 OEF/OIF veterans clinically evaluated (June 2004 -October 2010) at a post-deployment health clinic (New Jersey War Related Illness and Injury Study Center) were examined retrospectively with approval from the Institutional Review Board at the VA New Jersey Healthcare System and in accordance with the Helsinki Declaration. We stratified our sample into groups by time post-deployment which was computed as the difference between the veterans' clinical evaluation date and return from deployment date. This resulted in the following four groups: (1Yr) 0 -365 days; (2Yr) 366 -730 days; (3Yr) 731 -1095 days; and (4Yr+) > 1095 days. Distribution by gender was similar between groups with women accounting for approximately 14% of the total sample (See Table 1 for demographic characteristics). Veterans clinically evaluated at our post-deployment health clinic typically come for a one-time comprehensive medical evaluation, during which they complete an intake packet which is entered into a database for scoring and evaluation. We abstracted from the veterans' clinical evaluation records the following information: demographics, deployment dates, probable diagnosis of PTSD and responses on the Veterans version of the Short Form 36 Health Survey (known as the Veterans RAND or VR-36). Comparison group (longitudinal military community sample) To address the concern that our findings were specific to a clinical sample with data from only one time point, PCS scores from our clinical sample were compared to a sample of 768 healthy Army National Guard and Reserve enlisted soldiers (11% female) who deployed to OEF/OIF and were evaluated pre-deployment and up to 1 year post deployment. PCS scores were available for pre-deployment, immediate post-deployment and approximately one year (+/− 4 months) after return from deployment. All participants provided informed consent and study procedures were approved by three institutional review boards (VA New Jersey Healthcare System, G.V. 
(Sonny) Montgomery VA Medical Center, and the Walter Reed Army Medical Center Department of Clinical Investigation). Measured variables For both samples we had the same measures of physical function and PTSD symptoms. From the VR-36, we computed the PCS score to provide an overall measure of physical health-related functioning which was the primary outcome variable. PCS scores are standardized from published results of the US population to a mean of 50 and standard deviation of 10, with higher scores reflecting better health [17][18][19][20]. To assess which facets of physical function were most important to overall physical function in this sample, we also assessed four subscales that contribute to the PCS: physical functioning (i.e., the presence and extent of physical limitations), role-physical (i.e., the extent to which one can physically engage in their work or other activities), bodily pain (i.e., intensity and impact of pain) and general health (i.e., rating of health) [21][22][23]. The Posttraumatic Stress Disorder Checklist-Civilian version (PCL-C) was used to indicate the presence of posttraumatic stress symptoms. The 17 items of the checklist correspond to the diagnostic symptoms of PTSD and the measure demonstrates good psychometric properties in veterans when a score of 50 or more is used to define likely presence of PTSD [24]. For consistency with prior reports from the OEF/OIF population, [25] we chose to additionally require that the Diagnostic and Statistical Manual of Mental Disorders (DSM) IV criteria of one intrusion (e.g., recurrent and distressing recollections, flashbacks or dreams about the event), three avoidance (e.g., emotional numbing, avoiding reminders, amnesia for the event) and two hyperarousal symptoms (e.g., hypervigilance, hyperstartle, concentration problems) were also present at the "moderately" or higher level. These two requirements (e.g., PCL-C ≥ 50 and DSM IV criteria) were used to define probable PTSD. Statistical analysis For our clinical sample, the PCS and the four physical health subscales of the VR-36 were analyzed separately via ANCOVA (covariates of probable PTSD, age, gender, and the combination of PTSD, age and gender) with planned linear polynomial contrasts to assess the effect of time between return from deployment and visit to our clinic. Rates of probable PTSD, age and gender were compared between groups using a chi-square test or one-way ANOVA with Tukey post-hoc comparisons, respectively. A 5% alpha level (two-tailed) was considered statistically significant. For our longitudinal military community sample, PCS scores were compared using a mixed model analysis. To handle missing data, we applied the method of multiple imputation [26]. Per Graham [27,28], we created 40 imputed datasets using IVEWare [29] and imputed results were combined using SAS MIANALYZE procedure (SAS v9.1). Demographic variables, including education (as a proxy for socioeconomic status), age and gender, as well as data pertaining to PCS variables collected at all waves were used to generate the imputed datasets. However, multiple imputation was not performed for the clinical sample where less than 2.5% of data were missing for any variable of interest. Bonferonni correction was applied for multiple tests at a 5% alpha level (one-tailed). Results Physical functioning in OEF/OIF veterans seeking care post-deployment (cross-sectional clinical sample) Mean scores (± standard deviation) for the PCS and four physical health subscales are presented in Table 1. 
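As a minimal illustration of the clinical-sample ANCOVA described under the statistical analysis above, the sketch below fits the model with statsmodels. The DataFrame column names are hypothetical, Type II sums of squares are assumed, and the original analysis was not necessarily performed in Python.

```python
# Illustrative ANCOVA of PCS scores on time-since-deployment group with covariates.
# Column names (pcs, years_group, ptsd, age, gender) are hypothetical.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

def pcs_ancova(df: pd.DataFrame):
    model = ols("pcs ~ C(years_group) + ptsd + age + C(gender)", data=df).fit()
    return sm.stats.anova_lm(model, typ=2)  # Type II sums of squares

# A planned linear trend over the ordered groups could be approximated by treating the
# group as numeric, e.g. ols("pcs ~ years_group_num + ptsd + age + C(gender)", data=df).
```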
A higher proportion of individuals meeting criteria for probable PTSD occurred in the group seen later after return from deployment (Table 1). Specifically, probable PTSD was more likely in those evaluated at 3 Yr and 4 Yr+ after return from deployment in comparison to those evaluated at 1 Yr (1 Yr vs. 4 Yr+: χ 2 (1) = 12.2, p < 0.001; 1 Yr vs. 3 Yr: χ 2 (1) = 6.9, p < 0.01). Gender was similar across groups (F (3, 678) = 0.70, p = 0.69), but age was significantly different (F (3,678) = 2.92, p = 0.03). However, Tukey post-hoc comparisons revealed no significant between-group differences. PCS scores were significantly lower as the time between return from deployment and the visit to our clinic increases after separately controlling for the effects of PTSD (p = 0.02), age (p = 0.003) or gender (p = 0.001), but was not significant after accounting for all three covariates (p = 0.08). Further, both the physical functioning and role-physical subscale scores declined over time (p = 0.03 and p = 0.02, respectively) even after adjusting for PTSD, age and gender. Scores for bodily pain also were lower over time after adjusting for age (p = 0.01) and gender (p = 0.002), but not probable PTSD (p = 0.12) or the combination (p = 0.31). General health was significantly lower as a function of time since deployment after adjusting for probable PTSD (p = 0.03), age (p = 0.001) or gender (p < 0.001), and showed a trend when the combination of covariates were included in the model (p = 0.07). A summary of the F-test for the effect of time between deployment and clinic visit using ANCOVA are presented in Table 2. Figure 1 (raw values) illustrates average PCS scores for each year which are similar and in some cases lower than published disease-specific norms ( Figure 2) [23]. Analyses were also performed using a more liberal PCL-C cut-off of 25 or more, per recommendations from the Department of Veterans Affairs National Center for PTSD for maximizing detection of possible PTSD cases (http://www.ptsd.va.gov/professional/pages/ assessments/ptsd-checklist.asp). All significant differences reported above with a PCL-C of ≥ 50 were also observed using a cut-off of ≥ 25 (data not presented). Physical functioning in OEF/OIF veterans (longitudinal military community sample) In our comparison longitudinal military community sample of Army National Guard and Reserve enlisted soldiers, predeployment PCS scores (55.5 ± 5.2) were significantly lower immediately post-deployment (53.2 ± 7.3; t(227) = 5.88, p < 0.001), and lower still approximately one year later (52.3 ± 8.6; t(135) = 7.67), p < 0.001). Further, PCS scores one-year following return from deployment were lower than immediately post-deployment (t(107) = 2.27, p = 0.01). Differences in PCS scores of 2-5 points are typically considered clinically significant, [23] and are notable given that these individuals have higher than normal pre-deployment PCS scores (i.e., ½ of a standard deviation above the mean for the population). Discussion Veterans evaluated at our post-deployment health clinic endorsed physical health-related functioning that is substantially worse than that of the general US population ( Figure 1) and indicative of impaired physical functioning irrespective of time post-deployment ( Figure 2). However, for those in our clinical sample the longer after the return from deployment that the veteran attended the clinic, the worse his/her physical function. 
PCS scores remained low (i.e., poor physical function) even after accounting for comorbidity with probable PTSD, suggesting that physical health-related functioning may be an important problem regardless of PTSD status. Similarly, two subscales that comprise the PCS score (i.e., physical functioning and role-physical functioning) also indicated that the longer after return from deployment that Veterans were seen in our clinic, the lower their physical health even with adjustment for PTSD, age and gender. Together these findings suggest that given the average age of our sample (i.e., 32 years), it is critically important to plan carefully for what could be substantial, and long-term future health care needs for this cohort of veterans. A potential limitation of the cross-sectional clinical sample is that PCS scores indicating poor physical function may simply mean that veterans wait to be seen at our post-deployment health clinic until they are sufficiently symptomatic. Since we do not have pre- or early post-deployment data on this clinical sample it is impossible to say if the service members have worsened over time, if their condition has always been poor, or if those who were seen further from their deployment date simply had a higher threshold for seeking care in the face of symptoms. However, the PCS scores in our military community sample of individuals deploying to OEF/OIF obtained before, immediately after and about one-year post-deployment also demonstrated a decrease in PCS scores over time with lower scores post-deployment than pre-deployment despite the fact that the latest post-deployment data was obtained only one year after return (Figure 1). Additionally, we could confirm that on average, this sample was physically healthier than the general US population before deploying. Although a decrease in PCS scores from pre-deployment to immediately post-deployment may be expected due to the physical rigors of the deployment, it is concerning that physical function not only did not improve by 1 year post-deployment, but continued to decline. In fact, PCS scores obtained at 1-year were significantly lower than those obtained immediately post-deployment. Because PCL scores were not available for all time points in this longitudinal sample, we did not control for PTSD in this analysis. Regardless of the factors that may be affecting physical health functioning, such as the presence of probable PTSD, when compared to pre-deployment, PCS scores of veterans one year after deployment were more than three points lower, a difference that in prior literature has been considered clinically significant [23]. Equally concerning is that from immediately after deployment to the one year follow-up, these individuals (mean age = 28 years) demonstrated an approximately 0.9 point decrease in PCS. Thus, even in a military community sample group of veterans over just the first year after return we observed a decrease in physical function from pre- to one-year post-deployment (i.e., over approximately 2 years) that appears to be declining at a faster rate than normal aging. For example, this magnitude of decline is equivalent to approximately half the decrease seen in population norms from the mid-thirties to mid-forties [23].
(Figure caption fragment: ... [23] as well as pre-deployment values from the Millennium Cohort Study [1]. Note that the dashed vertical line represents the U.S. population average.)
That a decline in physical health-related functioning is also present in a military community sample reinforces our observations in those veterans seeking treatment and illustrates a potential robust trend towards declining physical health in OEF/OIF veterans. In our clinical sample, the longer the duration between return from deployment and their visit to our clinic, the worse the Veteran's physical health. Irrespective of the length between return from deployment and visit to our clinic, their rating of physical health is worse than that of the general U.S. population (Figure 1). Moreover, veterans 4 Yr+ after a combat deployment report physical health that is worse than individuals with hypertension or liver disease, and their PCS scores begin to approach those of individuals with more severe chronic diseases ( Figure 2) [23]. This is alarming given that PCS scores from over 77,000 service members in the Millennium Cohort Study exceeded the US general population norm (95% CI: 53.3 -53.4) [1] as did the PCS scores from our longitudinal military community sample at pre-deployment ( Figure 1). Further, previous work in veteran and non-veteran community samples, even for those who are middle aged (i.e., Miilunpalo et al. [8]), has found that decreases in physical function (PCS) are related to increased risk of both hospitalization and mortality. For example, in a sample of mostly older veterans, a 10-point decrease in PCS in veterans is associated with an age-adjusted 1.4 -1.8 fold increased risk of hospitalization and a 2.0 -2.6 fold increased risk of mortality [15,30]. A decrement of 5 -10 points significantly increased the risk of hospitalization (OR 1.13) and mortality (OR 1.14) [30]. Considering that we found decrements in PCS ranging from 0.9 (immediately post to one-year) in our longitudinal military community sample and 4.2 (1 Yr to 4 yr+) in our cross-sectional sample, it suggests that continued declines like this in these relatively young veterans could confer a future increased risk of hospitalization and mortality. These preliminary data highlight the need for further longitudinal work beyond one-year post-deployment to determine the extent and mechanisms underlying declines in physical function in veterans seeking and not seeking care. A strength of this study was our ability to compare data from our cross-sectional clinical sample to data from a longitudinal study of community military personnel. We were however, not able to control for PTSD in the community sample. Additionally, we were limited by only having self-report data and not assessing factors contributing to poorer physical health post-deployment such as physical ailments or injuries of particular relevance to this cohort of veterans, e.g. respiratory-related illnesses and mild traumatic brain injury. Future studies should continue to explore factors that contribute to declining physical function after deployment. Conclusions In summary, our observed PCS scores in OEF/OIF veterans were lower with increasing time between return from deployment and the visit to our clinic. These data are important because poorer physical function has been associated with greater health care utilization and increased mortality in both veterans and civilians [6,7]. Though poor physical health may be attributable to a variety of factors, these data should serve as a critical warning as these veterans are self-reporting poor physical health well beyond what is expected for their ages. 
This work highlights the need for thorough post-deployment screening and early intervention to minimize the decline in physical health and any associated increases in disability and health care utilization that may result if physical dysfunction is not addressed in a timely way for returning OEF/OIF veterans.
2017-06-18T17:19:46.166Z
2012-12-28T00:00:00.000
{ "year": 2012, "sha1": "027f804a625b9e5f99f980a6155ebd84b33bb65f", "oa_license": "CCBY", "oa_url": "https://bmcpublichealth.biomedcentral.com/track/pdf/10.1186/1471-2458-12-1124", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "2e1cedfb2fa5226659333599e7893d5467288efd", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
1236197
pes2o/s2orc
v3-fos-license
Dll4-Notch signaling as a therapeutic target in tumor angiogenesis Tumor angiogenesis is an important target for cancer therapy, with most current therapies designed to block the VEGF signaling pathway. However, clinical resistance to anti-VEGF therapy highlights the need for targeting additional tumor angiogenesis signaling pathways. The endothelial Notch ligand Dll4 (delta-like 4) has recently emerged as a critical regulator of tumor angiogenesis and thus as a promising new therapeutic anti-angiogenesis target. Blockade of Dll4-Notch signaling in tumors results in excessive, non-productive angiogenesis with resultant inhibitory effects on tumor growth, even in some tumors that are resistant to anti-VEGF therapies. As Dll4 inhibitors are entering clinical cancer trials, this review aims to provide current perspectives on the function of the Dll4-Notch signaling axis during tumor angiogenesis and as a target for anti-angiogenic cancer therapy. Introduction The concept that solid tumors require the growth of new blood vessels (angiogenesis) for oxygen and nutrient supply, first proposed by Folkman 40 years ago [1], is now solidly accepted and has spurred substantial efforts to develop anticancer therapeutics that interfere with tumor angiogenesis. Many of the angiogenic signaling pathways necessary for embryonic development are reactivated during tumor angiogenesis, and as such represent targets for anti-angiogenic cancer therapy. VEGF (vascular endothelial growth factor) is a primary endothelial cell growth factor, and blockade of the VEGF signaling pathway is now a clinically approved and widely used therapy for cancer. However, inherent or acquired resistance to anti-VEGF therapy is frequently observed in tumors, thus illustrating the need for targeting additional angiogenesis pathways to fully exploit the promise of anti-angiogenic cancer therapy. Notch signaling has recently emerged as a critical regulator of developmental and tumor angiogenesis. Notch signaling in both endothelial and smooth muscle cells appears to provide critical regulatory information to these cells downstream of the initiating signal induced by VEGF. In particular, the Notch ligand Dll4 (delta-like 4) has been identified as a promising new target in tumor angiogenesis in preclinical studies. Pharmacological Dll4 inhibitors have been developed and are entering clinical trials for solid tumors. This review aims to provide current perspectives on the function of Dll4-Notch signaling axis during tumor angiogenesis and on mechanisms and applications of targeting this pathway for cancer therapy. The Delta/Jagged-Notch signaling pathway The Notch pathway is an evolutionary conserved signaling system that regulates cell fate specification, tissue patterning and morphogenesis by modulating cell differentiation, proliferation, apoptosis and survival [2][3][4]. In mammals, the core components of the pathway include five canonical DSL (Delta, Serrate, Lag2) ligands (called Dll1, 3, 4, and Jagged1 and 2) and four single-pass transmembrane receptors (Notch1-4). Since the DSL ligands are membrane-bound, the Notch pathway relies on direct cell-cell interactions for signal generation. Ligand binding to the extracellular domain of Notch triggers the proteolytic activation of the receptor. Juxtamembrane region cleavage of Notch by ADAM metalloproteinase is followed by γ-secretase complex-mediated cleavage and generation of the Notch intracellular domain (NICD). 
NICD then translocates to the nucleus, where it interacts with the RBPJ/CSL transcription factor and induces the expression of Notch target genes such as the basic helixloop-helix proteins Hes and Hey. Dll4-Notch signaling in vascular development Functional studies in mice, zebrafish and cultured endothelial cells have demonstrated a critical role for Notch signaling during formation of the vascular system (for recent comprehensive reviews see [5][6][7]). Of the four Notch receptors, Notch1 and Notch4 are expressed by endothelial cells [8]. Gene targeting studies in mice have demonstrated that Notch1 is the primary functional Notch receptor during developmental angiogenesis [9]. Except for Dll3, expression of all Notch ligands has been detected in endothelial cells [5]. Dll4 is the first Notch ligand to be expressed during mouse development, and Dll4 transcripts were detected in most capillary beds and arterial vessels [10,11]. Lack of a single Dll4 allele in mice leads to early embryonic lethality characterized by severe defects in arterial differentiation and vascular remodeling [12][13][14]. A clearer picture of Dll4 function during vascular morphogenesis has emerged from subsequent studies demonstrating that one function of Dll4 is to regulate the specification of endothelial cells into tip and stalk cells during angiogenic sprouting [15][16][17][18][19][20]. Dll4 is induced in endothelial tip cells of angiogenic sprouts in response to VEGF [15,17,21] and activates Notch in adjacent stalk cells. Mosaic analysis has demonstrated that Notch is required cell-autonomously for stalk cell specification by actively repressing tip cell phenotypes [15]. Loss of Dll4 expression leads to dramatically increased capillary sprouting and branching as a result of excessive tip cell formation and endothelial proliferation. Thus Dll4-Notch signaling functions as a regulator of angiogenesis downstream of VEGF. The loss of Notch signaling is associated with an increase in VEGF receptor (VEGFR)-2 and VEGFR-3 expression in stalk cells [20,22,23], indicating that Notch can provide negative feedback to reduce the activity of the VEGF/ VEGFR axis. Dll4-Notch signaling in tumor angiogenesis Studies in humans and mice have demonstrated that Dll4 is strongly expressed by the tumor vasculature and generally not by the tumor cells themselves. In various mouse models, strong Dll4 expression was observed in the majority of tumor vessels, contrasting with significantly lower vascular expression in adjacent normal tissues [11,13,24]. Paralleling vascular development, Dll4 expression in tumor vessels appears to be directly regulated by VEGF; thus Dll4 expression levels in tumors have been found to correlate with those of VEGF [24,25]. In humans, Dll4 expression was analyzed in tumors from kidney, bladder, colon, brain and breast [11,[25][26][27][28][29]. Robust Dll4 expression was observed specifically in the tumor vasculature in all of these tumor types, whereas Dll4 expression was low to undetectable in the vasculature of adjacent normal tissue. For example, Dll4 expression in renal clear cell tumors was confined to the vasculature and detected at nine-fold higher levels than in normal kidney [11,29]. Dll4 expression was generally not observed in the tumor parenchyma, although sporadic tumor cell expression was detected in colorectal and brain tumor samples [27,28]. Interestingly, although most colorectal and breast tumors showed positive Dll4 expression in tumor vessels, some tumors were negative. 
Further, at least in the case of breast cancer, the degree of Dll4 expression correlated with outcome: tumors with high Dll4 in the vasculature progressed more rapidly [26]. More work is needed to understand the factors that regulate Dll4 expression in tumors and to extend the connection between Dll4 expression and tumor progression. Consistent with its role in endothelial tip/stalk cell specification during development, inhibition of Dll4-Notch signaling caused increased vascular density and vascular sprouting in tumors [24,28,30,31]. Surprisingly, this vascular overgrowth phenotype resulted in tumor growth inhibition in a variety of human and rodent tumor models [24,30,31]. Perfusion studies demonstrated that the hypersprouting tumor vasculature was non-functional and consequently, anti-Dll4-treated tumors exhibited increased levels of hypoxia. Thus, blockade of Dll4-Notch signaling leads to tumor vessel "abnormalization" (i.e. the formation of a hypersprouting, non-functional vasculature) with resultant growth inhibitory effects on tumors [32]. Gene targeting studies suggest that Notch1 is the primary Notch receptor during developmental angiogenesis [9]. Therefore it appears plausible that Notch1 is also the predominant mediator of Dll4-Notch signaling in the tumor vasculature, although Notch4 has also been described as a receptor specifically expressed by endothelial cells. Recently generated Notch1-specific inhibitory antibodies exhibited tumor vessel effects similar to those seen following blockade to Dll4 [33]. Additional Notch receptor-specific reagents or new conditional genetic mouse models will be instrumental for delineating the relative contribution of Notch1 and Notch4 to tumor angiogenesis. Notch ligands other than Dll4 may also affect tumor angiogenesis. For instance, Jagged1 expression was detected in clinical breast cancer samples [34,35], while pro-angiogenic, paracrine functions for Jagged1 in head and neck squamous cell carcinoma have also been suggested [36]. Similarly, Jagged1 tumor angiogenesis-promoting effects were inferred in a mouse mammary tumor model [37]. Of note in this context, opposing effects of Dll4 and Jagged1 on endothelial sprouting were recently reported in the retina angiogenesis model [38]. Finally, Dll1 was implicated in the regulation of tumor angiogenesis in a fibroblast tumor model [39]. Mechanisms of Dll4-Notch inhibition on tumor vessels Blockade of Dll4-Notch resulted in tumor growth inhibition in a variety of human and rodent tumor models associated with the formation of a non-functional, hypersprouting tumor vasculature [24,[30][31][32]. Perfusion studies demonstrated that the hypersprouting tumor vasculature was non-functional, rendering the tumors more hypoxic (tumor vessel abnormalization) [32]. However the precise mechanisms underlying the rapid reduction of tumor perfusion following Dll4-Notch inhibition are not fully understood. Indeed, it is not known whether endothelial cell hypersprouting can lead to reduced tumor perfusion and is thus a primary upstream event, or whether an initial loss of tumor vessel perfusion leads to hypersprouting ( Figure 1). In one set of studies, reduced pericyte coverage and increased vascular leakage were observed in tumors treated with Dll4-Notch inhibitors [40,41]. Such an increase of vascular leakiness associated with impaired vascular integrity may explain a rapid decrease in tumor perfusion. 
Another possibility is that the abnormalized network as a whole loses the organization or vaso-regulation necessary to support adequate perfusion [30,32]. Alternatively, it is plausible that initiation of hypersprouting structures that are devoid of lumens would result in a loss of perfusion. Endothelial lumen formation is an essential step during vascular morphogenesis, and several mechanisms have been described for lumen formation in different vascular beds, including the coalescence of vacuoles within endothelial cells and the apical surface specialization between adjacent endothelial cells [42,43]. Tumor vessel obstruction by intraluminal endothelial cells was recently reported in response to γsecretase inhibitor treatment in both an orthotopic renal cell carcinoma and a VEGF-driven rabbit hind limb angiogenesis model [41]. In contrast, however, a recent study demonstrated reduced microvascular occlusion and the modulation of vasoconstriction following inhibition of Dll4-Notch signaling during post-angiogenic remodeling in retinal angiogenesis [44]. While it is suggested that endothelial hypersprouting may lead to disorganized cellular organization and ultimately vessel obstruction, this phenomenon is also reminiscent of the loss of cell polarity phenotype recently reported in endothelial β1 integrin mutants [45]. Endothelial cell polarization, linked to the specialized apical-basal distribution of cell adhesion molecules and their interaction with the Par polarity complex [42,43], is a necessary step for the formation of a patent vascular lumen. For example, loss of endothelial β1 integrin results in the loss of Par3 expression and the mislocalization of cell adhesion proteins Claudin-5, PECAM-1, VE-cadherin and CD99 specifically in arteries [45]. The activation of β1 integrin by Dll4-Notch1 in a CSL-independent manner [46] suggests a possible link between Notch signaling and endothelial cell polarity. CCM1 is another emerging regulator of endothelial cell polarization and lumen formation [47]. CCM1 interacts with VE-cadherin and directs adherens junction organization, distribution and association with the Par polarity complex in cultured endothelial cells and in human cerebral cavernous malformation (CCM) lesions [47]. CCM1 interacts with the intracellular protein ICAP1, which in turn binds specifically to β1 integrins [48]. Both CCM1 and ICAP regulate endothelial cell quiescence and angiogenic sprouting by activating Dll4-Notch signaling [49]. Thus, it is possible that some of the cell polarizing functions of CCM1 are mediated by Notch signaling. Based upon these associations, further analysis of the expression and distribution of polarity markers in the context of Dll4-Notch inhibition may provide additional evidence for a role for Notch signaling in regulating lumen formation and/or maintenance. [29,50]. Up-regulation of Dll4 expression by VEGF has been demonstrated in cultured endothelial cells and in endothelial tip cells in the retina angiogenesis model [17,20,29,30,51]. Similarly, strong expression of Dll4 on the growing front of tumor vessels and VEGF regulation of Dll4 in tumors has been documented [24]. Conversely, down-regulation of VEGFR2 and VEGFR3 expression following Notch activation was found in cultured endothelial cells and endothelial stalk cells [20,22,52]. 
Down-regulation of VEGFR2 and/or VEGFR3 has been proposed as a mechanism to reduce endothelial proliferation and migration and to permit local differentiation of cells within a zone of VEGF-driven angiogenesis [7]. These findings suggest that Notch can provide negative feedback to reduce the activity of the VEGF/VEGFR2/VEGFR3 axis during angiogenic sprouting (Figure 2). The emerging picture is that VEGF acts as a central driver of angiogenesis, while Notch signaling helps to coordinate the response appropriately [53]. Although this model also applies to tumor angiogenesis, additional complexities appear to exist in the tumor microenvironment [50]. For instance, the fact that potent combination effects are observed for the simultaneous blockade of Dll4 and VEGF [30] is difficult to reconcile with a simple linear negative feedback model. Pathways other than VEGF, such as FGF, Angiopoietin-1/Tie2, Wnt, inflammatory cytokines, extracellular matrix components or Notch signaling itself, have also been shown to induce the expression of Dll4 in endothelial cells. Thus, it is plausible that some of these additional factors also act upstream of Dll4 in the tumor environment [30,54-58] (Figure 2). EphrinB2 has been identified as a key downstream target gene of Dll4-Notch in cultured endothelial cells and during endothelial specification towards the arterial fate in mouse and zebrafish [6]. More recently, EphrinB2 regulation of VEGFR2 endocytosis and signaling in endothelial tip cells upstream of Dll4-Notch was reported [59,60] (Figure 2). Targeted disruption of EphrinB2 results in early embryonic lethality due to angiogenic remodeling defects [61,62]. It is of great interest to determine how EphrinB2 affects the tumor vasculature in the presence of high levels of VEGF. Suppression of EphrinB2 signaling using soluble EphrinB2-Fc in a subcutaneous squamous cell carcinoma model phenocopied the effects of Dll4 blockade, in particular, reduced tumor growth accompanied by the induction of non-productive angiogenesis [63], suggesting that EphrinB2 acts as a downstream mediator of Dll4/Notch function. Conversely, targeted inactivation of EphrinB2 PDZ-dependent reverse signaling led to decreased vascularization and reduced endothelial sprouting, that is, the normalization of the tumor vasculature, in an orthotopic glioma model [59]. Similarly, blockade of EphrinB2 signaling with soluble EphB4-Albumin resulted in reduced tumor vascularization in the RIP1-Tag2 model of pancreatic islet carcinogenesis [40], consistent with an inhibitory activity on VEGF signaling and a function upstream of Dll4. It is possible that EphrinB2 inhibition exhibits specific effects depending on the tumor model or reagent. Further studies are warranted to elucidate EphrinB2 function during tumor angiogenesis, particularly in the context of anti-VEGF and/or anti-Dll4 combination therapies. Several tip cell-enriched genes have recently been identified in the mouse retina, which may yield clues to which signaling pathways mediate the effects of Dll4-Notch blockade on the tumor vasculature [64,65]. Of particular interest are the secreted molecules ESM-1, angiopoietin-2 and apelin. Binding of these factors to their cognate receptors on stalk cells and the regulation of retinal angiogenesis and endothelial proliferation by the apelin/APJ signaling axis were demonstrated [64]. Additionally, the induction of Wnt signaling activity in endothelial cells via the Notch target gene Nrarp was recently shown [66].
The functional significance of these pathways in the context of Dll4-Notch blockade in tumors remains to be determined.
Therapeutic Inhibition of Dll4-Notch signaling during tumor angiogenesis
Pharmacological targeting of Dll4-Notch signaling in preclinical tumor models has been achieved by several different mechanisms, including anti-Dll4 antibodies, DNA vaccination, soluble Dll4-Fc and Notch-Fc decoys, Notch antibodies, and γ-secretase inhibitors [24,28,30,31,37,41,67,68]. Unlike γ-secretase inhibitors, which broadly block all Notch signaling, specific targeting of Dll4 with anti-Dll4 antibodies did not induce overt gastrointestinal toxicity, and Dll4 has thus emerged as an attractive target for anti-angiogenic cancer therapy [30,69]. Consequently, anti-Dll4 antibodies have recently entered clinical trials for the treatment of advanced solid malignancies. A clinically important question is what types of cancer would benefit most from anti-Dll4 therapy. Thus far, tumor vascular Dll4 expression has been detected in many human tumor samples, including kidney, bladder, colon, brain and breast cancer [11,25-29]. Predictive biomarkers produced by the tumor or expressed within the tumor vessels that confer sensitivity to Dll4-Notch blockade have not yet been identified. A useful starting point may be to assume that high levels of Dll4-Notch are correlated with sensitivity to pathway inhibition [50]. Thus, further analysis of the expression levels of various components of the Delta-Notch signaling pathway in clinical specimens will be useful. Inhibition of VEGF signaling was the first clinically approved therapy targeting angiogenesis in cancer. Anti-VEGF therapy has widespread activity against multiple tumor types, but the effects are variable and incremental, and acquired or innate resistance is frequently encountered [70,71]. Anti-VEGF therapy acts to prune vascular sprouts and reduce new vessel growth [32,72], in contrast to the cellular effects of blocking Dll4-Notch described above. Importantly, preclinical studies have shown that blockade of Dll4 can have potent growth inhibitory effects on tumors that are resistant to anti-VEGF therapies [24,30]. Furthermore, the simultaneous targeting of Dll4 and VEGF has produced additive antitumor effects compared to single agents in a number of tumor models ([30]; Kirshner and Thurston, unpublished). These observations clearly raise the possibility of combining anti-angiogenic therapies against these two pathways. Although blocking one pathway might sensitize tumor vessels to inhibition of the other, much remains to be learned at the mechanistic level about how these two pathways interact during tumor vascularization; certainly not all of the above findings can be explained by a simple linear VEGF-Dll4-Notch4 feedback loop as described in the retinal angiogenesis model [20,22]. The preclinical evaluation of combining Dll4 inhibition and cytotoxic agents represents another area of great clinical importance. Blockade of VEGF exhibits clinical potency predominantly when combined with chemo- or radiation therapy. Preclinical studies have suggested a model in which the normalization of tumor vessels achieved by anti-VEGF therapy allows for more efficient delivery of oxygen and drug into the tumor [73].
Although blockade of Dll4 results in an abnormalization of the tumor vasculature, combining Dll4 inhibition with cytotoxic chemotherapy frequently results in enhanced anti-tumor activity in preclinical tumor models ([68]; Kirshner and Thurston, unpublished). In addition to the anti-angiogenic mechanism of action of disrupting Dll4 in the tumor vasculature, direct effects on the frequency of tumor-initiating cells (cancer stem cells) and tumor growth have been described for tumor cell-specific targeting of Dll4, alone or in combination with chemotherapy [68]. Additional studies are warranted to elucidate the mechanism of this combination therapy approach and to ascertain the clinical validity of this treatment option.
Effects of Dll4-Notch blockade on normal organs
Dll4 is strongly upregulated in tumor endothelium compared to normal organs; however, it is expressed to some degree in smaller arteries and capillaries of normal tissues, as well as in the thymic stroma and the gastrointestinal tract [10,13,25-27]. The differential expression between tumor and normal vessels likely explains the preferential targeting of Dll4-expressing tumor endothelial cells in preclinical tumor models. However, clinical development of Dll4 inhibitors requires careful evaluation of possible adverse effects to establish a formal therapeutic index. In addition to a role in tumor angiogenesis, Dll4-Notch signaling also plays a crucial role in T-cell development and differentiation. Gene targeting studies have demonstrated that Notch1 is the essential Notch receptor for T-cell lineage commitment, while Dll4 is the Notch ligand required to induce Notch signaling in thymic immigrant cells and to actively determine T-cell fate [74-76]. Dll4 is constitutively expressed on thymic epithelial cells (TEC) as well as on endothelial cells [13,24]. Specific deletion of Dll4 in thymic epithelial cells throughout development results in a complete block of T-cell development, associated with thymic acellularity and ectopic appearance of immature B cells [75,76]. Subsequent studies using pharmacologic blockade of Dll4 demonstrated that ongoing Dll4-Notch signaling is required for T-cell differentiation in the adult murine thymus [77]. These latter studies also demonstrated the reversibility of the thymus/T-cell phenotype upon cessation of treatment with anti-Dll4 antibody, which has important implications for potential clinical use of Dll4-Notch inhibitors. A recent report demonstrated that chronic Dll4 blockade induced vascular proliferation in the liver of mice, rats, and monkeys, with associated hepatocellular changes, as well as the development of subcutaneous vascular lesions in rats, referred to as neoplasia [78,79]. In genetic mouse models, loss of Dll4 or Notch1 function also led to the activation of liver endothelial cells and the formation of hepatic vascular lesions [40,80], although lesions in the skin were not reported. Of note, vascular lesions in the liver induced by anti-Dll4 antibody administration appeared to be highly dose-dependent [78,79]. The most pronounced effects were observed at the highest doses, which may be beyond what is needed for blocking Dll4 in tumor vessels in clinical settings. Further, the effects on liver function were shown to be reversible following cessation of treatment [79]. It will be important to assess whether similar pathological changes are observed with doses of anti-Dll4 antibody that are clinically relevant.
In one study, the hepatic vascular alterations observed in Dll4 loss-of-function mice could be prevented by concomitant treatment with the EphrinB2 signaling inhibitor sEphB4-Alb. Interestingly, the simultaneous blockade of these signaling pathways displayed additive inhibitory effects on pancreatic tumor growth and perfusion [40]. The role of EphrinB2 in this study is consistent with its recently described function as an inhibitor of VEGF signaling in endothelial tip cells upstream of Dll4-Notch [59,60]. This raises the intriguing possibility that the induction of vascular changes in some organs can be manipulated by combination anti-angiogenesis therapy, while at the same time additive anti-tumor effects are achieved.
Concluding Remarks and Future Directions
The endothelial cell Notch ligand Dll4 has recently emerged as a critical regulator of, and a promising target in, tumor angiogenesis. Blockade of Dll4-Notch signaling in tumors results in excessive, non-productive angiogenesis and inhibitory effects on tumor growth. Thus, inhibition of Dll4 functions by a very different anti-angiogenic mechanism than therapies targeting the VEGF pathway. Significantly, enhanced anti-tumor effects in preclinical models have been observed with combined inhibition of Dll4 and VEGF, and blockade of Dll4 was found to have potent growth inhibitory effects on some tumors that are resistant to VEGF inhibition. From a mechanistic perspective, several important questions require further elucidation. Specifically, the mechanisms underlying the reduced tumor vascular perfusion induced by Dll4-Notch inhibition remain unclear. For example, what are the primary upstream events that lead to reduced tumor perfusion, and what are the effects on endothelial cell polarization and lumen formation? The characterization of signaling pathways other than VEGF that either act upstream to regulate Dll4 expression or downstream to mediate the effects of Dll4-Notch signaling also warrants further study. From a therapeutic standpoint, it will be instrumental to identify the tumor types that will benefit most from anti-Dll4 therapy and to further validate combination approaches with anti-angiogenic and chemotherapeutic regimens. Additionally, the careful evaluation of adverse effects on normal organ homeostasis at therapeutically relevant doses of Dll4 inhibitors will be critical for advancement of Dll4-blocking agents in the clinic.
Real-time Functional Architecture of Visual Word Recognition Abstract Despite a century of research into visual word recognition, basic questions remain unresolved about the functional architecture of the process that maps visual inputs from orthographic analysis onto lexical form and meaning and about the units of analysis in terms of which these processes are conducted. Here we use magnetoencephalography, supported by a masked priming behavioral study, to address these questions using contrasting sets of simple (walk), complex (swimmer), and pseudo-complex (corner) forms. Early analyses of orthographic structure, detectable in bilateral posterior temporal regions within a 150–230 msec time frame, are shown to segment the visual input into linguistic substrings (words and morphemes) that trigger lexical access in left middle temporal locations from 300 msec. These are primarily feedforward processes and are not initially constrained by lexical-level variables. Lexical constraints become significant from 390 msec, in both simple and complex words, with increased processing of pseudowords and pseudo-complex forms. These results, consistent with morpho-orthographic models based on masked priming data, map out the real-time functional architecture of visual word recognition, establishing basic feedforward processing relationships between orthographic form, morphological structure, and lexical meaning. INTRODUCTION A neurocognitive account of visual word recognition-the core process underpinning human reading-needs to address two basic questions: What is the functional architecture of the recognition process, whereby visual inputs are mapped via orthographic analysis onto representations of lexical form and meaning, and what are the units of analysis-lexical or sublexical-in terms of which these processes are conducted? Despite an enormous research effort over the last 100 years, involving behavioral, neuropsychological, and neuroimaging techniques, there is no agreed answer to these questions (Frost, 2012). Although it is generally accepted that the initial analysis of visual form and orthography engages occipitotemporal cortex, most strongly on the left (e.g., Vinckier et al., 2007;Cornelissen, Tarkiainen, Helenius, & Salmelin, 2003;Cohen et al., 2000;Bentin, Mouchetant-Rostaing, Giard, Echallier, & Pernier, 1999), and that later stages of lexical access and interpretation involve middle temporal and frontotemporal regions, also primarily on the left (e.g., Lau, Phillips, & Poeppel, 2008;Halgren et al., 2002;Bentin et al., 1999), the central properties of this process remain unclear. Here we use magnetoencephalography (MEG), in combination with MRI-based source reconstruction techniques, to delineate the specific spatiotemporal patterns of neural activity elicited by a psycholinguistically rich set of simple and complex written words and pseudowords. We aim to determine (1) under what description the outputs of orthographic analysis are mapped onto lexical-level representations and (2) what is the balance between feedforward and feedback processes in the processing relationship between orthographic and lexical analysis. In doing so, we will integrate behavioral data about the performance characteristics of the system with direct MEG-based evidence about its underlying neural dynamics. 
Background
An important clue to the organization of visual word recognition comes from masked priming research over the past decade, demonstrating equally strong priming between related pairs like hunter/hunt and lexically unrelated pairs like corner/corn (e.g., Marslen-Wilson, Bozic, & Randall, 2008; Longtin & Meunier, 2005; Rastle, Davis, & New, 2004; Longtin, Segui, & Halle, 2003; Rastle, Davis, Marslen-Wilson, & Tyler, 2000). A masked prime like hunter is assumed to prime hunt because it is decomposed into the stem morpheme {hunt} and the grammatical morpheme {-er}, reflecting the meaning of the whole form hunter. The fact that significant priming is also seen for corner, where a decompositional reading as {corn} + {-er} has no relation to the meaning of the word, points to a process of automatic decomposition for any word form that contains potential morphological structure, regardless of the lexical properties of the whole form. The failure of pairs like scandal/scan to show priming highlights the morphemic basis for these effects. Although scan is a potential stem morpheme, dal is not a grammatical morpheme, and this seems to block the decomposition of scandal into {scan} + {-dal}. This pattern of results suggests a recognition process that is dominated in its early stages by an analysis of the orthographic input into sublexical morphemic units, and where a representation of the visual input in these terms is projected onto the lexical level in a strongly bottom-up manner, blind to lexical constraints (Marslen-Wilson et al., 2008; Rastle & Davis, 2008). This morpho-orthographic approach is not, however, fully supported either behaviorally (e.g., Diependaele, Sandra, & Grainger, 2009; Feldman, O'Connor, & del Prado Martín, 2009) or in neuroimaging studies of visual word recognition, where support can be found for contrasting morphosemantic (or interactive) approaches, which claim that early orthographic analysis is modulated by top-down lexical and semantic constraints. Within the neuroimaging domain, we focus on studies using EEG or MEG, because it is only these time-sensitive methods that can resolve the specific temporal ordering of different types of analysis during visual word recognition and thus discriminate directly between different proposals for the real-time functional architecture of the recognition system. Recent research based on these techniques falls broadly into two main classes. Several studies, stimulated by the masked priming results, ask whether there is electrophysiological evidence for early sensitivity to the morphological content of visual word forms, independent of lexical constraints. Working primarily with sets of morphologically complex and pseudo-complex word forms, masked priming has been combined with both EEG (e.g., Morris, Grainger, & Holcomb, 2008; Lavric, Clapp, & Rastle, 2007) and MEG (Lehtonen, Monahan, & Poeppel, 2011), whereas a further set of studies has used unprimed lexical decision tasks (e.g., Lavric, Elchlepp, & Rastle, 2012; Lewis, Solomyak, & Marantz, 2011; Zweig & Pylkkänen, 2009). Taken as a whole, these and similar studies provide evidence for sensitivity to potential morphological structure, where complex and pseudo-complex forms like farmer and corner initially group together relative to orthographic controls like scandal, consistent with a morpho-orthographic view where these processes are not lexically driven.
The spatiotemporal distribution of these effects is quite diverse, both in terms of hemispheric involvement (right and/or left) and posterior/anterior location and in terms of timing, with early effects (150-250 msec) seen in some studies (e.g., Lavric et al., 2012;Zwieg & Pylkkänen, 2009) and later effects (350-500 msec) in others (e.g., Lavric et al., 2007;Dominguez, de Vega, & Barber, 2004). A different set of MEG and EEG studies focus instead on the earliness with which lexical and semantic effects can be detected. These studies use unprimed lexical decision tasks and contrast morphologically simple words (nouns and verbs like help and gold) with matched pseudowords (e.g., Hauk, Coutout, Holden, & Chen, 2012;Hauk, Davis, Ford, Pulvermüller, & Marslen-Wilson, 2006;Assadollahi & Pulvermüller, 2003). Early lexical effects, although small relative to later N400 time frames, have been reported in a range of posterior and middle temporal sites. Hauk et al. (2012), for example, report word-pseudoword differences for the time period 180-220 msec in left anterior middle and inferior temporal lobes, whereas Shtyrov, Goryainova, Tugin, Ossadtchi, & Shestakova, (2013) observe even earlier lexicality effects (at around 100 msec) in an EEG study using MMN techniques. It is hard to determine, however, what the implications of these results are for the functional organization of the word recognition process. This is partly because the stimulus materials used rarely overlap across the two research strands, with the lexically oriented work, for example, generally not including morphologically complex material. This means that there is little direct evidence, under conditions where early lexical effects are detected, whether these serve to modulate candidate morphological decompositions-so that, for example, the segmentation of corner into {corn} + {-er} is inhibited. A further issue is the use of overt response tasks in combination with EEG and MEG, which all the studies cited above have in common. Behavioral research into the dynamics of language function requires the use of these tasks to provide information about underlying cognitive processes. There are several concerns, however, that argue against their use in the neuroimaging context. The most salient of these is the evidence that such tasks can modulate the actual process under investigation, through attentional tuning of neuronal computations relevant to the task requirements, even at early stages of the cortical analysis of sensory inputs (e.g., Zanto, Rubens, Thangavel, & Gazzaley, 2011;van Atteveldt, Formisano, Goebel, & Blomert, 2007). This raises the possibility that early effects seen in the EEG or MEG studies are induced by the ubiquitous experimental task. Such concerns are compounded when a priming task is used, especially in masked priming, where prime and target overlap closely in time. Under such conditions, it is hard to assign neural effects separately to the properties of the prime, the target, or to interactions between them at different levels of visual analysis. 
We address these issues in the current study by (a) ensuring that evidence about the timing of lexical and morphological effects can be linked within the same experiment to evidence about the spatiotemporal organization of word recognition more generally; (b) presenting the materials in a simple viewing paradigm, reducing the likelihood that the experimental situation will induce attentional tuning of specific aspects of the input analysis process; and (c) conducting a separate behavioral masked priming study so that the functional properties of critical stimulus materials can inform the analysis of the MEG responses evoked by a parallel set of stimuli. Experimental Considerations This experiment explores the dynamic roles of morphological, lexical, and semantic variables in the mapping between prelexical orthographic processing and semantically sensitive lexical analysis. To define the spatiotemporal coordinates of these twin poles of the word recognition process-for these stimulus sets and these participants in this specific experimental context-we contrast morphologically simple words (e.g., corn), pseudowords (e.g., frum), and length-matched consonant strings (e.g., wvkp). These simple forms, derived from the complex words and pseudowords (e.g., corner, frumish) used elsewhere in the experiment, establish the anchor points of the recognition process using items that come from orthographic neighborhoods matched across the main experimental conditions. Contrasts between words and pseudowords versus consonant strings (e.g., Cohen et al., 2000) should capture early orthographic effects in occipitotemporal cortex, differentiating word-like forms from random letter strings. The same sets of simple words and pseudowords allow us to locate the other pole of the processing continuum, testing for lexicality effects in a word versus pseudoword contrast. These are likely to be seen later in the access process-possibly in the N400 time frame (e.g., Lau et al., 2008)-with differential responses in left-lateralized middle and anterior temporal regions. To evaluate the properties and timing of the intervening processes that link orthographic analysis to lexical representation, we present complex and pseudo-complex stimuli that vary in morphological and lexical status. The morphological dimension, varying the presence or absence of stems and affixes in potentially complex forms, asks whether the mapping from orthographic analysis onto lexical form and meaning is in terms of morphemic (or pseudo-morphemic) units (cf. Vinckier et al., 2007). Because simple words in English are always also morphemes, this can only be tested by using complex forms that can pull apart the lexical and morphemic properties of a given word form-whether they are made up of potential stems and affixes, as in farmer or brother, or whether they combine an existing affix (e.g., {-ish}) with a pseudo-stem, as in blemish, or an existing stem (e.g., {scan}) with a pseudoaffix, as in scandal. From a lexical point of view, forms like brother, blemish, and scandal are monomorphemic and nondecompositional and should be treated differently from genuinely complex forms like farmer. From a morpho-orthographic perspective, the form brother, analyzable into the potential stem and affix pair {broth} + {-er}, should behave differently from blemish and scandal in the early stages of lexical access but similarly to farmer. 
It is necessary here to treat derivational morphology (the main focus of masked priming research) separately from inflectional morphology, which involves word forms like played that contain a stem and an inflectional affix (the past tense {-ed}). Regular inflectional morphology is systematic and transparent and does not change the meaning of the stem. Inflected forms are argued to be processed and represented decompositionally, relying on a left-lateralized frontotemporal network (Marslen-Wilson & Tyler, 2007). In contrast, derivational morphemes change the meaning and often the grammatical category of the stem, with a much less predictable relationship between stem and whole form and where emerging neuroimaging evidence suggests that these may not be represented decompositionally (Bozic, Tyler, Su, Wingfield, & Marslen-Wilson, 2013). Morphological structure of both types should be parsed before lexical access based on the presence of a stem and affix, but with potentially different lexical outcomes. For derivational morphology, we contrast a set of potentially complex words (see Table 1) that have a stem and affix (e.g., farmer), a stem but no affix (e.g., scandal ), an affix but no stem (e.g., blemish), and neither stem nor affix (e.g., biscuit). These contrasts test whether initial morphological decomposition depends on the presence of both a stem and an affix. Against the same backdrop of morphologically simple forms (biscuit), we can also evaluate the pattern of effects for inflected words like blinked. All forms containing a potential stem and an affix should trigger morphological segmentation, in contrast to stem-only forms like scandal-which do not elicit a masked priming effect-and simple forms like biscuit. The effectiveness of an embedded affix (blemish) in triggering morphological decomposition has not been tested in masked priming, although research with spoken words shows that the presence of a potential inflectional affix does trigger decompositional processes (Tyler, Stamatakis, Post, Randall, & Marslen-Wilson, 2005). An inflectional pseudoword condition, where the affix is attached to a nonexistent stem (e.g., bected ), tests for this possibility in the visual domain, matched by a further pseudoword set with derivational affixes (e.g., frumish). The second critical dimension of lexical status tests competing claims for the degree of autonomy of the early stages of visual analysis and lexical access. This dimension contrasts semantically transparent forms like farmer with opaque pseudo-complex pairs like corner. On a morphoorthographic account, no difference should be found between these forms before lexical access, but they should begin to diverge once access to the meaning of the whole form is in progress. To mirror the derivational contrasts, we also include semantically transparent and opaque inflectional conditions. Because the inflectional equivalents of corner words are rare in English (i.e., ending in -ed without being an adjective or past-tense form), we use inflected pseudowords analogous to those tested by Longtin and Meunier (2005) in French. Nouns that do not function as verbs (e.g., ash) were used as the stem, resulting in an interpretable but nonexistent pseudoword like ashed. Both semantically transparent inflected forms (blinked) and inflected pseudowords (ashed ) should generate early decompositional processing, as well as significant masked priming. 
In summary, this study aims to specify the functional architecture of visual word recognition by tracking the patterns of neural activity that underlie processing of morphologically simple and complex words in English. It asks three main questions: (i) is the early output of orthographic analysis structured into morphemic units, (ii) is there a distinct processing phase at which potential morphological structure is identified independent of lexical constraints, and (iii) what is the timing with which these processes are influenced by lexical-level constraints? To relate the MEG results directly to the behavioral evidence for morphoorthographic processing, we will run a separate masked priming study on parallel sets of complex and pseudocomplex materials. Finally, as noted earlier, participants are tested in a simple word viewing situation, accompanied by an occasional recognition task to reinforce sustained attention to the stimuli. MEG Participants Sixteen participants (nine women) took part in the MEG experiment. All were right-handed native British English speakers between the ages of 18 and 35 (mean age of 25) with normal hearing, normal or corrected-to-normal vision, and no history of neurological disease, who gave written consent to take part and were paid for their time. Stimuli and Design In each of the nine MEG test conditions (see Table 1), 50 words were selected, which contrasted the presence of different morphological features. Four conditions contained a potential derivational affix. Three of these were real word conditions: semantically transparent ( farmer), pseudo-derived (corner), and pseudo-affix (blemish), plus a pseudoword condition ( frumish), where the stem was not a real word. Three conditions contained a potential past-tense inflectional {-ed} affix: semantically transparent (blinked ) and pseudo-inflected (ashed ) words, paired with a pseudoword condition (bected ) where the stem is not a real word. The pseudo-inflected items (ashed) contained an embedded word that is only used as a noun in English, creating a pseudoword that could be segmented into an existing stem and an existing affix but which was not itself an existing word. The stems chosen for this condition appeared in the Celex English database (Baayen, Piepenbrock, & Gulikers, 1995) only as a noun, and no instances (or a single instance only) of use as a verb were found in the British National Corpus (www.natcorp.ox.ac.uk/). Two baseline conditions were included that did not contain a potential affix: pseudostem (scandal ) and no stem/no affix (biscuit). Participants also saw the 450 embedded stems and pseudostems (or first syllables for words without embedded stems) extracted from these complex forms. These were accompanied by 160 strings of random consonants, matched to the length of the target items (both stems and whole forms), and varying in length from three to nine letters. These were included both as a general length-matched baseline and to allow specific contrasts with word and pseudoword stimuli to select out regions sensitive to orthographic structure. For use as test items in the recognition task, a set of 50 filler items (words, pseudowords, and consonant strings) were also presented. An additional 10 filler items were included as dummy items at the beginning of each block. The total number of stimuli in the study was 1120 items. 
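To make the factorial structure of these nine conditions explicit, the sketch below encodes each one as a record of the properties described above (presence of a stem, presence of an affix, lexical status, and morphology type). This is a schematic summary for illustration only: the condition labels and field names are ours, and only the example items and their descriptions come from the text and Table 1.

```python
from dataclasses import dataclass

@dataclass
class Condition:
    example: str     # example item from Table 1
    has_stem: bool   # contains an existing stem (e.g., corn, ash)
    has_affix: bool  # contains an existing affix (e.g., -er, -ish, -ed)
    real_word: bool  # the whole form is an existing English word
    morphology: str  # "derivational", "inflectional", or "none"

CONDITIONS = {
    "transparent_derived":   Condition("farmer",  True,  True,  True,  "derivational"),
    "pseudo_derived":        Condition("corner",  True,  True,  True,  "derivational"),
    "pseudo_affix":          Condition("blemish", False, True,  True,  "derivational"),
    "derived_pseudoword":    Condition("frumish", False, True,  False, "derivational"),
    "transparent_inflected": Condition("blinked", True,  True,  True,  "inflectional"),
    "pseudo_inflected":      Condition("ashed",   True,  True,  False, "inflectional"),
    "inflected_pseudoword":  Condition("bected",  False, True,  False, "inflectional"),
    "pseudo_stem":           Condition("scandal", True,  False, True,  "none"),
    "no_stem_no_affix":      Condition("biscuit", False, False, True,  "none"),
}

# The +S+A conditions predicted to trigger early morpho-orthographic segmentation:
segmentable = [name for name, c in CONDITIONS.items() if c.has_stem and c.has_affix]
print(segmentable)  # ['transparent_derived', 'pseudo_derived',
                    #  'transparent_inflected', 'pseudo_inflected']
```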
For all conditions where an embedded stem was present, pairs of items were presented to native English speakers who rated the semantic relatedness between the two words (e.g., corner/corn) on a scale of 1-7 (unrelated to highly related). Test items selected for the morphologically transparent condition were rated as 6.5 or above. For the pseudo-derived and pseudo-stem conditions, test items were rated as 3.5 or below (weakly related). Items for the nine conditions were selected using the Celex database, and the conditions were matched (Table 2) on whole-form and stem length, percentage of orthographic overlap between stem and whole form (where applicable), and frequency of the whole form and embedded stem (where applicable).
Behavioral Study
To provide a bridge between masked priming research and the current study, we ran an initial set of stimuli (40 words per condition) in a conventional masked priming task, using a prime-target SOA of 40 msec with whole forms as primes (farmer) and stem forms as targets (farm). This study was conducted to determine what pattern of priming effects we would see for the particular combination of derivations, inflections, real words, and pseudowords chosen for this research (as listed in Table 1). Previous studies had not included all of these conditions in a single stimulus set, and no study in English (as far as we are aware) has used pseudowords like "ashed" and "bected" as primes. Behavioral evidence about which combinations of real and pseudo-stems and affixes do or do not show priming is an essential input to the MEG study and its analysis. We tested 29 new participants (none of whom took part in the MEG study), all right-handed native British English speakers between the ages of 18 and 34 (mean age of 24). Each trial began with a set of hashmarks as a premask, which appeared in the center of the screen for 500 msec. This was followed by the prime, presented in the same location in lowercase letters for 40 msec, which was itself immediately followed by the target in uppercase letters. The experiment was run in a sound-proof, dimly lit room, on a PC-compatible computer running DMDX software (Forster & Forster, 2003). Trial order was pseudorandomized online using DMDX, with two items from each condition appearing in each scrambling block (one related prime and one unrelated prime from each condition). Outliers (RTs over 1500 msec) were discarded, accounting for 0.8% of the data.
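Although the analysis itself is not spelled out here, the standard masked priming computation is the difference between mean RTs after unrelated and related primes, per condition, once the 1500-msec outlier cutoff has been applied. A minimal sketch follows, with hypothetical trial records; the real data come from DMDX output files in a different format.

```python
import numpy as np

OUTLIER_MS = 1500.0  # RTs above this cutoff were discarded (0.8% of the data)

# Hypothetical records: (participant, condition, prime_type, rt_ms)
trials = [
    (1, "transparent_derived", "related",   542.0),
    (1, "transparent_derived", "unrelated", 581.0),
    (2, "transparent_derived", "related",   560.0),
    (2, "transparent_derived", "unrelated", 604.0),
]

def priming_effect(trials, condition):
    """Mean unrelated-prime RT minus mean related-prime RT for one condition."""
    rts = {"related": [], "unrelated": []}
    for _, cond, prime_type, rt in trials:
        if cond == condition and rt <= OUTLIER_MS:
            rts[prime_type].append(rt)
    return float(np.mean(rts["unrelated"]) - np.mean(rts["related"]))

print(priming_effect(trials, "transparent_derived"))  # positive value = facilitation
```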
MEG Procedure
For the MEG study, the stimulus materials (see Tables 1 and 2) were based on those used in the masked priming study. To enhance the signal-to-noise ratio in the MEG environment, 10 additional stimuli were added to each condition, increasing the number of items to 50 per condition (listed in Appendix 1). The stimuli were randomly assigned to one of two blocks, each further divided into five subblocks, with the constraint that each subblock contained five items from each condition and 16 consonant strings (for a total of 106 items per subblock). Whole forms and their equivalent stem forms (e.g., farmer and farm) were placed in separate blocks and always appeared in corresponding subblocks across the experiment (i.e., farmer in Subblock 1 of Block 1 and farm in Subblock 1 of Block 2). The order of the two blocks was alternated for each participant, so that presentation of the whole form and equivalent stem (farmer and farm) was alternated, with the stem appearing first for half of the participants. The order of the five subblocks was randomized for each participant in a cyclical order (e.g., subblock order 1-2-3-4-5 for Participant 1, and 2-3-4-5-1 for Participant 2). This preserved the order of the subblocks so that the repeated stem and whole form were always separated by the same number of subblocks, with a mean distance of 559 trials (range of 448-670 trials). Trial order was randomized within each subblock using E-Prime 1.0 software (Psychology Software Tools, Inc.). Each trial began with a fixation cross in the middle of the screen for 500 msec to direct the attention of the participant to the appropriate location on the screen. This was followed immediately by presentation of the stimulus for 100 msec, centered at the same location. The short presentation prevented participants from making saccades. A blank screen was then presented for 1.4-1.6 sec, jittered randomly for each trial, before the next stimulus appeared. At the end of a subblock, a screen appeared asking if the letter string indicated had been seen in that subblock. Participants were instructed to make a response within 3000 msec using the button boxes. Ten items were used in each recognition task over the 10 subblocks, for a total of 100 items (50 old/50 new). Each subblock was separated by a break at the completion of the recognition task, and participants could control the length of each break. Participants sat in a dimly lit magnetically shielded room (IMEDCO AG, Switzerland), viewing items as they were presented on a screen at eye level. All stimuli were displayed in bold Arial font in black letters on a light gray background. Participants received spoken and written instructions about the task and were given 10 practice items. They were instructed to read the items silently but not to articulate or make any movements. Because each subblock contained approximately 100 items, participants were instructed not to attempt to memorize the items but to simply attend to them. Participants did not make button presses during blocks of trials but used two button boxes (one in each hand) to perform the recognition task at the end of each subblock. The experiment was run using E-Prime 1.0 and lasted approximately 45 min.
MEG Acquisition
MEG data were continuously acquired at a sampling rate of 1000 Hz (passband 0.01-300 Hz), with triggers placed at the onset of each stimulus. Neuromagnetic signals were recorded continuously with a 306-channel Vectorview MEG system (Elekta Neuromag, Helsinki, Finland). Before recording, four electromagnetic coils were positioned on the head and digitized using the Polhemus Isotrak digital tracker system (Polhemus, Colchester, VT) with respect to three standard anatomical landmarks (nasion, left and right preauricular points). During the recording, the position of the magnetic coils was tracked using continuous head position identification, providing information on the exact head position within the MEG dewar for later movement correction. Four EOG electrodes were placed laterally to each eye and above and below the left eye to monitor horizontal and vertical eye movements.
MEG Preprocessing
Continuous raw data were preprocessed offline with the MaxFilter (Elekta Neuromag) implementation of the signal-space separation technique with a temporal extension (Taulu & Simola, 2006). Averaging was performed using the MNE Suite (Athinoula A. Martinos Center for Biomedical Imaging). Epochs containing gradiometer, magnetometer, or EOG peak-to-peak amplitudes larger than 3000 fT/cm, 6500 fT, or 200 μV, respectively, were rejected. Trials were averaged by condition, with epochs generated from −100 to 500 msec from onset of the target word. Averaged data were baseline corrected using the −100 to 0 msec interval and low-pass filtered at 45 Hz. For sensor-level analyses, MEG data were transformed to the head position coordinates of the participant with the median head position within the helmet, to minimize transformation distance.
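The original pipeline used MaxFilter and the MNE Suite; the fragment below is only an approximate re-expression of the same epoching, rejection, and averaging steps in MNE-Python, with file names and event codes invented for the example.

```python
import mne

# Assumed inputs: a MaxFiltered (tSSS) raw file and illustrative event codes.
raw = mne.io.read_raw_fif("sub01_tsss_raw.fif", preload=True)
events = mne.find_events(raw, stim_channel="STI101")
event_id = {"word": 1, "pseudoword": 2, "consonant_string": 3}

# Peak-to-peak rejection thresholds as reported: 3000 fT/cm (gradiometers),
# 6500 fT (magnetometers), 200 uV (EOG), expressed here in SI units.
reject = dict(grad=3000e-13, mag=6500e-15, eog=200e-6)

# Epochs from -100 to 500 ms around stimulus onset, baseline -100 to 0 ms.
epochs = mne.Epochs(raw, events, event_id, tmin=-0.1, tmax=0.5,
                    baseline=(-0.1, 0.0), reject=reject, preload=True)

# Average by condition, then low-pass filter the evoked responses at 45 Hz.
evokeds = {name: epochs[name].average().filter(l_freq=None, h_freq=45.0)
           for name in event_id}
```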
Sensor-level Analyses
These analyses were conducted on gradiometers and magnetometers separately, using the SensorSPM analysis method implemented in SPM5 (www.fil.ion.ucl.ac.uk/spm/). Magnetometer data were used as such, whereas for each pair of gradiometer channels a vector sum was calculated that reconstructed the amplitude of the field gradient from its two orthogonal components (computed as the square root of the sum of the squared amplitudes in the two channels). For each participant and condition, a series of F tests was performed on a three-dimensional topography (2-D sensors by time image), which extended through 601 samples (1 msec each), allowing for the application of random field theory as in fMRI analysis (Kiebel & Friston, 2004). The 3-D images were thresholded at a voxel level of p < .005 and corrected for cluster size at p < .05. These clusters could extend in space (distributed across the topography) and in time. This made it possible to compare conditions across every sensor over the entire time window while still correcting on a whole-brain basis for multiple comparisons. This procedure eschews any preselection of time windows of interest and provides a data-driven selection process, which is not restricted to specific peaks found through visual inspection of the data.
Source Estimation
MP-RAGE T1-weighted structural images with a 1 × 1 × 1 mm voxel size were acquired on a 3-T Trio Siemens scanner for each participant and were used for reconstruction of the cortical surface using FreeSurfer (Athinoula A. Martinos Center for Biomedical Imaging). The L2 minimum-norm estimation technique (Hämäläinen & Ilmoniemi, 1994) was applied for source reconstruction as implemented in the MNE Suite. An individual MRI-based one-layer boundary element model (BEM) was created for each participant and was used to compute the forward solutions. An average cortical solution, containing 10,242 dipoles per hemisphere, was created from the 16 participants, and data from individual participants were morphed to this cortical surface in 10-msec time steps. ROIs were defined from FreeSurfer anatomical ROIs, with the exception of the large temporal and fusiform ROIs, which were subdivided into anterior, middle, and posterior regions. ROIs were defined on the average cortical surface, and for each participant, the mean value for all dipoles within each region was extracted for statistical analysis. The source-level analyses, using repeated-measures ANOVAs on the participant means within a given ROI, were restricted to the time windows where significant effects (after correction for multiple comparisons) were found in the sensor analyses. The results are visualized on the inflated cortical surface of the average participant.
Recognition Task Results
For the recognition task, mean accuracy was 65% and did not vary significantly between words, pseudowords, and consonant strings, with accuracy at 66%, 62%, and 66%, respectively. Performance was assessed statistically using signal detection theory to test the discriminability index (d′) against 0 using a paired t test, where d′ = 0 would represent no difference between signal and noise. Discriminability was significantly greater than 0 (d′ = .98, p < .0001), suggesting participants were reliably attending to the items.
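The d′ computation is standard signal detection: z-transform the hit and false-alarm rates and take the difference, then test the per-participant values against zero. The sketch below uses a one-sample t test over participants as the equivalent of the test described above, with hypothetical counts, since the per-participant data are not reported.

```python
import numpy as np
from scipy import stats

def d_prime(hits, misses, false_alarms, correct_rejections):
    """d' = z(hit rate) - z(false-alarm rate), with a log-linear
    correction so rates of exactly 0 or 1 do not give infinite z-scores."""
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    return stats.norm.ppf(hit_rate) - stats.norm.ppf(fa_rate)

# Hypothetical per-participant counts from the 100-item recognition test (50 old / 50 new).
counts = [(36, 14, 19, 31), (33, 17, 18, 32), (35, 15, 21, 29), (31, 19, 16, 34)]
d_values = np.array([d_prime(*c) for c in counts])

# Test the group mean d' against 0 (chance-level discrimination).
t, p = stats.ttest_1samp(d_values, popmean=0.0)
print(f"mean d' = {d_values.mean():.2f}, t({len(d_values) - 1}) = {t:.2f}, p = {p:.4f}")
```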
MEG Results
In the results, we combine sensor-based and source-space analyses in each section. Sensor-level results are presented separately for gradiometers and magnetometers, followed by source-space analyses. The results are organized into two analysis streams, relating basic stages in the visual word recognition process to the processes that map between them. One stream focuses on the morphologically simple word and pseudoword stems, together with matched consonant strings, and the other on the morphologically complex and pseudo-complex forms. These statistically rigorous analyses, on sets of matched simple and complex materials, provide a well-controlled backdrop for evaluating how lexical, morphological, and semantic variables relate to different stages of the visual word recognition process. The Detecting Orthographic Structure and Emergence of Neural Sensitivity to Morphological Structure sections focus on the relationship between orthographic analyses and the early stages of lexical access. The Processing Lexical Identity and Lexical Effects for Morphologically Complex Words sections address the role of lexical constraints in the analysis of orthographic inputs.
Detecting Orthographic Structure
The first set of analyses contrasted words and pseudowords with consonant strings to establish spatiotemporal coordinates for effects associated with processing readable letter strings. To conduct these analyses, we used 100 pseudoword stems (frum, bect) from the (frumish) and (bected) conditions, excluding the pseudoword stems from the (blemish) and (biscuit) conditions, because these could be interpreted as the initial portion of an existing word. One hundred word stems (farm, corn) were also selected, together with 100 consonant strings, all matched in length. At the sensor level (see Figure 1A), the SensorSPM contrast of words and pseudowords against consonant strings showed significant effects emerging between 155 and 230 msec in both gradiometers and magnetometers bilaterally. In the gradiometers, the cluster was significant from 155 to 230 msec within posterior right sensors, with the peak at 195 msec. In the magnetometers, significant bilateral clusters appeared in left hemisphere (LH) sensors from 170 to 230 msec, peaking at 190 msec, and in right hemisphere (RH) sensors from 175 to 220 msec, peaking at 200 msec. All of these clusters reflect a stronger response to consonant strings than to words and pseudowords. Figure 1B plots early orthographic effects for LH and RH gradiometers and magnetometers at the peak of the significant sensor-level cluster in each hemisphere. In both hemispheres, there is an initial common response to all three stimulus types, peaking at 140 msec, followed by a second peak, at around 190 msec, which differentiates consonant strings from words and pseudowords. This indicates the presence of processes that are sensitive to orthographic structure but not to the lexical properties of the strings being analyzed. The location and timing of these orthographically sensitive processes are consistent with earlier research.
Previous fMRI studies have shown increased activation for consonant strings in posterior occipital regions (e.g., Vinckier et al., 2007), indicating that visual word forms are differentially processed on the basis of their orthographic structure. Processing between 150 and 200 msec has been shown to be specific to letter strings but not yet to word-like strings (Cornelissen et al., 2003), although some studies have found effects associated with orthographic typicality as early as 100 msec (Hauk et al., 2006). Here we find that the initial component peaking at 140 msec did not differentiate between stimulus types (although we did not explicitly test for typicality). Emergence of Neural Sensitivity to Morphological Structure Here we examine the timing and distribution of analysis processes sensitive to the presence of cues to morphological structure. If potential stems and grammatical affixes are present in an orthographic input string, when do they start to trigger differential neural responses? We were guided here by the masked priming results. The four conditions containing complex forms with a stem and an affix (+S+A) all showed significant priming (see Table 3). These were two derivational sets ( farmer, corner) and two inflectional sets (blinked, ashed). We contrasted these with two noncomplex conditions (the scandal (+S−A) and biscuit (−S−A) sets), neither of which elicited priming. The presence of the pseudo-derived corner forms and the non-existing ashed forms make this a test for morphological effects that are blind to lexical-level variables. In both cases, the significant masked priming effect is direct behavioral evidence that stimuli of this type elicit morphologically driven decomposition that is not blocked by lexical criteria. There was an increase in processing activity for the combined derived and inflected forms compared with noncomplex forms in anterior left magnetometers, extending from 325 to 350 msec with a peak at 335 msec (see Figure 2A). There was no evidence in these brain-wide (and globally corrected) analyses for earlier or more posterior effects of morphological structure. At the peak magnetometer sensor from the SensorSPM analysis, there was no difference between the four +S+A conditions (F < 1) nor between complex and pseudo-complex forms within complexity type ( farmer vs. corner (t(15) = 1.49, p = .16); blinked vs. ashed (t(15) < 1). Analyzing derived and inflected forms separately, the derived forms show the same magnetometer cluster, from 320 to 365 msec in left anterior sensors with a peak at 335 msec. The effects for the inflected forms fall short of significance (but see source level analyses below). The topography of these sensor-level effects is more anterior, left-lateralized, and later in time than the orthographic effects displayed in Figure 1. Evidence from the magnetometers ( Figure 2B) showed a stronger response to the derivational and inflectional forms at the peak left temporal sensors, sustained over the period 300-450 msec, with peak effects at around 320-330 msec. Turning to the contribution of stems and affixes to these morphologically driven processes, the results confirm that both these elements need to be present, whether to elicit masked priming or to generate different distributions of neural activity. In a further analysis, the (−S, + A) blemish set patterned with the noncomplex scandal and biscuit conditions, consistent with the view that both a potential stem and a potential affix are needed to trigger early segmentation. 
At the source level ( Figure 2C), focusing on the time window during which significant sensor-level effects were found, activation has shifted more anteriorly and now includes inferior frontal areas, most strongly on the left. Specific contrasts between conditions within ROIs showed that the derivational/noncomplex contrast was significant in a 330-340 msec time window in left middle MTG (F(1, 15) = 4.69, p < .05), at the peak of the effect found at the sensor level. For the inflectional/noncomplex contrast, we see a more complex pattern, with effects in left posterior MTG from 300 to 320 msec (F(1, 15) The left frontotemporal patterning of these inflectional effects, which is identical for both the (+S+A) inflectional conditions (blinked, ashed), is consistent with extensive research using spoken words (e.g., Marslen-Wilson & Tyler, 2007) locating morphosyntactic effects in exactly these left peri-sylvian locations. The pseudoword bected, in contrast, which contains a potential inflectional affix but no stem, did not elicit significant effects in these ROIs compared with the noncomplex forms. This suggests that the requirement for both a stem and an affix to be present extends to inflectional as well as derivational morphology in visual word recognition. Processing Lexical Identity A complementary set of analyses focused on the word and pseudoword stems to test for effects linked to successful lexical access. The same two sets of 100 morphologically simple words and pseudowords were used as before. At the sensor level ( Figure 3A), the gradiometers revealed one cluster at 390-450 msec in left temporal sensors and a smaller cluster from 410 to 440 msec in right temporal sensors, peaking at 430 msec in both hemispheres. In the magnetometers, one cluster emerged at 425-500 msec within anterior left sensors with a peak at 470 msec. All clusters showed increased processing of pseudowords over words. Figure 3B plots the gradiometer and magnetometer response amplitudes for words and pseudowords at the peak of the significant LH cluster, with the two conditions starting to separate at 350 msec and peaking at 430 msec. Source-level analyses ( Figure 3C) focused on the 390-500 msec time window where significant lexicality effects were found in the sensor-level analyses. The overall distribution of activation has shifted anteriorly and frontally, especially on the left, where there is strong activity in temporal and inferior frontal regions for both words and pseudowords. Differences between these conditions emerge more posteriorly, with stronger responses to pseudowords in left posterior STG from 390 to 500 msec (F(1, 15) = 5.12, p < .05), left middle MTG from 410 to 440 msec (F(1, 15) = 5.48, p < .05), and left middle ITG from 430 to 440 msec (F(1, 15) = 4.50, p = .05). These lexicality effects overlap spatially with the morphological effects ( Figure 2) but emerge around 100 msec later. These spatiotemporal and functional patterns are consistent with the standard N400-like effects seen in MEG in terms of timing as well as location (Pylkkänen & Marantz, 2003;Halgren et al., 2002) and with evidence from fMRI showing the involvement of left posterior temporal regions in semantic processing (Hickok & Poeppel, 2007). Lexicality effects on the N400 have been interpreted as reflecting access to lexical representations (e.g., Lau et al., 2008;Kutas & Federmeier, 2000), whereas Dominguez et al. 
(2004) report prolonged N400 effects at the level of meaning selection because of incorrect morphological decomposition.
Lexical Effects for Morphologically Complex Words
The four +S+A (farmer, corner, blinked, ashed) conditions were used to examine potential interactions of lexical-level effects with the processing of morphologically complex letter strings. For the derivational pair, both farmer and corner are existing words, but corner has a potential interpretation as a pseudo-stem plus a pseudo-affix. For the inflectional pair, ashed is not an existing word, although it is potentially interpretable as a real stem plus a real affix. In each case, if an early segmentation process identifies these forms as potentially morphologically complex real words, a later process sensitive to lexical-level information will need to rescue the perceptual system from these potential garden paths. A significant cluster emerged from 400 to 500 msec within left anterior temporal magnetometers, showing increased activation of both corner and ashed forms relative to their corresponding semantically transparent conditions, peaking at 450 msec (Figure 4A). The amplitude plots indicate comparable effects for the two contrasts. Consistent with this, the corresponding magnetometer and gradiometer responses (Figure 4B) show similar trajectories over time, although differences between conditions emerge earlier for the inflectional pairs. Notably, the timing and distribution of these effects are very similar to those seen for the lexicality effects reported in the Processing Lexical Identity section for morphologically simple words and pseudowords. At the source level (Figure 4C), we see substantial bilateral activation in temporal and inferior frontal regions for both complex and noncomplex conditions, but significant differences between them only emerge in more posterior and inferior temporal sites. In all cases these reflect increased processing for pseudo-complex over complex forms and are likely to be the result of top-down feedback processes. The combined contrast of complex versus pseudo-complex is significant from 400 to 470 msec in left posterior fusiform (F(1, 15) = 7.92, p < .05) and approaches significance from 400 to 410 msec in left middle ITG (F(1, 15) = 4.20, p = .058) and left middle fusiform (F(1, 15) = 3.85, p = .068). Broken down by morphological type, the 400-470 msec effect in left posterior fusiform gyrus is significant only for the derivational farmer/corner contrast (F(1, 15) = 8.66, p < .01), whereas the brief effect in left middle ITG from 400 to 410 msec is found only for the inflectional blinked/ashed contrast (F(1, 15) = 4.94, p < .05).
DISCUSSION
This research shows that we can unify the functional characteristics of real-time neural analysis with the functional properties of visual word recognition as revealed in the masked priming data. We can use this linkage across methodological domains to determine the functional architecture of the neurobiological system that generates these properties. In doing so, we benefit in particular from the spatiotemporally specific constraints provided by MEG. Unlike other imaging methodologies, MEG data mapped into neuroanatomically constrained source space allow us to specify not only when processes of different types begin and end, but also (within the limits of MEG source reconstruction) where these processes take place.
The proposed architecture based on these results conceptualizes visual word recognition as a two-phase process, where primarily feedforward orthographically driven analyses segment the visual input into potentially meaningful linguistic substrings (words and morphemes) and where these substrings initiate lexical access in middle and frontotemporal locations from about 300 msec after stimulus onset. The initial stages of this access process are dominated by morpho-orthographic factors, with lexical constraints becoming detectable around 100 msec later, as reflected in effects of lexical identity and in increased processing for pseudo-complex strings like corner. We review below the evidence for these claims and their implications.

Morphemically Driven Lexical Access

The joint behavioral and neuroimaging false segmentation effects for ashed- and corner-type stimuli are compelling evidence that the output of orthographic analysis is not in terms of lexical words per se, but in terms of morpheme-like linguistically relevant substrings. The critical MEG contrast is between materials that show decompositional effects in masked priming (the derivationally and inflectionally complex farmer, corner, blinked, and ashed conditions) and materials (scandal, biscuit) that do not (Figure 2). Consistent with the masked priming results, this contrast reveals a time period, between 300 and 370 msec from stimulus onset, where both complex sets diverge from the noncomplex sets, but where there are no significant differences within each set as a function of their lexical properties. At the source level, the derivational set differs from the noncomplex set in left middle MTG at 330-340 msec, but the farmer/corner subsets do not differ. Similarly, the inflectional set differs from the noncomplex set between 300 and 370 msec in left posterior MTG and LIFG, but the blinked/ashed subsets do not differ. This pattern of results closely parallels the morpho-orthographic process hypothesized on behavioral grounds, where the input is analyzed in terms of its morphological properties, but is blind to the lexical properties of the words involved. This is particularly clear for the inflected forms, where both blinked and ashed deviate from the noncomplex forms at around 300 msec and where the presence of the inflectional morpheme activates classic LIFG regions (BA 44 and BA 45) irrespective of the lexical status of the whole form. The finding that these early processes do not discriminate between genuinely complex and pseudo-complex strings demonstrates that the processes generating candidates for lexical access and recognition are blind to the lexical properties of the strings they are generating. The results for ashed and corner also demonstrate that the manner in which these output processes interface with lexical representations is morphologically compositional. For the inflectional morphology, strings like ashed cannot be accessed as stored forms, because they are not existing words. Instead, they must be compositionally constructed, combining the potential stem (ash) and affix (-ed). More striking still, a monomorphemic form like corner, no different at the lexical level from a simple form like biscuit, seems to be temporarily reconstructed as the nonexistent complex form {corn} + {-er}. This patterns, in the relevant neural time window as well as in masked priming, with genuinely complex forms like farmer and blinked and not with noncomplex forms like scandal and biscuit, as sketched below.
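The segmentation behavior implied by these results can be made concrete with a small sketch. The fragment below is purely illustrative and is not the authors' model: the stem and affix inventories are toy assumptions standing in for a full lexicon and affix list, and the function simply encodes the idea that a letter string is treated as potentially complex only when it exhaustively parses into a known stem plus a known affix, irrespective of whether the whole form is an existing word.

```python
# Toy inventories; a real system would use a full lexicon and affix list.
STEMS = {"farm", "corn", "blink", "ash", "scan"}
AFFIXES = {"er", "ed", "ish"}

def morpho_orthographic_parse(letter_string):
    """Return (stem, affix) if the string exhaustively parses as a known stem
    plus a known affix, ignoring the lexical status of the whole form."""
    for split in range(1, len(letter_string)):
        stem, affix = letter_string[:split], letter_string[split:]
        if stem in STEMS and affix in AFFIXES:
            return stem, affix
    return None

for item in ["farmer", "corner", "blinked", "ashed",
             "bected", "frumish", "blemish", "scandal", "biscuit"]:
    print(f"{item:8s} -> {morpho_orthographic_parse(item)}")
```

On this rule farmer, corner, blinked, and ashed all segment, whereas bected, frumish, and blemish fail for lack of a stem and scandal and biscuit fail for lack of an affix, which is the pattern that the effects in the 300-370 msec window track.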
These effects require the output of orthographical analysis to be morphemically decomposed. More evidence for morphemic constraints on early orthographically driven string segmentation and identification comes from the sensitivity of this process to the morphemic status of both elements of a potential complex form. The masked priming data-here and in earlier studies-together with the MEG analyses involving the partially complex scandal, blemish, frumish, and bected sets, suggest that pseudo-complex forms are not treated as complex forms in the 300-370 msec time window unless they contain both a potential stem and a potential affix-as in the corner and ashed conditions. This points to a segmentation process that is not only sensitive to the presence of linguistically relevant subunits but also to the contexts in which they co-occur. The view that orthographic analysis results in a morphemic output is consistent with the proposals of Dehaene and colleagues (Vinckier et al., 2007;Dehaene, Cohen, Sigman, & Vinckier, 2005), where the endpoint of orthographic analysis in posterior temporal cortex is seen as the identification of "small words and recurring substrings (e.g., morphemes)." It is also consistent with earlier MEG evidence that orthographic processing in inferior and posterior temporal regions, over early 150-250 time windows, is sensitive to the presence of potential stems or affixes (e.g., Lehtonen et al., 2011). More generally, these can be seen as aspects of a ventral stream object recognition process tuned to orthographic analysis over decades of intensive experience with written text. Visual Word Recognition as a Two-phase Process The evidence for the salience of morphemic factors in the visual word recognition process, together with the demonstration of a short period during which structural morphological factors seem to dominate, raises the question of whether this indicates a separate, specifically morphological processing stage, intervening between orthographic analysis and access to lexical representations. The existence of such a stage is both a frequent postulate in cognitive theories of visual word recognition and a major source of disagreement between competing theories. The evidence here is that there is no such separable processing stage and that what we see instead are two intersecting phases of neurocognitive activity. The first, as described above, is located in posterior and inferior temporal and occipitotemporal regions and is concerned with the analysis of the visual input into higher-order linguistically relevant orthographic units. These processes are in themselves neither lexical nor semantic in nature and form a spatiotemporally distinct phase in the recognition process. This can be viewed as a modality-specific input system that projects onto a more distributed morpholexical system, more anterior and frontotemporal, that is sensitive to the morphological structure of complex forms and to lexically represented variables more generally-and which is likely to be largely in common with the target systems accessed from auditory inputs. The separation into two phases is reflected in the spatiotemporal distribution of processes sensitive to orthographic variables but not to morphological structure or lexical identity. For orthographic structure, there is increased activation for consonant strings (relative to words and pseudowords) in the time period 150-230 msec, seen bilaterally in posterior brain regions (Figure 1). 
Analyses sensitive to morphological structure emerge at around 300 msec, peaking 100-150 msec later than the orthographic effects, whereas the spatial center of gravity shifts anteriorly to more dorsal left frontotemporal sites (Figure 2). None of the inferior temporal and fusiform ROIs that were significant in the orthographic analyses are active in the contrasts sensitive to morphological structure. Although there continues to be RH activation for both complex and pseudo-complex conditions, the only effects that differentiate between conditions are seen in LH middle temporal and inferior frontal sites. A similar though spatiotemporally more clear cut separation from early orthographic processing is seen for the lexically sensitive effects, with increased processing for pseudoword stems from 390 to 500 msec, chiefly in left middle temporal regions ( Figure 3). In complementary analyses contrasting complex pseudowords ( frumish, bected ) with complex real words ( farmer, blinked ), we found comparable effects, with increased processing for pseudowords emerging in left temporal sensors from 425 to 465 msec. Taken together, these data show that there is a clear separation in neural space and time between orthographically centered analyses and those sensitive to morphological structure and to lexical variables. There is little evidence for a similar separation between the latter types of process, on the basis of the contrasts plotted across Figures 2-4. Although different phases of analysis peak at different points in time, with the earliest morphological structure effects emerging around 100 msec earlier than the effects of lexicality, there is no evidence that these activities are spatially distinct-especially where core middle temporal locations are concerned. Given these timing and location constraints-which are consistent, as noted earlier, with earlier EEG and MEG studies-the most plausible account is that, although the engagement and interpretation of lexical constraints have a sequential time course, these processes involve the same set of frontotemporal brain regions as those implicated in the analysis of morphological structure. On this account, morphological effects in visual word recognition will emerge as an interaction between the outputs of orthographic analysis and the properties of morpholexical representation and analysis. These in turn will depend on the properties of simple and complex words in the language and how they are lexically represented. Research in the auditory domain suggests that inflectionally complex words in English are decompositionally represented and analyzed in the neural language system (e.g., Marslen-Wilson & Tyler, 2007), whereas derivationally complex words are accessed as whole forms (Bozic et al., 2013), although with some preservation of internal morphological structure (cf. Marslen-Wilson, 2007). There are signs of this differentiation here, with the analysis of inflected and pseudo-inflected forms (blinked, ashed) closely paralleling the decompositional neural patterns seen in the auditory domain (Bozic, Tyler, Ives, Randall, & Marslen-Wilson, 2010;Marslen-Wilson & Tyler, 2007). Derivationally complex forms, in contrast, primarily activate middle temporal sites, although further research is needed here. 
Feed-forward Processing and Recurrence The third defining feature of the proposed functional architecture concerns the processing relationship between the orthographic analysis of the input and the broader lexical and contextual context in which this analysis occurs: Does this context modulate early orthographic analyses, as interactionist accounts would require, or do these analyses operate in a primarily feedforward (or bottom-up) manner? There are two aspects to thiswhether lexical constraints are directly coded into the orthographic analysis process and whether this process is dynamically modulated by top-down predictive or recurrent processes. On an encoding account, the orthographic mapping process is tuned to the specific lexical context of the language, so that it would not generate (or would disprefer) outputs that were not lexically valid. The results here argue against this, with the first-pass analysis of letter strings into potential stems and affixes being conducted without reference to the lexical identities of these strings (cf. Marslen-Wilson et al., 2008). Otherwise the misanalysis of corner would be blocked, along with the rejection of nonwords like ashed. The results are also inconsistent with a weaker encoding account, where lexical variables modulate but do not determine early segmentational hypotheses. Significant lexical effects on the analysis of potential complex forms-indexed by increased processing for corner over farmer and for ashed over blinked-are not seen until the 400-470 msec time period, substantially later than the initial emergence of morphological structure effects. On an encoding account, these effects should be seen at earlier time points as well. This leaves open the possibility that early analyses can be modulated by externally generated constraints-for example, by predictive constraints generated in a sentence context or by recurrent constraints generated top-down as a letter string is being processed. In the current study, we only see candidate effects of this type in late time windows (400 msec from word onset), with increased processing for pseudo-complex forms like corner at posterior temporal sites (Figure 4). If this is a top-down feedback effect, then it occurs too late to be evidence for early morphosemantic interactions. Note that in the context of an fMRI study, where the BOLD response sums over a multisecond time window of neural activity, this critical temporal separation is lost, making it possible to misinterpret such an effect as evidence for early interactions between form and meaning. Similar caveats apply to conventional RT tasks, where the temporal ordering of the multiple processes contributing to overall RT is also opaque. However, although the current experiment allows us to evaluate the role of system-internal constraint, it does not allow us to evaluate the effects of contextual variables more generally. The letter strings were presented in isolation, and we took care to minimize task-based effects. The natural habitat of the visual word recognition system is of course reading in context, with potential constraints generated at syntactic, semantic, and pragmatic levels as words are being read. Without properly time-resolved processing measures, however, it is not possible to determine how these contexts operate-whether they operate primarily in the second phase domain of morpholexical interpretation, or whether they directly modulate the operations of the orthographic input system. 
Finally, we note that some findings from EEG and MEG suggest that the initial sweep through the visual word recognition system to the level of lexical access occurs within 200 msec of word onset (e.g., Shtyrov et al., 2013; Lavric et al., 2012; Hauk et al., 2006; Pammer et al., 2004). These timings contrast with this study, where morphological effects emerge at 300 msec and lexicality effects appear later still. This divergence could be attributed to a number of sources. One of these, as we proposed earlier, may be differences in task demands. Task-driven top-down effects can modulate sequential feed-forward processes in the visual system at early stages of sensory analysis (e.g., Twomey, Kawabata Duncan, and colleagues), and similar effects may also modulate early processes relevant to the performance of tasks such as lexical decision. Direct evidence for such effects in visual word recognition comes from a further study by Whiting (2011), parallel to the research reported here, which ran the same set of stimuli but under different task conditions. This study replaced the end-of-block recognition test with an occasional lexical decision task, occurring on 10% of trials. The results show both commonalities and divergences relative to the current study. The timing and pattern of early orthographic effects (comparing words and pseudowords with consonant strings) were essentially unchanged. Morphological decompositional effects are detected earlier, but with a similar spatiotemporal distribution; for example, activity is seen in BA 44 for inflectional morphemes from 260 msec, rather than the 320 msec onset seen in the current study. Lexical effects (comparing words and pseudowords) emerged around 100 msec earlier, starting at 310 msec and showing a similar spatial distribution to this study. These selective effects on the timing of different processes leave their relative ordering intact (and consistent with a morpho-orthographic account) but suggest that task demands can indeed shift the timing with which neural processes can be detected. An additional source of divergences between studies may be differences in statistical methods. In the majority of published EEG and MEG studies of visual word recognition, the dominant analysis strategy is to identify potential temporal or spatiotemporal ROIs on the basis of visual inspection of the global energy profile and to focus subsequent analyses around the visible peaks in this profile. In the current study, we avoided any preselection of areas of interest in favor of a brain-wide analysis process (SensorSPM), conducted in sensor space, where the significance of any contrast is corrected on a brain-wide basis for multiple comparisons. We then used the outcome of these analyses to select the time windows of interest within which we conducted the source space analyses. This approach, which is a more conservative (because globally corrected) procedure for selecting time windows of interest, will be less likely to pick up effects that are weak and transient, and this may disfavor some very early effects. This is a possibility, however, that will need careful evaluation in further research. We suggest, in conclusion, that our findings are a robust reflection of the basic underlying structure of the processing system supporting visual word recognition, as revealed in the context of reading words in the absence of a lexical decision task.
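The contrast between ROI preselection and a brain-wide corrected analysis can be illustrated with a simple maximum-statistic permutation scheme. The sketch below is not the SensorSPM pipeline used in this study; it is a generic analog, assuming paired sensor-by-time data for two conditions, that shows how a single family-wise correction can be applied across all sensors and time points rather than within preselected windows.

```python
import numpy as np

rng = np.random.default_rng(0)

def maxstat_permutation(cond_a, cond_b, n_perm=1000):
    """Family-wise corrected paired comparison of two conditions.

    cond_a, cond_b: arrays of shape (n_subjects, n_sensors, n_times).
    Returns the observed t-map and a p-value map corrected across all
    sensors and time points via the maximum-statistic method.
    """
    diff = cond_a - cond_b                      # paired differences per subject
    n_subjects = diff.shape[0]

    def t_map(d):
        return d.mean(axis=0) / (d.std(axis=0, ddof=1) / np.sqrt(n_subjects))

    t_obs = t_map(diff)
    max_null = np.empty(n_perm)
    for i in range(n_perm):
        # Under the null, each subject's condition labels can be swapped,
        # which flips the sign of that subject's difference wave.
        signs = rng.choice([-1.0, 1.0], size=(n_subjects, 1, 1))
        max_null[i] = np.abs(t_map(diff * signs)).max()

    # Corrected p-value: fraction of permutations whose maximum |t| anywhere
    # in the sensor-time volume exceeds the observed |t| at each point.
    p_corrected = (max_null[:, None, None] >= np.abs(t_obs)[None]).mean(axis=0)
    return t_obs, p_corrected
```

Because the null distribution is built from the maximum statistic over the whole sensor-time volume, any point surviving the corrected threshold is significant on a brain-wide basis, which is the property exploited here for selecting time windows of interest for the subsequent source-space analyses.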
Top-down effects may well be at work in more predictive natural perceptual contexts, such as the reading of continuous text, to maximize the speed and efficiency of the reading process. Nonetheless, such effects would serve to modulate the performance of the basic feedforward process we have described, not to replace it.
Aggressive Resection of Cervical Desmoid Tumor Invading the Paraspinal Muscles without Recurrence at Eight Year Followup : A Case Report and Review of the Literature Objective: To provide an overview of head, neck, and spine Desmoid Tumor (DT), to describe the case of a patient who underwent aggressive surgical resection of a large invasive cervical DT, and to analyze the DT literature. We discuss the importance of aggressive surgical margins in decreasing recurrence risk in head, neck, and spine DTs. Methods: A twenty-three-year-old female patient with history of a left cervical neurofibroma resection presented two years later with neck mass regrowth. The mass was aggressively resected, and occiput to T1 fusion was performed. Additionally, the head, neck, and spine DT literature was reviewed regarding surgical margin status and recurrence rate. A “N-1” Chi-squared test was used to compare proportions, between percentage of patients that experienced recurrence after gross total resection, and percentage of patients that experienced recurrence after subtotal resection, to determine statistical significance (p < 0.05). Results: Histologically, the lesion was shown to be a DT, with posterior paracervical spinal musculature invasion. The surgery was well tolerated with no complications. At eight years postoperative followup, the patient is doing well with no tumor recurrence. Nine studies were identified describing surgical margin status and recurrence rate of head, neck, and spine DT. When recurrence rates were compared in patients with grossly negative surgical margins and grossly positive surgical margins for head, neck, and spine DTs, two studies showed statistical significance (p < 0.05) for lower recurrence rate after gross total resection, and seven showed a trend toward lower recurrence rates in patients with gross total resection. Conclusion: DTs are difficult clinical entities to cure due to high recurrence rate after resection. Consistent with previous studies indicating that aggressive surgical margins are associated with a decreased risk of recurrence, we present a case report of a female patient who underwent aggressive resection of a large DT and remains tumor-free after eight years. Thus, we recommend aggressive surgical resection of head, neck, and spine DTs to minimize recurrence risk. Introduction Desmoid Tumors (DTs), also known as aggressive fibromatosis, are benign, fast growing, and locally invasive fibroblastic neoplasms which arise from mesenchymal stem cells [1].While DTs do not spread metastatically, they can invade local tissue and frequently recur after surgical resection [2][3][4]; for this reason they are often categorized with low-grade soft tissue sarcomas [5].Approximately 2-4 cases of DTs per million people occur each year, and account for 0.03% of all neoplasms [6] and 3% of soft tissue tumors [7].DTs have been reported to occur in nearly every bodily location, with an estimated 7% to 15% of DTs occurring in the head and neck [8,9].[14].Additionally, a history of prior surgery or trauma at the site of the DT is often found [15]. 
Presently, the standard of care for DTs is surgical resection with wide margins [4].Although the post-surgical recurrence rate is high, with up to 72% of patients experiencing recurrence, surgical resection with wide or aggressive margins has been shown to reduce the recurrent rate [16].Several studies suggest that cases in which wide or aggressive margins cannot be achieved, adjuvant radiotherapy may decrease recurrence rates [9,17,18].Medical therapy, including NSAIDs, tamoxifen, and chemotherapy, remains controversial [11]. Here, we present the eight year followup of the case of a patient who underwent aggressive surgical resection of cervical DT with occiput to thoracic fusion.We focus on the importance of instrumentation due to aggressive resection of posterior paraspinal musculature Although most cases of DTs are sporadic [3], there is a distinct association with Familial Adenomatous Polyposis (FAP, Gardner's syndrome) and other Adenomatous Polyposis Coli (APC) gene mutations [10].In fact, patients with FAP have DTs at approximately 850 times the rate of the general population, most commonly located in the small intestine or mesentery [11].Development of DTs is thought to be associated with female sex hormones, as the tumors occur more commonly in females, their growth rate is directly related to endogenous estrogen levels in female patients, and estradiol receptors have been found to be present in the tumor cytosol [12].Indeed, tumor regression has been reported with the use of the antiestrogen compound tamoxifen [5,13,14].Interestingly, in a study reviewing the use of endocrine therapy for desmoid tumors, Wilcken and Tattersall (1991) found that of 23 patients with DTs treated with tamoxifen responded to the therapy Abbreviations: S: Superior; A: Anterior. After the occiput was identified, a periosteal was used to scrape the tumor from the bone.The tumor appeared to emerge from a portion of the left C2 ganglion, and it was separated from the lamina, especially at the occiput and C1 junction and at the junction of the mastoid process to prevent inadvertent tearing of the vertebral arteries. All muscle which appeared abnormal was removed, and the entire mass was eventually resected.Apparent residual tumor was not seen in any other locations. Because of the extensive tumor invasion of the posterior paraspinal muscles requiring resection of all posterior cervical muscles from the occiput to C6-C7, an occiput to T1 fusion was required.An occipital plate with a 10-millimeter screws bolting it to the occiput were placed, followed by screws in the lateral masses of C2, C3, C4, C5, and C6, and then pedicle screws at T1 were placed, all with neuronavigational aide.After all the screws were placed, a rod was then bent into position and cut.All facet complexes were decorticated and then the wound was pulse lavaged. Given the fact that all the muscle posteriorly was removed from the base of the occiput down to about C6-C7, a portion of the trapezius muscle was elevated.This flap was swung superiorly up bilaterally to layer over the hardware.A piece of allograft from the iliac crest was cut and then fashioned to fit from the occiput at the C2, putting a Songer cable down to lock it into position and then packing all the facet complexes with allograft and bone morphogenic protein. 
Literature review A comprehensive retrospective review of the literature was performed using the key words "desmoid tumor", "desmoid-type fibromatosis", and "aggressive fibromatosis", alone or together to search PubMed, PubMed, Ovid Medline, Ovid EMBASE, Scopus, and Web of Science database and all neurosurgical journals.Positive inclusion criteria included desmoid tumors of the head, neck, and spine, treated with surgery, as well as reporting of surgical margin status and completeness of excision.[19].Now, at eight years followup, we demonstrate that the patient is recurrence-free. We also present a discussion of the literature regarding the association of head, neck, and spine DT resection margin status and recurrence rate. Case report The patient was a 25-year-old female with partial resection and biopsy of a left cervical neurofibroma extending from the sternocleidomastoid posteriorly and posterior triangle of the neck, two years prior to presentation.Over time, the mass appeared to recur and enlarge, causing severe deformity of the posterior aspect of the patient's skull and neck and compromising the soft tissue structures of the neck (Figure 1).After obtaining consent for surgery, the patient was fiber optically intubated due to compression of the airway.A Mayfield head holder was affixed to the skull.She was turned prone onto a Jackson table.All contact points were doubly padded.Her arms were kept down to her side.She was prepped from the portion of the skull all the way down to the buttocks in case of the need for a very large latissimus dorsi flap to be elevated.She was prepped and draped in the usual sterile fashion.The intraoperative computerized tomography unit was used for intraoperative neuronavigation. AY-type of incision utilizing the previous incision from the base of the occiput down to the midline to approximately T1 was made.Then, a sharp dissection over the mass of the tumor away from the surrounding soft tissue was performed. The base of the mass was found to be intimately attached to the occiput and the C1-C2 vertebra, so a Cavitron ultrasonic aspirator was used to core it out internally.However, the mass was too hard so this could not be accomplished.Loop cautery was then used to start cutting off pieces of the tumor.Again, it was so hard it broke the loop.A Bovie at its highest setting was then utilized to cut pieces of the tumor away slice by slice, eventually debulking the tumor significantly.ardson [20,21].Statistical significance was defined as a p-value less than 0.05.Sample size for each study is detailed in Table 1. Case report Histologically, the lesion was shown to be a desmoid tumor, with invasion of the posterior paracervical spinal musculature.The patient tolerated the surgery well with no complications.No further adjuvant radiation treatment was offered.At eight years postoperative followup, the patient is doing well with no tumor recurrence (Figure 2 and Figure 3), and delivered a healthy baby. 
Literature review

Nine studies were identified describing surgical margin status and recurrence rate of head, neck, and spine DT. When recurrence rates were compared in patients with grossly negative surgical margins and grossly positive surgical margins for head, neck, and spine DTs, two studies showed statistical significance (p < 0.05) for lower recurrence rate after gross total resection, and seven showed a trend toward lower recurrence rates in patients with gross total resection. A summary of the recurrence rates following grossly complete resections compared to grossly incomplete excisions in case series with DTs of the head and neck is presented [3,22-30] (Table 1).

Statistical analysis

A "N-1" Chi-squared test was performed to determine whether there was a statistically significant difference in DT recurrence rates between patients with grossly complete resections and grossly incomplete resections, as is recommended by Campbell and Richardson [20,21].
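For concreteness, this comparison can be computed as follows. The sketch below is a minimal implementation of the "N-1" chi-squared test for two proportions; the recurrence counts in the example call are hypothetical placeholders rather than values taken from Table 1, and SciPy is assumed to be available for the chi-squared tail probability.

```python
from scipy.stats import chi2

def n_minus_1_chi_squared(recur_a, total_a, recur_b, total_b):
    """'N-1' chi-squared test comparing two recurrence proportions.

    Equivalent to the Pearson chi-squared for the 2x2 table scaled by (N-1)/N,
    as recommended for comparative studies with small samples.
    Returns the test statistic (1 df) and the two-sided p-value.
    """
    a, b = recur_a, total_a - recur_a        # group A: recurrence / no recurrence
    c, d = recur_b, total_b - recur_b        # group B: recurrence / no recurrence
    n = a + b + c + d
    stat = (a * d - b * c) ** 2 * (n - 1) / ((a + b) * (c + d) * (a + c) * (b + d))
    return stat, chi2.sf(stat, df=1)

# Hypothetical example only: 3/20 recurrences after grossly complete resection
# versus 7/12 after grossly incomplete resection.
stat, p = n_minus_1_chi_squared(3, 20, 7, 12)
print(f"chi-squared(N-1) = {stat:.2f}, p = {p:.4f}")
```

Applied to each study's recurrence counts, the same calculation yields per-study p-values of the kind summarized in Table 1.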
Discussion

Desmoid Tumors (DTs), also known as aggressive fibromatosis, are fibroblastic neoplasms which can arise in almost all parts of the body [1]. Although they are benign lesions and thus do not spread metastatically, DTs are fast growing, locally invasive, and recur after surgical resection at high rates [2-4]. While DTs at all sites commonly recur after surgical excision, recurrence rates with head and neck DTs are much higher. One study showed that recurrences occurred in 70% of patients with head and neck DTs, but only 50% of DTs at other locations [8]. This could be partially explained by proximity to, or involvement of, eloquent anatomical locations and hesitation in aggressive attainment of negative margins.

We present the case of a patient who underwent aggressive surgical resection of cervical DT with occiput to thoracic fusion. Instrumentation was necessitated by aggressive resection of posterior paraspinal musculature. Presently, at eight years followup, we demonstrate that the patient remains free of recurrence.

A paucity of data exists pertaining to treatment strategies focused on reducing DT recurrence rates beyond case reports and limited case series. However, the little evidence available does support aggressive resection of tumor with negative margins. One retrospective analysis of the literature collected for desmoid tumors in all locations found that local control rates for DTs with positive surgical margins were 41%, compared with local control rates of 72% for DTs with negative surgical margins (mean followup time 10.4 years) [18]. Similarly, another retrospective analysis of the literature collected for extra-abdominal DTs found that 72% of patients with a marginal or intralesional excision according to the Enneking classification system had a recurrence, compared with only 27% of patients who had wide or aggressive microscopic surgical margins [16].

Studies focusing on DTs of the head, neck, and spine further support the assertion that complete tumor resection with negative margins improves the chances of recurrence-free survival [22-26] (Table 1). These studies show either statistical significance or a trend toward lower recurrence rates in patients with aggressive surgical margins for head, neck, and spine DTs compared to patients with positive surgical margins. While aggressive surgical resection may be limited by cosmesis or involvement of critical neurovascular structures, the decision for such aggressive resection could potentially be supported by the improved rate of recurrence.
Although the authors did not analyze head and neck DTs separately, Mullen, et al. found that fewer than half of patients with DTs from all body sites that underwent surgery with positive margins experienced recurrence, although they do advocate for wide excision with negative margins without compromising function [31]. In contrast, Wang, et al. found that negative surgical margins and absence of tumor invasion into major nerves and blood vessels were associated with low recurrence rates [32]. Although DTs do exhibit a tendency to recur in scar tissue, Garvey, et al. found that postoperative DT recurrence rates in both primary closures as well as complex reconstructions were similar, leading them to recommend against limiting the magnitude of reconstruction to reduce recurrence rates [33].

Use of radiation therapy in the treatment of DTs is controversial, as DTs are benign lesions, and radiation therapy carries a risk of adverse side effects such as fibrosis, cellulitis, neurological deficits, and secondary malignant transformation. One large retrospective study comparing the results of surgery and/or radiation therapy at all anatomic locations found that in cases of surgical resections with negative margins, the recurrence rate was not statistically significantly different between surgery alone and surgery with postoperative radiation therapy [18]. However, in cases of surgical resection with positive margins, postoperative radiation therapy improved local disease control from 46% to 78% (p = 0.0001) [18]. These results suggest that radiation therapy should not be given when aggressive surgical resection with negative margins is possible, but it may be indicated when only an incomplete surgical resection with positive margins is achievable.

Medical therapies for the treatment of DT, such as NSAIDs (including sulindac) and tamoxifen, have been utilized with mixed results in patients with inoperable tumors or who are not surgical candidates [34]. However, pharmacologic treatment of head, neck, and spine DTs has only been used infrequently, so robust data regarding its efficacy are lacking [7,34,35]. Although NSAID therapy has relatively low morbidity, there are currently no prospective randomized data to demonstrate that NSAIDs alone are efficacious for the treatment of DTs [36].

Use of Bone Morphogenic Protein (BMP) is generally cautioned in cases with a history of neoplasms. A retrospective study in 2013 described increased risk of benign tumors associated with BMP use in spinal fusion surgeries. In the study, the BMP-exposed group had a significantly higher incidence of benign tumors of the uterus (2.6% vs. 1.7%; P = 0.002), nervous system (0.81% vs. 0.31%; P < 0.001), and unspecified sites (0.38% vs. 0.14%; P = 0.009) [37]. Further reports evaluating cancer risks in BMP use have not observed the correlation, leaving the issue unresolved [38,39]. No current evidence exists on use of BMP in patients with desmoid tumors. The surgery described in our case report was performed in 2008. Intraoperative pathology showed no malignant cells. With the evidence available at the time, we felt it was safe to use bone morphogenic protein to spare the patient a secondary incision into the iliac crest. On followup, there is no evidence of any recurrence or new tumor formation as of June of 2016.

Conclusion

Although they are benign lesions with no metastatic potential, Desmoid Tumors (DTs) are locally invasive and fast growing. They have a high incidence of recurrence after surgical resection, with head and neck DTs exhibiting among the highest rates of recurrence, and are thus difficult to definitively cure. We have provided the eight year followup of a patient who underwent aggressive resection of a large cervical DT invading the paraspinal muscles and who remains recurrence-free.

Figure 1: Preoperative magnetic resonance image of the neck. A large desmoid tumor invading the left cervical paraspinal muscles is seen. Panel A) Sagittal T1 weighted image, fast spin echo, fat saturated, flow compensated, with gadolinium; Panel B) Axial T1 weighted image, fast spin echo, with gadolinium. Diameter A is 132.1 mm and diameter B is 83.1 mm; Panel C) Coronal T1 weighted image, spin echo, fat saturated, with gadolinium enhancement.

Figure 2: Postoperative magnetic resonance image of the cervical spine at eight year followup. No tumor recurrence is observed. Panel A) Sagittal T2 weighted MRI image; B) Axial T1 weighted fast spin echo with gadolinium MRI image.

Figure 3: Lateral cervical spine plain x-ray at eight years followup. Instrumentation for occiput to T1 fusion is demonstrated.

Table 1: Recurrence rates of head and neck desmoid tumors after complete and incomplete excisions. * P-value for difference in recurrence rate between patients with grossly complete and grossly incomplete resection. Abbreviations: pt: patients.
Bifidobacterium infantis Potentially Alleviates Shrimp Tropomyosin-Induced Allergy by Tolerogenic Dendritic Cell-Dependent Induction of Regulatory T Cells and Alterations in Gut Microbiota Shellfish is one of the major allergen sources worldwide, and tropomyosin (Tm) is the predominant allergic protein in shellfish. Probiotics has been appreciated for its beneficial effects on the host, including anti-allergic and anti-inflammatory effects, although the underlying mechanisms were not fully understood. In this study, oral administration of probiotic strain Bifidobacterium infantis 14.518 (Binf) effectively suppressed Tm-induced allergic response in a mouse model by both preventive and therapeutic strategies. Further results showed that Binf stimulated dendritic cells (DCs) maturation and CD103+ tolerogenic DCs accumulation in gut-associated lymphoid tissue, which subsequently induced regulatory T cells differentiation for suppressing Th2-biased response. We also found that Binf regulates the alterations of gut microbiota composition. Specifically, the increase of Dorea and decrease of Ralstonia is highly correlated with Th2/Treg ratio and may contribute to alleviating Tm-induced allergic responses. Our findings provide molecular insight into the application of Binf in alleviating food allergy and even gut immune homeostasis. The mucosal surface is the largest area of the body and covers several hundred square meters in an adult, forming the first line of defense as well as the primary route of exposure to antigens. Particularly, gut-associated lymphoid tissue (GALT), a part of mucosal immune system, is the major tissue responsible for allergic response stimulated through oral delivery. In the GALT, sundry immune cells and cytokines participate in and orchestrate the final immune responses such as anaphylactic symptoms or immune tolerance. In the process of food allergy, naïve T cells preferentially differentiate into T helper (Th) cells Th2, which further induce IgE-producing plasma cells, and finally in turn, resulting in mast cells degranulation and histamine releasing. Alternatively, in the process of immune tolerance, naïve T cells are mostly differentiated into Th1 cells so as to inhibit Th2 polarization, or differentiated into regulatory T cells (Tregs), which shut down the overall immune response to oral antigens (10). Additionally, it is believed that dendritic cells (DCs) play the critical role in the selection of the above immune responsive directions by determining the way of naïve T cells differentiation (11). CD103-expressing DCs (CD103 + DCs) are present at high frequency in the small intestine and migrate to the mesenteric lymph node (MLN) to initiate oral tolerance (12, 13). Probiotics are defined as live microorganisms that have a positive effect on the health of the host when administered in adequate amounts (14). Recently, experimental and clinical evidence have shown that probiotics can regulate host immune system or gut microbiota composition, resulting in the effective alleviation of allergic responses (15)(16)(17)(18)(19)(20)(21)(22)(23)(24). However, results of clinical studies on the efficacy of prophylactic or therapeutic treatments with different bacterial strains in the context of allergic sensitization have been conflicting (25)(26)(27), and the anti-allergic effects of probiotic bacteria are still not completely defined. 
Thus, many questions remain unanswered, such as which probiotic strains are the most effective in modulation of allergic responses and how orally administrated probiotics affect the systemic immune system (28). In the present work, we compared the anti-allergic effect of probiotic strain Bifidobacterium infantis 14.518 (Binf) in both prophylactic and therapeutic treatments. We also investigated the role of intestinal DCs in the generation of anti-allergic Tregs and in suppressing of Th2-biased allergic responses. Moreover, our seafood allergy animal model reflects that the change of gut microbiota composition is tightly correlated with Th2/Treg balance. Our results provide new evidence of probiotic strain Binf application in food allergy, and also reveal the important role for intestinal DCs and distinct microbiota composition on induction of Tregs after exposure to Tm in the regulation of mucosal Th2 responses to food antigens and intestinal allergic reactivity. MaTerials anD MeThODs ethics statement This study was carried out in strict accordance with the recommendations in the National Guide for the Care and Use of Laboratory Animals of China. All animal procedures were approved by the Zhejiang Gongshang University Laboratory Animal Welfare Ethics Review Committee. animals Six-to-eight-week-old female BALB/c mice were purchased from Laboratory Animal Center of Hangzhou Normal University (Hangzhou, China). Mice are housed in a room with a 12-h lightdark cycle at 22-24°C under specific pathogen-free conditions. Bacterial Preparation The potential probiotic strain of Bifidobacterium infantis 14.518 (Binf) that originally obtained from infant fecal samples was obtained from CGMCC (China General Microbiological Culture Collection Center). Binf was routinely grown in MRS medium at 37°C overnight. Bacterial cells were collected by centrifugation at 4,000 rpm for 10 min, suspended in aseptic phosphate-buffered saline (PBS) and adjusted to 2 × 10 9 CFU/ml. For in vitro experiments, the concentration of Binf was adjusted to 10 7 CFU/ml in PBS and heat treated at 60°C for 30 min to suppress uncontrollable bacterial growth in cell culture medium. shrimp Tropomyosin The preparation of Tm was carried out as described by Kunimoto et al. (29) with some modifications. Briefly, fresh whiteleg shrimp (Litopenaeus vannamei) was minced and homogenized in two volumes of extraction buffer A (50 mmol/l KC1, 2 mmol/l NaHCO3). The homogenate was centrifuged at 7,000 g for 10 min at 4°C. The precipitate was collected and suspended in two volumes of extraction buffer A. After four repeated cycles of homogenization and centrifugation, the resulting precipitate was dissolved in acetone and passed through a gauze filter. The precipitate was washed with four volumes of acetone and dried overnight at room temperature. The powder obtained was suspended in extraction buffer B (20 mmol/l Tris-Cl, pH 7.5; 1 mol/l KCl; 0.1 mmol/l dithiothreitol) and stirred for 5 days at 4°C. The extract was treated with boiling water bath for 10 min and clarified by centrifugation at 7,000 g for 10 min at 4°C. The supernatant was partially purified by ammonium sulfate fractionation at 30% saturation. The pellet was collected by centrifugation at 7,000 g for 10 min at 4°C and dissolved in PBS. The protein concentration was determined by using a bicinchoninic acid assay (BCA) kit (Pierce Biotechnology Inc., Rockford, IL, USA) and the purity was analyzed by SDS-PAGE ( Figure S1 in Supplementary Material). 
immunization Protocols For therapeutic treatment group, mice (n = 8) were intraperitoneally immunized with PBS containing purified Tm (100 µg/ mouse) together with equal volume of complete Freund's adjuvant (CFA) (Sigma, St. Louis, MO, USA) on days 7 and with Tm (100 µg/mouse) together with equal volume of incomplete Freund's adjuvant (IFA) (Sigma, St. Louis, MO, USA) on days 14, 21, and 28, and challenged twice with Tm (600 µg/mouse) and equal volume of IFA on days 35 and 42. From days 45 to 65, mice were daily administered with 500 µl/mouse Binf (2 × 10 9 CFU/ml) in PBS by gavage. For preventive treatment group, mice were daily administered with 500 µl/mouse Binf (2 × 10 9 CFU/ml) from days 7 to 27, sensitized on day 30 with Tm (100 µg/mouse) together with equal volume of CFA, and on days 37, 44, and 51 with Tm (100 µg/mouse) together with equal volume of IFA, followed by challenging with Tm (600 µg/mouse) and equal volume of IFA on days 58 and 65. In either therapeutic group or preventive group, a corresponding positive control (n = 8) that given aseptic PBS instead of Binf was performed. In addition, a shared negative control (n = 8) for both therapeutic group and preventive group that given intraperitoneally injection of aseptic PBS (together with Freund's adjuvant) instead of Tm by following the preventive protocol was performed and regarded as the unimmunized group. On day 67, mice were sacrificed for immunological analysis. The efficiency of the model was assessed by allergy symptom score (30). The protocol was illustrated in Figure 1. serum ig and histamine analysis The blood were taken from retro-orbital plexus on days 7, 28, 42, and 65 for therapeutic treatment group and days 7, 27, 51, and 65 for preventive treatment group, centrifuged at 3,500 g for 10 min at 4°C, and the serum was collected and frozen at −80°C. Levels of serum Tm-specific IgE, IgG2a, and IgG1 were measured by ELISA as previously described with some modifications (31). In brief, serum samples were 1/20 diluted, and the secondary antibodies used in the ELISA tests were HRP-conjugated rat anti-mouse IgE, IgG2a, or IgG1 (Southern Biotechnology Associates, Birmingham, AL, USA). All the secondary antibodies were 1/6,000 diluted. After the HRP substrate TMB (eBioscience, San Diego, CA, USA) added, the absorbance was determined at 450 nm. Results were expressed as optical densities at 490 nm (OD490). Serum histamine was measured in 1/100 diluted serum samples with a commercial kit (Baomanbio, Shanghai, China) according to the manufacturer's instructions, with a detection limit of 0.1 ng/ml. Bacterial Dna isolation from Mouse Feces Fresh fecal samples were collected immediately upon defecation under sterile condition in a 2-ml tube and stored at −80°C until processing for DNA isolation. Samples collected just before sacrifice (days 65 and 67 as duplication) were used for microbial analyses. Fecal samples from different individuals of the same group were combined (n = 8), then the DNA was isolated using a QIAamp Fast DNA Stool Mini kit (QIAgen, Valencia, CA, USA) according to the manufacturer's instructions and then stored at −80°C until further use. DNA concentration was determined using a Nanodrop ND-1000 spectrophotometer (Thermo Fisher Scientific, San Jose, CA, USA), and the A260/A280 ratio between 1.8 and 2.0 was considered as a criterion for quality control. 
Metagenomic sequencing and Bioinformatic analyses Isolated stool DNA was handled as previously reported (32) and sequenced by Sangon Biotech (Sangon Biotech, Shanghai, China). Briefly, amplification of the genomic DNA was performed using barcoded primers, which targeted the V3 to V4 region of the bacterial 16S rRNA gene. Fast length adjustment of short reads was used to merge overlapping paired-end Miseq fastq files (33). Sequences with mismatches in the overlapping region were discarded. The output fastq file was then analyzed by software PRINSEQ (34), and then chimeric reads were filtered using chimera.uchime (35). Reads were clustered into operational taxonomic units (OTU) using Uclust (36) at 97% pairwise identity threshold. Taxonomies were assigned to the representative sequence of each OTU using RDP classifier (37). Preparation of lymphocytes Spleens, Peyer's patches (PPs) and MLNs were collected on sacrifice under sterile conditions. Single cell suspensions were prepared from spleen, PP, and MLN by pressing a piston and through a cell strainer, and the collected cells were washed with PBS. To isolate splenocytes, red blood cells were removed by RBC lysis buffer (Beyotime, Jiangsu, China). Then the lymphocytes and splenocytes were used as starting material for further analyses including RT-qPCR, ELISA, and flow cytometry. isolation of cD11c + Dc and naïve cD4 + T cell Flow cytometry Six-to-eight-week-old female BALB/c mice were given Binf by intragastric administration with a dose of 10 9 CFU/mouse for 20 days consecutively. The control mice were daily administered with equal volume of PBS. The positive control group mice were immunized four times and challenged twice as previously described. All the mice were killed by cervical dislocation. Spleen and MLN cell suspension were obtained as previously described. CD4 + cells were enriched from splenocytes with Mouse CD4 + T cell Isolation Kit (StemCell Technologies, Vancouver, BC, Canada), the enriched CD4 + fraction was subjected to cell sorting with FACSAria™ cell sorter (BD Bioscience, San Jose, CA, USA) to isolate CD4 + CD62L high CD44 low naïve T cells, and the CD11c + DC was isolated from cell suspensions of MLN in the same way (mouse DC Isolation Kit followed by cell sorting). Flow cytometry showed more than 99% purity of all the isolated cells. The antibodies used were anti-CD4-PE-Cyanine7, anti-CD62L-FITC, anti-CD44-APC and anti-CD11c-FITC (all from eBioscience, San Diego, CA, USA). statistical analysis The Student's t-test or ANOVA followed by Tukey's post hoc test were used when appropriate for assessing the distribution of data. All results were expressed as the mean ± S.D. The SPSS 18.0 statistical software package (SPSS Inc., Chicago, IL, USA) was used for data analysis. P value of less than 0.05 was considered as significance. To dig out the most relevant bacterial genera in Binf treatment, each of the five experimental groups (control, Tm [therapeutic], Tm + Binf [therapeutic], Tm [preventive] and Tm + Binf [preventive]) were positioned on the coordinate system, with specific bacterial genus proportion acting as Y value (the genera with a proportion of 0 were deleted) and Th2/Treg ratio acting as X value, then regular linear regression analysis was performed and the coefficient of correlation (R 2 ) was calculated. Based on the R 2 , all the bacterial genera were arranged, and genera with the biggest R 2 were considered as the most relevant ones (Table S2 in Supplementary Material). 
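As a sketch of this screening step, the fragment below fits an ordinary least-squares line of one genus's proportion against the Th2/Treg ratio across the five groups and reports the coefficient of determination; the numeric values are invented placeholders rather than data from this study, and SciPy is assumed to be available.

```python
import numpy as np
from scipy import stats

# Placeholder values for the five experimental groups (control, Tm and Tm + Binf
# in the therapeutic arm, Tm and Tm + Binf in the preventive arm); not real data.
th2_treg_ratio = np.array([0.8, 2.4, 1.2, 2.1, 1.1])               # X values, one per group
genus_proportion = np.array([0.030, 0.008, 0.024, 0.011, 0.026])   # Y values for one genus

def genus_r_squared(x, y):
    """R^2 of the ordinary least-squares regression of genus proportion on Th2/Treg ratio."""
    fit = stats.linregress(x, y)
    return fit.rvalue ** 2

print(f"R^2 = {genus_r_squared(th2_treg_ratio, genus_proportion):.2f}")
# Ranking all genera with non-zero proportions by this R^2 mirrors the screening
# described above; the sign of the fitted slope then distinguishes negatively
# correlated genera from positively correlated ones.
```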
Binf suppresses Tm-induced allergic responses In the initial study, we evaluated the effect of probiotic Binf in modulating Tm-induced allergic responses in a mouse model (Figure 1). The treatment was divided into two groups: the therapeutic group that applied Binf after Tm stimulation ( Figure 1A) and the preventive group that applied Binf before Tm stimulation ( Figure 1B). Tm-treated mice showed strong allergic symptom, indicating the validity of the model (Figure 1C). Compared to the control, Tm-sensitized mice produced high levels of serum histamine, Tm-specific IgE, IgG2a, and IgG1 over the time-course of the experiment (Figure 2). At the end of entire experimental Histamine (a,B), Tm-specific IgE (c,D), Tm-specific IgG2a (e,F), and Tm-specific IgG1 (g,h) were measured. P values determined by one-way ANOVA followed by Tukey's post hoc test. CON, control group; Tm, Tm-treated group; Tm + Binf, Binf intervened group; NS, not significant. period (day 65), the levels of histamine (p < 0.05) and Tm-specific IgE (p < 0.001) were significantly lower in mice supplemented with oral administration of Binf in both therapeutic and preventive ways compared with Tm stimulation group (Figures 2A-D), while no significant difference found in the level of Tm-specific IgG2a and IgG1 between Tm + Binf and Tm groups (Figures 2E-H). Since the production of histamine and specific IgE are typical allergic responses, these results indicated that oral administration of Binf suppresses Tm-induced allergy in both preventive and therapeutic ways. Binf increases Tregs for Balancing Th2/Treg To test whether Binf can induce Treg cells and affect Th2/ Treg balance in vitro, we investigated the proportion of Th2 (CD4 + CD69 + ST2 + ) and Treg (CD4 + CD25 + CD127 low/− ) cells in spleen and MLN whole CD4 + T cell population by flow cytometry. As shown in Figure 3 and Figure S2 in Supplementary Material, in both therapeutic and preventive groups, administration of Binf significantly increased the proportion of Treg in spleen and MLN upon Tm challenge ( Figure 3A). By contrast, the proportion of Th2 significantly decreased after Binf treatment ( Figure 3B). Furthermore, to eliminate the interference of total CD4 + T cells proliferation, we calculated the ratio of Th2/Treg, which can counteract the variation of the whole CD4 + T cell population and emphasize the relevant abundance of allergic Th2 and anti-allergic Treg. The results showed that in most cases, Tm challenge increased Th2/Treg ratio, and Binf treatment significantly decreased that ratio and maintained Th2/ Treg balance ( Figure 3C). However, no significant changes of Th2/Treg ratio in spleen of the preventive group were observed, indicating the less important role of Th2 and Treg in the situation. Overall, these results demonstrated that Binf promotes the induction of Tregs and balance Th2/Treg for suppressing Th2 responses in Tm-sensitized mice. Binf Promotes Dcs Maturation and Tolerogenic Dcs accumulation Based on the expression of cell surface markers, abundant subsets of DCs have been described (38,39). Interestingly, the functions of different DC subsets on the stimulation of T cells are tissue dependent (40)(41)(42). 
To explore the role of Binf in activating functional DCs, BALB/c mice were treated with probiotic Binf by intragastric administration at a dose of 10 9 CFU/mouse for consecutive 20 days, and the percentage of CD80 + , CD86 + , MHC-II + , and CD103 + cells with CD11c + DC marker in mononuclear cell population were measured by flow cytometry (Figure 4; Figure S3 in Supplementary Material). The intake of Binf significantly increased the percentages of CD80 + (Figure 4A), CD86 + (Figure 4B), and MHC-II + (Figure 4C) DCs in PP and MLN, indicating the role of Binf in maturation of DCs in GALT. However, the changes of different DC subsets in spleen by Binf treatment were inconsistent, suggesting the less efficient role of Binf in peripheral immune system. In addition, Binf largely increased CD103 + DCs population in PP and MLN, while in contrast, decreased that in spleen ( Figure 4D). On the basis of these observations, we concluded that Binf promotes DCs maturation and tolerogenic CD103 + DCs accumulation in GALT, which is involved in the regulation of Treg and Th cell differentiation, but Binf effect on spleen DCs is mild and needs far more investigations. Binf stimulates Dcs on T cell Differentiation In Vitro To investigate the role of stimulated MLN DCs in the differentiation of naïve T cells into other CD4 + T-cell subsets, naïve CD4 + T cells were co-cultured with CD11c + DCs isolated from MLNs of control, Tm-treated or Binf-treated mice in the presence of specific stimulators (IL-12 and anti-IL-4 antibody for Th1 polarization; IL-4 and anti-IL-12 antibody for Th2 polarization; TGF-β1, IL-6, anti-IFN-γ, and anti-IL-4 antibodies for Th17 polarization; TGF-β1 and IL-2 for Treg polarization). The mRNA levels of the typical cytokine and master regulator genes of different T cell subsets were measured by RT-qPCR ( Figure 5). Tropomyosin challenge significantly downregulated the mRNA expression of Th1-and Treg-typical marker genes, while upregulated the Th2-and Th17-typical ones, indicating inflammation and allergic responses were provoked. Notably, Binf significantly promoted the mRNA expression of IL-10, TGF-β, and Foxp3 under Treg-polarizing conditions, demonstrating that Binf preferentially induces Treg cell differentiation in the presence of mature DCs. Dcs are required for Binf-specific induction of Tregs To investigate the role of DCs in T cell differentiation by Binf, we replaced DCs with anti-CD3 and anti-CD28 antibodies to mimic DCs costimulation signals in the in vitro T cell differentiation assay. In the presence of Binf, the mRNA expression of T-bet (Th1 master regulator) significantly enhanced, while that of GATA3 and RORγt, master regulators of Th2 and Th17, respectively, significantly reduced (Figure 6). Interestingly, treatment with Binf largely lowered the Th1 cytokines (IL-2 and IFN-γ) at both transcriptional and translational levels ( Figure 6A). In addition, Binf showed a moderate effect on the expression of Th2 cytokines due to a significant decrease of IL-4 but not IL-13 observed (Figure 6B), whereas no effect of Binf on Th17 cytokines (IL-6, IL-17, and IL-23) expression can be found ( Figure 6C). Importantly, treatment with Binf did not have any effect on the expression of IL-10, TGF-β, and Foxp3 at both transcriptional and translational levels in the absence of DCs (Figure 6D), indicating that the presence of DCs is a prerequisite for Treg cell induction by Binf. 
Binf Modulates Fecal Microbiota composition in Tm-sensitized Mice We further performed metagenomic analysis of fecal microbiota composition with or without Binf treatment under Tm challenge to explore the effect of Binf on alterations of gut microbiota, which in turn might affect the induction of Tregs. As shown in Figure 7A, in the therapeutic group, measurements of ecological metrics revealed that Tm diminished the richness of fecal microbiota in mice, but that could be partially restored by Binf administration. However, in the preventive group, Binf did not have such a significant effect on improving richness of commensal microbes. Unexpectedly, preventive administration of Binf without Tm challenge reduced the richness of fecal microbiota compared to the control, while that with Tm challenge (Binf + Tm) compromised the loss of richness caused by Tm sensitization (PBS + Tm) ( Figure 7A). Consistently, the effect of Binf on the diversity of gut microbiota showed identically as that on the richness (Figure 7B). Furthermore, we also analyzed gut microbiota at various taxonomic levels to investigate the role of Binf in the distribution change of predominant bacterial species. At the genera scale, Binf relieved Tm sensitization-caused proportion imbalance of Odoribacter and Bacteroides in the therapeutic group ( Figure 7C; Figure S4A in Supplementary Material). In the preventive group, two genera that are designated as Barnesiella and Parabacteroides can be regulated by the intake of Binf ( Figure 7D; Figure S4A in Supplementary Material). Specifically, Binf significantly decreased Barnesiella proportion directly, but the effect under Tm treatment is slight. By contrast, the proportion of Parabacteroides decreased by Tm sensitization and can be restored by Binf. At the phyla level, we found that the therapeutic intake of Binf could balance the ratio of Bacteroidetes vs. Firmicutes disordered by Tm challenge. However, direct intake of Binf will decrease the Firmicutes/Bacteroidetes ratio and thus deteriorate the imbalance induced by Tm challenge in preventive administration ( Figures S4B,C in Supplementary Material). These data demonstrated the importance of gut microbiota in mediating the alleviation of food allergy by Binf. In addition to assess the effect of Tm and Binf on the predominant microbiota bacterial species, we also tried to reveal the genera that most significantly correlated with Tm-induced allergy and Binf intervention. By using linear regression analysis, we found Dorea and Ralstonia were tightly correlated with the lymphocytes pattern (Th2/Treg ratio) (Figures 7E,F; Table S2 in Supplementary Material), which had the coefficient of determination (R 2 ) of 0.82 and 0.77, respectively. Interestingly, Dorea and Ralstonia showed the opposite correlation with the lymphocytes pattern: Th2/Treg ratio was negatively correlated with Dorea, but positively correlated with Ralstonia, when compared within each group in both therapeutic and preventive ways. Therefore, we propose that Binf probably alleviates Tm-caused allergic response through the increase of Dorea and decrease of Ralstonia, which is involved in Treg cell differentiation and balancing Th2/Treg ratio. But more studies are needed to fully reveal the exact role of these bacteria in modulating allergy responses, the detailed relationship between Dorea, Ralstonia, and T cell differentiation, and the underlying mechanisms. 
Discussion

Although probiotic administration has been shown to be an effective strategy for the prevention of allergic sensitization in experimental and clinical studies, little information is available on the mechanisms of action. Moreover, the effects of long-term probiotic supplementation differ considerably between species and even strains. Therefore, in the present work, we investigated the effect of Binf in alleviating Tm-induced allergy and clarified the possible underlying mechanism through in vivo and in vitro studies.

We first set up a mouse model of Tm-induced allergy. When the mice were treated with the probiotic strain Binf, the production of histamine and Tm-specific IgE, but not IgG2a or IgG1, was reduced. Because IgE is Th2 associated and is the major antibody in allergic reactions, while IgG2a and IgG1 are Th1 associated and do not participate in allergic stimulation, the inhibition of IgE but not IgG2a or IgG1 demonstrates the specific anti-allergy activity of Binf. In addition, we also found that Binf increased the CD4+CD25+CD127low/− Treg proportion, balancing the Th2/Treg ratio in this mouse model. CD4+CD25+CD127low/− Tregs have been reported to be the "real" natural Tregs, with the highest expression level of Foxp3 and the strongest inhibitory effect on responder T cells (43). Therefore, the effect of Binf in inhibiting allergic reactions in this study is attributable to the induction of CD4+CD25+CD127low/− Tregs with a strong capacity to suppress Th2-predominant and even other inflammatory responses.

Furthermore, by co-culture of CD11c+ DCs and naïve T cells, we found that Tm significantly increased the Th2- and Th17-regulated responses while decreasing the Th1- and Treg-regulated responses, demonstrating the strong pro-allergic activity of Tm in this in vitro co-culture cell model. We also found that Binf mainly induces Tregs but not the other cell types. By contrast, in the absence of DCs, the induction of Tregs by Binf was no longer observed, indicating that this process is DC dependent. Correspondingly, in the mouse model, our results showed that Binf promotes DC maturation and CD103+ DC accumulation in GALT, and CD103+ DCs in GALT have been reported to mediate the conversion of naïve T cells into Tregs (44). Collectively, these data indicate that Binf promotes Treg differentiation by inducing tolerogenic DCs. However, more studies are needed to elucidate the underlying mechanism of the maturation and accumulation of DCs induced by Binf. Moreover, in the presence of Binf but the absence of DCs, the levels of the master regulator genes of Th1 (T-bet), Th2 (GATA3), and Th17 (RORγt) changed significantly (T-bet was upregulated while GATA3 and RORγt were downregulated); however, the levels of the T-cell response-associated effector genes (cytokines) did not change consistently. It is known that these master regulator genes are responsible for the differentiation of functional T cell subsets from naïve T cells, and that the cytokines are produced by the differentiated T cells (45). Therefore, we propose that Binf could directly regulate the differentiation of Th1, Th2, and Th17 cells, but we do not exclude the possibility that the subsequent T cell functions are controlled by other factors and do not fully rely on Binf stimulation. Published data have demonstrated that the gut microbiota is tightly correlated with allergic diseases (17,46).
Consistently, we also found that Binf can modulate the gut microbiota composition, which may further affect immune responses in Tm sensitization. Different bacterial genera or phyla showed different patterns under Tm challenge or Binf administration, suggesting different roles for these bacteria in allergic and immune responses. In particular, the genus Dorea was tightly and negatively correlated with the Th2/Treg ratio, suggesting that Dorea may contribute to the induction of Tregs and suppress the Tm-induced allergic reaction; in sharp contrast, the genus Ralstonia was positively correlated with the Th2/Treg ratio and thus might inhibit Treg differentiation and promote Tm-induced allergy. Consistently, previous research has also highlighted the roles of Dorea and Ralstonia in food allergy and immunity. A 3-year follow-up study showed that Dorea is reduced in the intestinal microbiomes of infants who later develop food sensitization or food allergy, suggesting that Dorea may protect against food sensitization and food allergy (47). On the contrary, Ralstonia was considered to be proinflammatory in Parkinson's disease (48) and was also relatively more abundant in asthmatic compared with non-asthmatic chronic rhinosinusitis patients (49), implying an activity of Ralstonia in disrupting the host immune system. However, these studies, as well as our results, only reveal the relationship between these bacteria and T cell populations, and the underlying causal relationship requires further investigation. Notably, Binf supplementation did not increase the proportion of Binf itself but altered the composition of other bacteria in the gut microbiota, indicating an indirect function of Binf in modulating the gut microbiota composition. Based on this observation, we conclude that Binf directly promotes DCs and Tregs or indirectly modulates the gut microbiota composition to extinguish allergic responses. Thus, intestinal DCs, Tregs, and the bacterial genus Dorea might be useful for developing novel therapeutic strategies for Tm-induced allergy and even other kinds of hypersensitivity.

Taken together, our results show that oral administration of Binf suppresses the Tm-induced allergic response in a mouse model through several possible routes, as follows: on the one hand, Binf promotes DC maturation and tolerogenic CD103+ DC accumulation in GALT, which further modulates T cell differentiation, especially inducing Tregs to suppress the Th2 response; on the other hand, Binf regulates alterations of the gut microbiota composition, especially the increase of Dorea and decrease of Ralstonia, which is also involved in Th2/Treg balancing. Finally, immune tolerance is triggered, and mucosal Th2 responses to food antigens and intestinal allergic reactivity are inhibited (Figure 8). Our findings not only link probiotic Binf to the commensal microbiota population and the induction of functional Treg cells in the mucosa, but also provide molecular insight into the application of Binf in alleviating food allergy and even maintaining gut immune homeostasis.

Ethics Statement

This study was carried out in strict accordance with the recommendations in the National Guide for the Care and Use of Laboratory Animals of China. All animal procedures were approved by the Zhejiang Gongshang University Laboratory Animal Welfare Ethics Review Committee.
Author Contributions

LF, JS, and YW conceptualized the study; LF and YW drafted the work and revised it critically for important intellectual content; LF, JS, CW, and SF acquired and analyzed the data and drafted the manuscript; LF and YW revised the manuscript critically and provided overall supervision.
Outpatient primary and tertiary healthcare utilisation among public rental housing residents in Singapore

Background: Globally, public housing is utilized to provide affordable housing for low-income households. Studies have shown an association between public housing and negative health outcomes. There is a paucity of data pertaining to outpatient primary and tertiary healthcare resource utilization among public rental housing residents in Singapore.

Methods: A retrospective cohort study was performed, involving patients under the care of the SingHealth Regional Health System (SHRS) in Year 2012. Healthcare utilization outcomes evaluated included the number of outpatient primary and specialist care clinic visits, emergency department visits and hospitalizations in Year 2011. Multivariate logistic analyses were used to examine the association between public rental housing and healthcare utilization.

Results: Of 147,105 patients, 10,400 (7.1%) stayed in public rental housing. There were more elderly (54.8 ± 18.0 vs 49.8 ± 17.1, p < 0.001) and male patients [5279 (50.8%) vs 56,892 (41.6%), p < 0.001] residing in public rental housing. Co-morbidities such as hypertension and hyperlipidemia were more prevalent among public rental housing patients (p < 0.05). After adjustment for covariates, public rental housing was not associated with frequent outpatient primary care clinic or specialist outpatient clinic attendances (p > 0.05). However, it was associated with an increased number of emergency department visits (OR: 2.41, 95% CI: 2.12–2.74) and frequent hospitalization (OR: 1.56, 95% CI: 1.33–1.83).

Conclusion: Residing in public rental housing was not associated with increased utilization of outpatient healthcare resources despite patients' higher disease burden and frequency of emergency department visits and hospitalizations. Further research is required to elucidate their health-seeking behaviours.

Electronic supplementary material: The online version of this article (10.1186/s12913-019-4047-8) contains supplementary material, which is available to authorized users.

Background

Low socio-economic status (SES) has been shown to be associated with an increased risk of illnesses and co-morbidities [1]. In the United States, low income and education level have been found to predict increased risk of cardiovascular disease and mortality [2]. Low SES has also been demonstrated to influence patterns of utilization of healthcare services. While some studies found that lower SES groups encounter difficulties with regard to healthcare access [3,4], they were shown to have higher healthcare utilization in countries where universal healthcare coverage is provided [5,6]. A study by Dani F. et al. found that people with lower SES tended to have more frequent emergency department visits and hospital admissions.

There exists a multitude of measures of SES, which include factors such as education level, household income and occupation [7]. During routine clinical care, time may not permit obtaining these details, which results in incomplete data [6]. Additionally, these data are not comprehensively captured at a population level. Housing type, which is available from the patient's home address and easily retrievable from electronic health records, may provide a more convenient measure of SES for physicians to screen patients. Globally, public housing is utilized to provide affordable housing for low-income households [8,9].
In Singapore, home ownership for the majority of its 5.6 million citizens is achieved through public housing [10]. To help the lowest-income (≤USD$1123 per month) households cope with living costs, one- or two-room public flats are made available for rental from the government at heavily subsidized rates. These public rental housing flats are organized into blocks, and clusters of public rental housing blocks are located with public housing blocks of various types to promote neighbourhood social cohesion. However, studies have shown that residence in public housing is associated with negative health outcomes [11,12]. Home ownership has been shown to have an inverse relationship with mortality [13]. In the HOPE VI panel study, public housing residents were found to have a two-fold higher likelihood of developing hypertension and hyperlipidaemia [14]. Another study showed that public housing is linked with obesity and poorer health status of mothers [11].

The significance of primary care and its contribution to a nation's healthcare system is becoming increasingly recognized. Primary care is defined as essential healthcare made universally accessible to individuals in the community at an affordable cost that allows a continuing healthcare process [15]. As primary care serves as the first line of care for most patients, the extent of utilization of primary care resources often reflects the population's general health status and healthcare needs. Studies have also shown that people living in areas with more primary care physicians tend to have better health outcomes and that individuals' utilization of primary care is associated with better health [16].

In Singapore, approximately 70-80% of overall healthcare demand is addressed by the public sector [17]. Healthcare financing in Singapore primarily comprises government subsidies and three flagship programmes, namely Medisave, Medishield Life and Medifund [18,19]. Every working Singaporean citizen contributes a proportion of their monthly salary to Medisave, a mandatory, government-enforced medical savings account which pays for major healthcare expenditures such as hospitalization [19]. In contrast, Medishield Life is an automatically opted-in health insurance scheme which is used to subsidize high-cost hospitalizations [19]. Lastly, Medifund is a means-tested social welfare programme designed as a safety net to fund the healthcare costs of the poorest citizens in the country, of whom a significant proportion reside in public rental housing [19]. Therefore, out-of-pocket costs are expected to be minimal or nil for many residents living in public rental housing.

A review by Chan et al. on the health-seeking behaviour of public rental housing residents found that they had lower participation in health screening and preferred alternative medicine practitioners to Western-trained doctors for primary care [20]. It is possible that many public rental housing residents may neglect health and primary healthcare due to conflicting life priorities, resulting in over-utilization of specialist and emergency care services at more advanced disease states. Overall, the delivery of primary healthcare services locally is shared between private general practitioner (GP) clinics and public outpatient primary care clinics (polyclinics).
While the majority (80%) of primary care is provided by GPs in the private sector, the polyclinics in each public regional health system play an important role in the management and follow-up of 80% of patients with chronic diseases [21]. Patients with medical conditions requiring specialist care are referred to outpatient specialist clinics located in tertiary centres. Although previous studies have examined the utilization of tertiary healthcare resources such as hospital services among public rental housing residents [6], no study has examined their utilization of outpatient primary and tertiary healthcare resources in Singapore. As such, this study aims to examine the utilization of outpatient primary and tertiary healthcare resources among public rental housing patients.

Methods

A retrospective cohort study was conducted involving all adult patients who were under the medical care of the SingHealth Regional Health System (SRHS) in Year 2012. Among the regional health systems in Singapore, SRHS is the largest cluster and is responsible for the provision of healthcare services to residents in South-Central Singapore. We excluded patients who stayed in non-SRHS residential areas, as they would fall under the purview of a different regional health system. Non-citizens were also excluded, as the likelihood of them being under long-term medical care from SRHS is low. Approval from the Singapore Health Services Centralized Institutional Review Board (CIRB 2016/2294) was obtained prior to the study's initiation.

SRHS electronic medical records were utilized to extract patients' socio-demographic and clinical details. This information included the patient's age, gender, ethnicity and type of residential housing. Major co-morbidities such as diabetes mellitus, hypertension and renal disease, which are listed in the Charlson and Elixhauser comorbidity indices, were also collected [22].

The primary outcomes in this study were the number of primary care clinic visits and specialist clinic visits that each patient had in Year 2011. Secondary outcomes examined included the number of emergency department visits and hospital admissions for each patient in the past 1 year from the date of inclusion. In this study, frequent primary care outpatient clinic attendance and frequent outpatient specialist clinic attendance were defined as ≥4 visits and ≥3 visits per year, respectively [6]. Frequent emergency department visits and frequent hospital admissions were defined as ≥4 visits and ≥3 admissions per year, respectively [6,23-26]. The cut-offs for frequent primary care outpatient clinic visits, outpatient specialist clinic visits and hospital admissions were determined by expert consensus across the major regional health systems in Singapore [23]. The threshold of ≥4 visits for frequent emergency department attendance was set as per a study performed by Locker et al., who found that >99% of chance attenders present at the A&E on <4 occasions per year, in contrast to true frequent attenders [24].

Statistical analyses

Statistical analyses were performed using SPSS version 23 (SPSS Inc., Chicago, IL, USA). Differences in the characteristics of patients who did and did not stay in public rental housing were assessed using Student's t-test and the Chi-square test, where appropriate.
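The modelling itself was run in SPSS. Purely as an illustration of how the outcome definitions above translate into an adjusted model, the sketch below derives the "frequent emergency department use" flag (≥4 visits per year) from raw visit counts on a small synthetic cohort, fits a logistic regression, and converts the coefficients to odds ratios with 95% confidence intervals. All variable names, coefficients and numbers are placeholders, not study data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000  # synthetic cohort, illustrative only

df = pd.DataFrame({
    "age": rng.normal(50, 17, n).round(),
    "male": rng.integers(0, 2, n).astype(float),
    "rental_housing": rng.binomial(1, 0.07, n).astype(float),  # ~7% of cohort
})
# Simulated yearly ED visit counts, with a higher rate for rental-housing patients.
lam = np.exp(0.3 + 0.9 * df["rental_housing"] + 0.01 * (df["age"] - 50))
df["ed_visits"] = rng.poisson(lam)

# Outcome definition used in the study: frequent ED use = >=4 visits per year.
df["frequent_ed"] = (df["ed_visits"] >= 4).astype(int)

X = sm.add_constant(df[["age", "male", "rental_housing"]])
fit = sm.Logit(df["frequent_ed"], X).fit(disp=0)

# Odds ratios and 95% CIs: OR = exp(beta), CI bounds = exp(beta +/- 1.96*SE)
ci = fit.conf_int()
or_table = pd.DataFrame({"OR": np.exp(fit.params),
                         "CI_low": np.exp(ci[0]),
                         "CI_high": np.exp(ci[1])})
print(or_table.round(2))
```

In the actual study the covariate set is much larger (demographics plus the listed co-morbidities), but the odds-ratio arithmetic is the same.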
Univariate analyses were also performed to evaluate whether there were differences in socio-demographic and clinical characteristics, as well as public rental housing residence status, between patients with higher and lower primary and tertiary healthcare utilization. Thereafter, variables with a p-value < 0.05 were entered into the multivariate logistic regression model. A two-tailed p-value of < 0.05 was considered statistically significant.

Results

Figure 1 shows the flowchart for patient inclusion in the study. A total of 147,105 patients were included, of whom 10,400 (7.1%) stayed in public rental housing. Table 1 details the anthropometric and clinical characteristics of patients who did and did not stay in public rental flats, as well as their utilization of healthcare resources. The mean age of patients was 50.2 ± 17.2 years and the majority of patients were female (84,934, 57.7%). Compared with patients who did not stay in public rental housing, patients who stayed in public rental housing were older (54.8 ± 18.0 vs 49.8 ± 17.1, p < 0.001). In addition, there were more male [5279 (50.8%) vs 56,892 (41.6%), p < 0.001] but fewer Chinese [6367 (61.3%) vs 109,089 (79.8%), p < 0.001] patients staying in public rental flats relative to those not staying in public rental flats. The prevalence of most co-morbidities, such as diabetes, hypertension and hyperlipidemia, was higher among patients staying in public rental flats (p < 0.05). However, there were no significant differences in the rates of hyperthyroidism, hypothyroidism, bipolar disease, anxiety, or non-metastatic and metastatic malignancy between the two groups of patients (p > 0.05).

The attendance rates at polyclinics and hospital admissions were higher among patients who stayed in public rental housing (p ≤ 0.001). In contrast, patients who did not stay in public rental housing had a higher number of outpatient specialist clinic visits (2.53 ± 5.66 vs 2.16 ± 5.47, p < 0.001). The univariate analyses of differences in the characteristics of patients with more and less frequent primary care outpatient clinic, outpatient specialist clinic and emergency department visits and hospital admissions are reported in more detail in Additional files 1, 2, 3 and 4 (Annexes A, B, C and D, respectively).

Table 2 shows the results of the multivariate analyses of public rental housing on healthcare utilization. After adjustment for socio-demographic and clinical covariates, public rental housing was associated with increased emergency department visits (OR: 2.41, 95% CI: 2.12-2.74) and frequent hospitalization (OR: 1.56, 95% CI: 1.33-1.83) but lower utilization of specialist outpatient clinics (OR: 0.83, 95% CI: 0.79-0.87). However, public rental housing was not associated with frequent outpatient primary care clinic attendance (OR: 1.048, 95% CI: 0.993-1.107).

Discussion

In this study, residence in public rental housing was not associated with increased utilization of outpatient primary and tertiary healthcare resources despite these patients' higher disease burden and increased number of emergency department visits and hospital admissions. Studies examining primary care utilization by SES have generally shown mixed results, with equitable distribution observed across SES groups in some studies while other studies found increased usage of primary care services among low SES groups [27-29].
Increased primary healthcare utilization has been linked to better health outcomes [16], and it remains unclear whether patients residing in public rental housing are utilizing primary healthcare resources optimally. This is especially of concern as they were found to have higher numbers of hospital admissions and emergency department visits. Research has shown that there are many barriers to primary healthcare service utilization among lower SES groups. These barriers can largely be divided into categories, namely population characteristics, patients' cultural norms and values, and healthcare system-related factors [30].

One of the cultural reasons that may compromise utilization of primary healthcare services is the perceived superiority of alternative medicine. A study showed that Western medicine was less preferred among lower-income patients seeking primary healthcare, with only 11.1% preferring Western-trained physicians while 52.6% and 29.5% of patients preferred alternative medicine and self-reliance, respectively [31]. The strong belief in self-reliance reflects a mindset that illnesses can get better without professional help and could impede early detection and treatment of diseases. Other patient- and culture-related factors which may contribute to low primary healthcare utilization include misconceptions about health and financial costs [29,32]. A study conducted among public rental housing residents found that they were more likely to seek medical attention only when symptoms such as pain manifest [33]. Commonly cited health-system-related reasons include lack of access to healthcare and long clinic waiting times [34]. It is noteworthy that a study by the English National Health Service found a 43-day difference in waiting time for non-coronary revascularization procedures between patients of different SES [35]. Locally, barriers that public rental housing residents commonly face for subsidized specialist care include the need to obtain referral letters from primary care physicians in public healthcare facilities and long waiting times, which can span three to six months [36].

It was interesting to note that public rental housing patients had lower utilization of specialist outpatient clinics (SOCs). This was similar to findings from other studies that showed lower utilization of specialist visits in low SES groups compared with higher SES groups [5,37]. A potential reason could be patients' non-compliance with follow-up at SOCs. In contrast to primary care, where services are typically provided within fixed-length appointment slots, specialists' appointment lengths are highly variable and diagnosis-dependent, which may result in variable waiting times and inconvenience to patients [38]. Strong social support has also been shown to increase the probability of physician visits [39]. Although the level of social support among public rental housing residents was not assessed in this study, a lack of social support may potentially reduce their adherence to specialist outpatient clinic visits, especially among patients with ambulatory problems, and should be explored in future studies. The pattern of outpatient primary and tertiary healthcare utilization observed in this study could also be attributed to heterogeneity in the health statuses and co-morbidities of patients.
With the paradigm shift towards greater efficiency in healthcare delivery, healthcare delivery targeted at groups of patients with similar patterns of healthcare utilization has been proposed [40]. Population segmentation via expert-driven and data-driven approaches has been suggested to identify the healthcare needs of different patient groups. A local study that performed cluster analyses on a general population found that subjects could be segmented into five distinct clusters of patients with varying healthcare utilization and co-morbidities [41]. Future studies may wish to consider using population segmentation approaches to identify sub-groups of public rental housing patients with over- or under-utilization of healthcare resources and poor health-related outcomes. This will aid the design of appropriate healthcare interventions to improve their health-related outcomes.

Pertaining to co-morbidities, public rental housing residents were found to have higher rates of conditions such as depression and diabetes. Potential reasons for these findings include circumstances surrounding their housing environment. Rental housing residents are often subjected to poorer housing conditions, where environmental hazards and poor hygiene may precipitate other illnesses [42]. Research has also shown that stressful life events are associated with heart disease, diabetes, major depression and other diseases [1]. Psychological stress comes about when an individual perceives tasks and demands to be exceeding his or her ability to cope [43]. Patients in low SES groups are at risk of higher psychological stress due to increased exposure to stressors such as the financial stress of supporting a family, poor social support and discrimination [7,42,44]. Metabolic diseases such as diabetes are commonly affected by diet and lifestyle choices. Individuals with lower SES have been shown to be less informed about implementing lifestyle changes in the aspects of smoking, exercise and diet, compared with their higher SES peers [1]. Overall, the higher disease burden among patients staying in public rental housing, coupled with their potential under-utilization of outpatient primary and tertiary healthcare resources, may explain their increased frequency of emergency department visits and hospitalizations. Further studies should be performed to understand the healthcare needs of patients residing in public rental housing, as well as their health-seeking behaviours and attitudes, to optimize their health-related outcomes.

This study is not without limitations. Firstly, the data analysed in this study only included variables that were routinely collected from electronic databases within the SHRS. Consequently, other measures of primary healthcare utilization such as healthcare-related costs, health insurance claims and visits to private GP clinics could not be evaluated. Future studies may look into evaluating these measures as well as other socio-demographic characteristics (e.g. household income level, marital status, family structure and support), attitudes and beliefs (e.g. self-reliant attitude, preference for alternative medicine) and barriers in knowledge (e.g. lack of access to information, misconceptions) which may affect healthcare utilization. Secondly, data pertaining to residents utilizing healthcare facilities in other regional health systems and to non-users of the SRHS were unavailable, which may affect the representativeness of the reported population.
However, the proportion of residents utilizing facilities in other health systems is expected to be small, owing to the geographical ease of access to the primary care facilities and specialist centres available in the SHRS. Lastly, a causal association between public rental housing and primary healthcare utilization could not be established due to the retrospective nature of the study.

Conclusion

In this study, residing in public rental housing was not associated with increased utilization of outpatient primary and tertiary healthcare resources despite these patients' higher disease burden and frequency of emergency department visits and hospitalizations. Further research is required to elucidate and understand the health-seeking behaviours of public rental housing patients so as to optimize their appropriate utilization of outpatient primary and tertiary healthcare resources and improve their health-related outcomes.
Isolated trigeminal nerve palsy with motor involvement as a presenting manifestation of multiple sclerosis in an equatorial region – a case report

Introduction: Isolated cranial nerve palsies are considered to be an uncommon presenting feature of multiple sclerosis. Involvement of the trigeminal nerve, particularly its motor component, as part of a clinically isolated syndrome of multiple sclerosis has rarely been reported in equatorial regions, and no cases have been described in Sri Lanka thus far.

Case Presentation: We report a case of isolated right-sided trigeminal nerve palsy (motor and sensory) in a 34-year-old previously well lady from urban Sri Lanka who was found to have characteristic lesions on magnetic resonance imaging highly suggestive of multiple sclerosis.

Conclusions: Multiple sclerosis should be considered in the differential diagnosis of patients who present with isolated cranial nerve palsies. Clinicians should have a high index of suspicion when evaluating such patients, especially in low-prevalence regions close to the equator. Early recognition and treatment of such a "clinically isolated syndrome" may prevent early relapse.

Introduction

Despite being a relatively common disease in Western countries, multiple sclerosis remains a rare entity in equatorial regions, including Sri Lanka [1]. It is known to manifest in many forms, including retrobulbar neuritis, cerebellar syndromes and transverse myelitis. An isolated cranial nerve palsy is considered to be a rare presenting sign of multiple sclerosis [2-4]. The pathogenesis of cranial nerve involvement in multiple sclerosis is not well described, but it is commonly associated with brainstem demyelination [5]. The initial presenting manifestation of relapsing multiple sclerosis has recently been described as the "clinically isolated syndrome" (CIS) [6]. Thus, isolated cranial nerve palsies with characteristic imaging patterns would now fall into this category.

Case presentation

A 34-year-old previously well lady from urban Sri Lanka presented to the Institute of Neurology at the National Hospital of Sri Lanka with a one-month history of numbness of the right side of her face and difficulty chewing on the right side of her mouth. She described the onset of the symptoms as sudden, as they were present on awakening from an uninterrupted sleep the previous night. The symptoms had persisted over a month without much progression. She had received treatment from a general practitioner but her symptoms had not improved. She did not have any diplopia or blurred vision, painful eye movements, hearing impairment, dysphagia or dysarthria. She did not complain of limb weakness or unsteadiness with a tendency to fall. She did not complain of shooting pain on the right side of the face triggered by touch and had no previous facial rashes. She denied sexual promiscuity. She had lived in Sri Lanka all her life and had never travelled abroad.

Neurological examination revealed an alert lady who was oriented to time, place and person. There was no facial asymmetry or drooling of saliva and no obvious facial rashes. On asking the patient to open her mouth, there was slight jaw deviation to the right side (Figure 1). Palpation of the temporalis and masseter muscles did not reveal any wasting. Sensory examination revealed impaired pain sensation on the right side of the face involving the ophthalmic, maxillary and mandibular divisions of the trigeminal nerve, conforming to the characteristic onion-skin distribution.
The corneal reflex on the right side was also absent. Examination of all other cranial nerves, including the adjacent nerves, was normal. Ophthalmoscopic examination did not reveal optic atrophy or papillitis. There was no limb involvement, with normal corticospinal, spinothalamic and posterior column pathway examination, including preserved reflexes, and there were no demonstrable cerebellar signs.

Magnetic resonance imaging (MRI) of the brain and spinal cord was subsequently performed and revealed multiple hyperintense (on both T2WI and T2 FLAIR) periventricular lesions in the deep white matter, conforming to the characteristic Dawson's finger appearance, which is highly suggestive of multiple sclerosis (Figures 2-3). There were also similar lesions in the right cerebellar peduncle. The spinal cord was free of lesions. Cerebrospinal fluid analysis revealed normal protein levels with no cells. The presence of oligoclonal bands could not be assessed, as this facility is unavailable at our institution and testing in the private sector was unaffordable for the patient. Visual and brainstem evoked potential studies were carried out and revealed normal results. All other basic biochemistry results were normal, including normal inflammatory markers.

She was treated with intravenous methylprednisolone 1 g daily for 3 days, followed by oral prednisolone 50 mg daily (1 mg/kg/day), with plans to gradually taper off the treatment later. She was advised of the possibility of further neurological deficits in the future and asked to adhere to the follow-up programme.

Discussion

Isolated cranial nerve palsies may be a presenting manifestation of multiple disease processes, including cerebral vasculitis, basal meningitis and many other inflammatory conditions of the brainstem. Demyelinating diseases, including multiple sclerosis, may rarely present with isolated cranial nerve palsies as well [2-4]. When cranial nerves are affected in isolation, the trigeminal nerve has been found to be the most frequently affected in some studies [7]. Due to its low prevalence in equatorial regions, multiple sclerosis is rarely considered in the differential diagnosis of isolated cranial nerve palsies. Furthermore, due to the limited availability of MRI facilities in most parts of the region, many patients with such presentations will be misdiagnosed or remain unrecognized.

A recent meeting of experts on multiple sclerosis coined the term "clinically isolated syndrome" for the initial presenting manifestation of the disease [6]. It is reasonable to state that the patient described in this report falls into this category. The importance of identifying patients with CIS lies in prognosticating future relapses and the severity of the illness, as well as identifying patients who require early disease-modifying treatment. It has been shown that patients with CIS and a positive baseline MRI, such as our patient, have an 8-fold risk of developing a further attack over time. We treated our patient with corticosteroids rather than interferon β-1b due to the availability and cost-effectiveness of the former treatment. Close follow-up is required, as the extensive lesions on MRI suggest an early relapse.

The pathogenesis of isolated cranial nerve palsies in multiple sclerosis remains inconclusive. Brainstem demyelination has been suggested as the mechanism in some reports [5], but the MRI of our patient did not reveal any brainstem lesions. The possibility of a demyelinating process of the nerve itself should be considered in this case.
Conclusions Multiple sclerosis should be considered in the differential diagnosis of patients who present with isolated cranial nerve palsies. Clinicians should have a high index of suspicion when evaluating such patients especially in low prevalence regions close to the equator. Early recognition and treatment of such a "Clinically Isolated Syndrome" may prevent early relapse.
Improving the Mechanical Properties of Damaged Hair Using Low-Molecular-Weight Hyaluronate

Chemical treatments of hair such as dyeing, perming and bleaching can cause mechanical damage to the hair, which weakens the hair fibers and makes the hair break more easily. In this work, hyaluronate (HA) of different molecular weight (MW) was investigated for its effects on restoring the mechanical properties of damaged hair. It was found that low-MW HA (average MW ~42 k) could significantly improve the mechanical properties, specifically the elastic modulus, of over-bleached hair. Fluorescent-labeling experiments verified that the low-MW HA was able to penetrate into the cortex of the hair fiber, while high-MW HA was hindered. Fourier transform infrared spectrometry (FT-IR) results implied the formation of additional intermolecular hydrogen bonds in the HA-treated hair. Thermogravimetric analysis (TGA) indicated that the HA-treated hair exhibited a decreased content of loosely bonded water, and differential scanning calorimetry (DSC) characterization suggested stronger water bonding inside the HA-treated hair, which could alleviate the weakening effect of loosely bonded water on the hydrogen bond networks within keratin. Therefore, the improved elastic modulus and mechanical strength of the HA-treated hair could be attributed to the enhanced formation of hydrogen bond networks within keratin. This study illustrates the capability of low-MW HA in hair damage repair, implying an enormous potential for other moisturizers to be used in hair care products.

Introduction

Hair damage is a common phenomenon caused by hair grooming, perming, dyeing, bleaching or treatment with various chemicals. Hair damage makes the hair behave differently from its virgin state, including in its wetting properties, water retention, combing properties and mechanical properties [1-3]. For instance, healthy tresses have obviously greater tensile strength, while damaged tresses tend to break easily during daily grooming. The mechanical properties of hair have attracted increasing research interest since they are correlated with hair breakage and have considerable academic and practical importance [1,4]. Exploring cosmetic ingredients to repair damaged hair and improve its physical properties, especially its mechanical properties, is of great significance.

At the molecular level, hair damage generally indicates the destruction of keratin configuration and chemical bonds in the hair, such as ionic bonds, disulfide bonds and intermolecular hydrogen bonds [5-9]. This destruction consequently results in a decrease in the mechanical properties of hair [2,3,10]. Many efforts have been devoted to the development of active ingredients to repair hair damage, mostly via the repair of chemical bonds. Malinauskyte et al. [11] found that medium- and high-molecular-weight keratin peptides could restore moisture and repair internal chemical bonds, which enhanced the mechanical properties of hair. Song et al. [7] found that polycarboxylic acids could establish bonds between the carboxyl groups in dicarboxylic acids and the amine, sulfhydryl and hydroxyl groups in hair keratin, which increased intermolecular forces and repaired the mechanical properties of hair. In the past decade, hyaluronate (HA) has been attracting more and more interest due to its excellent water retention capability via its hydrogen bonding network with water [12,13].
Although HA has been widely used in skin-care products, there are few studies on its function and application in hair care. Previous hair research regarding HA has mainly focused on hair growth and loss, and its effect on hair repair, especially on mechanical properties, has rarely been investigated [14-17]. As mentioned above, the hydrogen bond is one of the key chemical bonds in hair keratin and plays an important role in hair damage and repair [18]. Both hydrogen bonds and water content greatly contribute to the mechanical properties of hair. Therefore, HA could be a promising active ingredient for the repair of hair stretch properties [19,20]. The current paper investigated the effects of HA with different molecular weight on the mechanical properties of hair. To explain the differences in hair repair efficacy, the penetration behavior of HA with different molecular weight was studied using a fluorescent labeling method. Subsequently, the possible underlying mechanism was investigated using FT-IR, TGA and DSC characterization of HA-treated hair.

Effect of HA Treatments on Mechanical Properties of Hair

We first checked the ability of HA of different molecular weight (MW) to recover the mechanical properties of damaged hair. Tensile strength and Young's modulus are important parameters for determining hair resistance. Over-bleached Asian hair strands were treated with 0.25% HA solution in a hair-care spray manner ten times, as described in the experimental section. Figure 1a shows the tensile strength of the bleached hair before and after treatment with HA of different MW. The tensile strength of bleached hair did not show a significant difference after treatment with high-MW and mid-MW HA, but was improved by nearly 16% after treatment with low-MW HA. This statistically significant improvement demonstrates that low-MW HA could recover the mechanical properties of damaged hair. Further analysis of the tensile test results indicated that low-MW HA treatment could significantly increase the Young's modulus of the bleached hair (Figure 1b). However, all the hair samples exhibited approximately the same elongation rate at break (Figure 1c), suggesting that the improved tensile strength of the low-MW HA-treated hair mainly originated from the increased elastic modulus of the hair.
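The group comparison behind Figure 1 was made with a t-test in SPSS (per the methods section later in the paper). As a rough illustration only, the sketch below simulates 30 single-fiber breaking strengths per group and runs an independent two-sample t-test; the use of the Welch variant and all numerical values are assumptions, not the authors' data or script.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Simulated breaking strengths (MPa) for 30 single fibers per group;
# the means and spread are illustrative, not measured values.
bleached = rng.normal(155, 18, 30)
low_mw_ha = rng.normal(180, 18, 30)   # roughly 16% higher mean, as reported

t, p = stats.ttest_ind(low_mw_ha, bleached, equal_var=False)
improvement = (low_mw_ha.mean() / bleached.mean() - 1) * 100
print(f"mean improvement = {improvement:.1f}%, t = {t:.2f}, p = {p:.3g}")
```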
Figure 2 shows the tensile strength and Young's modulus of the bleached hair treated with hair spray containing varying concentrations of low-MW HA. At low concentrations (<0.25%), low-MW HA improved the tensile strength and Young's modulus of the bleached hair in a concentration-dependent manner. The results also implied that the improved tensile strength of the HA-treated hair was mainly related to the increased elastic modulus of the hair. In comparison, peptide-treated hair has also been reported to present improved Young's modulus and tensile strength, which is believed to be related to the change in the chemical environment surrounding the hair keratin [21].

Penetration of Labelled HA into Hair Fibers

It was previously reported that peptides of different MW could penetrate into the hair fiber with varying penetration efficiency. However, the penetration behavior of polysaccharide molecules of different MW into hair fibers has rarely been studied [7,11]. In this work, we used fluorescence microscopy as a sensitive technique to investigate the differences in the penetration behavior of HA of different MW into hair fibers. Figure 3 shows cross-sectional photographs of hairs treated with fluorescently labeled HA of high MW and low MW.
The images were taken using the green channel, which showed the natural autofluorescence of the hair and helped display the positioning of the cross-sections in the images. The fluorescence intensity of low-MW HA-treated hair was significantly higher than that of high-MW HA-treated hair. Moreover, the low-MW HA penetrated into all parts of the hair, even deep into the cortex, while high-MW HA only entered the outer layers of the fibers. Therefore, the ability of low-MW HA to recover the hair's mechanical properties might be related to its effective penetration into hair fibers.

Figure 4 presents the FT-IR spectra of the bleached hair before and after treatment with low-MW HA. The spectra of both samples showed a broad band at 3272 cm⁻¹ attributed to O-H stretching of water or O-H-bearing molecules. The intensity of the O-H stretching-related band was significantly increased after the low-MW HA treatment, which could be explained by the additional O-H groups provided by HA molecules. In addition, the amide I band at 1629 cm⁻¹, mainly associated with the C=O stretching vibration, and the amide II band at 1517 cm⁻¹, associated with the N-H bending vibration and C-N stretching vibration, were observed in the bleached hair spectra. The intensity and position of the amide I and II bands are known to be sensitive to the conformation and composition of the keratin. Our results indicated that the amide I/II band intensity ratio increased from 1.036 to 1.092 after HA treatment. Furthermore, the position of the amide II band peak was blue-shifted to 1529 cm⁻¹ in HA-treated hair. Since HA treatment would not affect the amino acid or peptide composition of hair, such variation in the amide I and II bands might be related to the possible formation of intermolecular H-bonds between HA and keratin.
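An amide I/II intensity ratio like the 1.036 to 1.092 change quoted above can be extracted from a baseline-corrected spectrum by comparing peak absorbances. The sketch below shows one possible way to do this; the wavenumber windows, the peak-picking approach and the synthetic spectrum are assumptions, not the authors' actual processing.

```python
import numpy as np

def amide_ratio(wavenumbers, absorbance,
                amide_i=(1600, 1700), amide_ii=(1480, 1580)):
    """Peak-absorbance ratio of amide I to amide II within assumed windows."""
    w = np.asarray(wavenumbers)
    a = np.asarray(absorbance)
    i_max = a[(w >= amide_i[0]) & (w <= amide_i[1])].max()
    ii_max = a[(w >= amide_ii[0]) & (w <= amide_ii[1])].max()
    return i_max / ii_max

# Synthetic spectrum with Gaussian bands near 1629 and 1517 cm^-1
w = np.arange(1000, 4000, 2.0)
a = (1.05 * np.exp(-((w - 1629) / 25) ** 2)
     + 1.00 * np.exp(-((w - 1517) / 22) ** 2))
print(f"amide I/II ratio = {amide_ratio(w, a):.3f}")
```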
Effects of HA Treatments on Thermal Properties of Hair

Previous studies suggested that two types of water components, strongly bonded and loosely bonded water, can be differentiated in human hair [22]. The contents of strongly bonded and loosely bonded water were analyzed by thermogravimetric analysis (TGA), according to the methodology reported by Barba et al. with slight modification. The temperature was increased from 25 °C to 65 °C at a rate of 20 °C/min and maintained at 65 °C for 18 min, followed by an increase up to 180 °C at 20 °C/min, which was then held for 20 min. The water loss was determined, and TGA curves were generated. According to the typical TGA curves in Figure 5, the bleached hair and HA-treated hair had similar total water content. However, the loosely bonded water content in the HA-treated hair was slightly lower than that in the untreated bleached hair. These results indicate that HA treatment affected the form in which water exists inside the hair fiber, potentially due to the strong water-binding capacity of HA [23]. This trend in water content is in contrast to the effect of peptides on hair, which generally increase the loosely bonded water content [24].
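As an illustration of the two-stage TGA protocol just described, the sketch below splits the total mass loss of a mass-versus-time trace into loosely bonded water (lost by the end of the 65 °C hold) and strongly bonded water (additional loss up to the 180 °C stage). The split point and the synthetic trace are assumptions for demonstration, not the measured curves in Figure 5.

```python
import numpy as np

def water_fractions(time_min, mass_mg, t_end_65=20.0):
    """Split total water loss into loosely bonded (lost by the end of the
    65 C hold, assumed here at t_end_65 minutes) and strongly bonded
    (lost thereafter, up to the 180 C stage), as percentages of initial mass."""
    m0 = mass_mg[0]
    m_65 = np.interp(t_end_65, time_min, mass_mg)   # mass at end of 65 C stage
    m_end = mass_mg[-1]                              # mass after 180 C stage
    loose = (m0 - m_65) / m0 * 100
    strong = (m_65 - m_end) / m0 * 100
    return loose, strong

# Synthetic TGA trace: 5.00 mg sample losing roughly 12% total water
t = np.linspace(0, 50, 200)
m = 5.00 - 0.35 * (1 - np.exp(-t / 6)) - 0.25 * np.clip((t - 20) / 30, 0, 1)
loose, strong = water_fractions(t, m)
print(f"loosely bonded = {loose:.1f}%, strongly bonded = {strong:.1f}%")
```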
DSC was applied to evaluate the thermal behavior of water in hair; it is an effective technique for determining the bonding strength of water in hair fibers [24-26]. Figure 6 shows the DSC curves of the bleached hair before and after HA treatment, with the peaks representing different stages of hair degradation. The upward peaks represent endothermic reactions, with the first peak corresponding to the release of loosely bonded water. The second and third peaks denote endothermic fusion reactions of the keratin polypeptide chain. All three peaks in the DSC curves were affected by HA treatment. On the one hand, the "water evaporation" peak of the HA-treated hair was centered at a higher temperature of 94.8 °C, compared to that of the bleached hair (88.3 °C), which is consistent with the work of Margarida et al. [27], who observed a similar peak shift in peptide-treated bleached hair. Moreover, the "water evaporation" peak area of the HA-treated hair, which corresponds to the enthalpy of water vaporization, was obviously larger than that of the bleached hair, suggesting that more energy would be spent during the release of water from HA-treated hair. These results indicate that HA treatment led to stronger water bonding inside the hair fiber. On the other hand, the two peaks corresponding to the melting and decomposition of keratin also shifted to higher temperatures after HA treatment, suggesting that HA treatment also tended to increase the crystallinity of keratin.
Different hair strands were treated with 0.3 g of hair spray solution containing 0.25% sodium hyaluronate (HA) with different molecular weights. The hair strands treated with the same amount of deionized water were set as the negative control group (bleached hair group). The treated hair strands were stored at ambient conditions for 12 h and then washed with deionized water to remove the free residual HA on the hair surface. The above hair treatment procedure was repeated 10 times before testing the mechanical properties of the hair. Hair Tensile Property Tests The diameters of hair fibers were measured at the middle section of a single hair fiber by an SN-1200w HD camera (Sannosinu Technology Co., Ltd., Shenzhen, China). For each group of the treated hair strands, 30 single hair fibers with diameters of 90-110 µm were selected from the strands under the micro-camera. The single-fiber tensile properties of the hair were then measured by an XS (08) XT-3 fiber strength tester (Shanghai Xusai Instrument Co., Ltd., Shanghai, China). The tensile strength (σ) and strain (ε) of the single hair fiber were calculated through the following formulas: σ = F_b/S and ε = ΔL/L_0, where F_b, S, ΔL and L_0 are the breaking strength, the cross-sectional area of the fiber, the displacement, and the initial length, respectively [10]. Young's modulus (YM) is a mechanical property, applicable only within Hooke's law, which is related to tensile strength and strain and provides a measure of material stiffness [9]. The YM of a single hair fiber can be calculated by selecting the Hookean section of the derived tensile curve and applying YM = (F/S)/ε, where F is the maximum value of strength in the Hookean section and ε is the corresponding strain. The tensile strength and Young's modulus of the hair strands were calculated based on the average of 30 single hair fibers. SPSS software was used for statistical analysis; a t-test was performed on the data of each group against the control group, and the p value was calculated. Fluorescent Labeling of HA and Fluorescence Microscopy To enable the study of HA incorporation into over-bleached hair, fluorescein 5(6)-isothiocyanate (FITC) was linked to HA. Typically, 0.2 g of HA and 0.04 g of FITC were dissolved into 2 mL of 0.05 mol/L NaOH aqueous solution and then reacted at 95 °C for 45 min. After cooling to room temperature, 18 mL of NaOH-saturated ethanol solution was added and the mixture was centrifuged to obtain crude FITC-labeled HA. The crude FITC-labeled HA was then dispersed into 20 mL of NaCl-saturated ethanol solution and centrifuged to remove the unbound FITC. After alcohol washing 6 times, the precipitate was freeze-dried to obtain purified FITC-labeled HA [28]. Labeled HA was used in the treatment of over-bleached hair as described earlier. Transversal cuts (15 µm) of hair fiber samples embedded in an epoxy resin were prepared using a microtome (Microtome Leitz, Oberkochen, Germany) and were analyzed by fluorescence microscopy. The cross-section of hair was observed by a fluorescence microscope (Nikon Co., Ltd., Tokyo, Japan). All images were recorded using similar filter, exposure, brightness and gain settings. Characterization of Hair Before characterization, the hair strands were kept in a humidity-controlled box (22 °C, 50% RH) for 48 h. Fourier transform infrared spectra of hair fibers were recorded with an FTIR spectrophotometer (Thermo Fisher Scientific, Massachusetts, USA) in the spectral range of 1000-4000 cm−1 [29].
The moisture content in hair was analyzed by a thermogravimetric analysis (TGA) instrument (1100SF, Mettler Toledo) in an atmosphere of dry N2, where a purge at 30 mL·min−1 was employed. For each hair sample, TGA measurements were repeated 5 times [22,30]. The differential scanning calorimetry (DSC) tests were performed on a DSC instrument (NETZSCH Instrument Manufacturing Co., Germany). Typically, 5-10 mg of finely cut hair was rapidly transferred into a DSC capsule with the sample pan cover punched, allowing evaporating water to escape. All the experiments were performed under a constant flow of N2. The samples were heated from 30 °C to 300 °C at a rate of 10 °C/min, and the heat flow was recorded [24,25]. Discussion Chemical dyeing, including permanent dyeing, greatly disrupts the structural and mechanical properties of hair fibers and causes mechanical damage [31,32]. As a result, the fibers become weak and are more susceptible to breakage with time, which is incompatible with healthy hair. Therefore, the demand for products that improve the fiber qualities of hair is increasing rapidly. One of the simplest ways to assess the integrity and quality of the hair fiber is by measuring its mechanical properties. Indeed, the slightest modification in the chemical composition of hair may greatly alter its mechanical properties [33,34]. Previous works have intensively reported the application of proteins, peptides and amino acids in hair damage repair [35]. However, only a few studies have addressed the effects of polysaccharides on restoring the mechanical properties of damaged hair. In this work, we first reported the potential of low-molecular-weight hyaluronate for hair damage repair and revealed the possible underlying mechanism. We first examined the effects of HA with different molecular weights on repairing the mechanical properties of over-bleached hair. It was found that only low-MW HA, which showed a higher penetration efficacy, could restore the mechanical properties of damaged hair. Hair fibers have a natural barrier that prevents the penetration of materials into the cortex, and it would be difficult for high-MW compounds to pass through this barrier. The penetration investigation using fluorescence microscopy showed that the low-MW HA was able to penetrate into the cortex, while the high-MW HA only entered the outer layers of the cortex. Therefore, the hair damage repair efficacy of low-MW HA could be related to its ability to penetrate into the hair fibers. In the previous literature, an equation was proposed to obtain a better quantitative description of the tensile properties of similar α-keratin fibers, which consist of stiff protein fibers and a pliant protein matrix [33,34]. The stretching process of hair is characterized by three stages: (1) a nearly linear region (namely the Hookean or elastic region) with strains less than 3%, which involves only changes of bond angles without significant structural transformation; (2) a nearly flat region with little increase in stress (transformation region) due to the α-β transition; and (3) the post-transformation region, mainly related to the destruction of covalent disulphide bonds. In the Hookean region, α-keratin fiber is generally considered a material dominated by hydrogen bonds (H-bonds), and its elastic modulus is highly correlated with the density of effective intermolecular H-bonds.
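To make the tensile quantities discussed above concrete, the following sketch computes stress, strain, and Young's modulus from a single-fiber force-displacement record, taking the Hookean region as strains below 3% and estimating the modulus as the slope of a linear fit over that region. It is only an illustration under stated assumptions: the toy data, the gauge length, and the circular cross-section approximation are hypothetical and not part of the original protocol.

```python
import numpy as np

# Synthetic force-displacement record for one fiber (a real tester export would be used instead).
displacement_mm = np.linspace(0.0, 12.0, 600)
force_N = np.minimum(0.9 * displacement_mm, 1.2 + 0.05 * displacement_mm)  # toy Hookean + plateau shape

d_um = 100.0                                 # fiber diameter from the micro-camera (µm)
L0_mm = 30.0                                 # assumed initial gauge length (mm)
S_mm2 = np.pi * (d_um / 2000.0) ** 2         # cross-sectional area, circular approximation (mm^2)

strain = displacement_mm / L0_mm             # epsilon = dL / L0
stress_MPa = force_N / S_mm2                 # sigma = F / S  (N/mm^2 = MPa)

sigma_b = stress_MPa.max()                   # breaking (maximum) strength

# Young's modulus: slope of the stress-strain curve over the Hookean region (strain < 3%)
hook = strain < 0.03
E_MPa = np.polyfit(strain[hook], stress_MPa[hook], 1)[0]

print(f"tensile strength: {sigma_b:.1f} MPa")
print(f"Young's modulus:  {E_MPa:.0f} MPa")
```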
In this work, the mechanical test results in Figures 1 and 2 indicated that low-MW HA treatment could improve the tensile strength of the bleached hair, and such improvement effect was mainly related to the increased elastic modulus rather than the elongation ratio of hair. Therefore, the hair mechanical property-restoring effects of low-MW HA might be associated with the enhanced formation of H-bond networks. It is well known that the elastic modulus of hair fiber is related to the water content, especially the loosely bonded water content within the fiber [36,37]. Loosely bonded water can disrupt the H-bonding networks, thereby reducing the elastic modulus of the hair fiber. High humidity usually leads to the decreased elastic modulus of hair due to the increased content of the "free water" or loosely bonded water. In this work, HA-treated hair showed a higher elastic modulus than the untreated bleached hair at a fixed test humidity (50%). Since HA has a strong water-binding capacity, it was speculated that HA treatment might affect the existing form of water inside the hair fiber. TGA analysis indicated that the HA-treated hair exhibited a decreased content of loosely bonded water. Moreover, DSC analysis suggested that more energy would be spent during the evaporation and release of water from HA-treated hair, implying stronger water bonding inside the HA-treated hair fibers. Therefore, we proposed that HA treatment led to stronger water bonding inside the hair owing to its high water-binding capacity, thus promoting the formation of hydrogen bond networks within keratins, and thereby leading to the increased elastic modulus of hair. According to the above discussion, the effect of HA in hair damage repair was tentatively explained by its water-binding ability in the current work. However, according to some previous studies, HA molecules could also directly interact with keratin fibers, which may also contribute to improved mechanical properties of the damaged hair [38]. The results from FT-IR spectra ( Figure 4) and DSC analysis ( Figure 6) also revealed the structural and crystallinity variation of keratin in the hair fibers, implying the interactions between HA and keratin, especially the possible formation of intermolecular hydrogen bonds between HA and keratin. This study revealed the potential of low-molecular-weight hyaluronate in hair damage repair, and the tentative underlying mechanism was discussed. In the future, more studies will be performed to further investigate the interaction of hyaluronate with hair fiber components. Conclusions Based on the results and discussion presented above, the following conclusions can be made regarding the potential of low-molecular-weight hyaluronate for hair damage repair as well as its working mechanism: (1) Treating damaged hair with HA could significantly improve the mechanical strength, or more specifically, the elastic modulus of hair. (2) Only low MW-HA showed mechanical property-restoring effects on hair, while the effect of HA with high molecular weight was negligible due to its poor penetrating capability. (3) HA treatment led to stronger water bonding inside the hair owing to its high waterbinding capacity, therefore promoting the formation of hydrogen bond networks within keratins, and thereby leading to the increased elastic modulus of hair. (4) The possible formation of intermolecular H-bonds between HA and keratin might also contribute to the improved tensile properties of HA-treated hair.
2022-11-12T16:02:56.130Z
2022-11-01T00:00:00.000
{ "year": 2022, "sha1": "3aad340f453b83b4005f403836c2e49bf0bfbcef", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1420-3049/27/22/7701/pdf?version=1667990872", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "163c20e86f563da7c3576fc92c91a92d3cefac90", "s2fieldsofstudy": [ "Materials Science", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
56437924
pes2o/s2orc
v3-fos-license
Mathematical Modeling for Lateral Displacement Induced by Wind Velocity Using Monitoring Data Obtained from Main Girder of Sutong Cable-Stayed Bridge Based on the health monitoring system installed on the main span of Sutong Cable-Stayed Bridge, GPS displacement and wind field are real-time monitored and analyzed. According to the analytical results, an apparent nonlinear correlation with certain discreteness exists between lateral static girder displacement and lateral static wind velocity; thus time series of lateral static girder displacement are decomposed into a nonlinear correlation term and a discreteness term, the nonlinear correlation term of which is mathematically modeled by third-order Fourier series with intervention of lateral static wind velocity and the discreteness term of which is mathematically modeled by the combined models of ARMA(7, 4) and EGARCH(2, 1). Additionally, a stable power spectrum density exists in time series of lateral dynamic girder displacement, which can be well described by the fourth-order Gaussian series; thus time series of lateral dynamic girder displacement are mathematically modeled by a harmonic superposition function. By comparison and verification between simulative and monitoring lateral girder displacements from September 1 to September 3, the presented mathematical models are effective to simulate time series of lateral girder displacement from the main girder of Sutong Cable-Stayed Bridge. Introduction Nowadays, long-span cable-stayed and suspension bridge structures are commonly constructed at home and abroad. On account of their flexible structural characteristics, the displacement response of the main girder of a long-span bridge oscillates markedly under strong aerostatic and fluctuating wind actions. According to aerostatic response analysis on Sutong Cable-Stayed Bridge by Xu et al., the lateral displacement response of the main girder can approach 1.2 m under a strong wind velocity of 40 m/s with an attack angle of 0° [1], and research results from buffeting response analysis on the Golden Gate Bridge by Vincent showed that the extreme buffeting amplitude of the main girder can reach 1.7 m under a strong wind velocity of 31 m/s [2]. Such large amplitudes can definitely threaten the comfort and safety of the whole bridge structure. For example, severe wind vibration of the main girder of the Tacoma Suspension Bridge in Washington state eventually brought about the collapse of the whole bridge structure under a wind velocity of 19 m/s [3]. Therefore, it is of great significance to research the displacement response of the main girder of long-span bridge structures under wind loads, and the lateral displacement response in particular, as a fairly important part of the girder response, should be specifically valued. Theoretical exploration, numerical simulation, and wind tunnel tests for lateral displacement response have been carried out to some extent. Cheng and Xiao improved the calculation method for aerostatic stability and further concluded that the unstable lateral displacement was 4.24 m under critical static wind [4]; Long et al. analyzed the lateral displacement response of the Sidu Suspension Bridge through ANSYS finite element simulation and concluded that the maximum lateral displacement at midspan was 32.26 cm, which is close to 1/1000 of the main span length [5]; a further study examined the lateral displacement response of the Xihoumen Suspension Bridge through wind tunnel tests and concluded that the lateral displacement at a horizontal angle of 10° was larger than that at other angles [6].
However, owing to the complexity of the mechanisms of lateral displacement response under aerostatic and fluctuating wind actions, traditional methods of theoretical deduction, numerical simulation, and wind tunnel testing can hardly reflect the actual lateral displacement response of a bridge structure accurately, on account of uncertain boundary conditions, imprecise assignment of initial parameters, and inappropriate neglect of subordinate factors. In recent years, with the development of structural health monitoring technology, it has become feasible to install monitoring sensors on long-span bridge structures, whose monitoring data can authentically reflect bridge structural behaviors under actual environment and load actions. Although the wind field of long-span bridge structures has been widely monitored in recent years [7][8][9], the lateral displacement response is rarely monitored and researched; thus the real correlation regularity between lateral displacement response and wind action remains obscured. Additionally, the lateral displacement response under the actual operating environment is also affected by other random factors, which have never been taken into account by researchers before. Therefore, the lateral displacement response of the main girder should be researched upon monitoring data to reveal the real structural behavior of long-span bridges. In this paper, based on the health monitoring system installed on the main span of Sutong Cable-Stayed Bridge, GPS displacement and wind field are real-time monitored and analyzed. According to the analytical results, an apparent nonlinear correlation with certain discreteness exists between lateral static girder displacement and lateral static wind velocity; thus time series of lateral static girder displacement are decomposed into a nonlinear correlation term and a discreteness term, the nonlinear correlation term of which is mathematically modeled by third-order Fourier series with intervention of lateral static wind velocity and the discreteness term of which is mathematically modeled by the combined models of ARMA(7, 4) and EGARCH(2, 1). Additionally, a stable power spectrum density exists in time series of lateral dynamic girder displacement; thus time series of lateral dynamic girder displacement are mathematically modeled by a harmonic superposition function. By comparison and verification between simulative and monitoring lateral displacements from September 1 to September 3, the mathematical models are feasible and effective to simulate time series of lateral girder displacement from the main girder of Sutong Cable-Stayed Bridge. Bridge Monitoring and Sample Analysis The bridge monitoring object for this research is the well-known Sutong Cable-Stayed Bridge (in Jiangsu Province, China). Its overall structural form is single-spanned and double-hinged with the main span reaching 1088 m, as shown in Figure 1, and the main girder employs a flat steel box section 36.3 m wide and 4.0 m high, as shown in Figure 2.
3D ultrasonic anemometers and a GPS monitoring station are installed on the two flanks of the midspan cross-section of the main girder (shown in Figures 1 and 2, respectively) to continuously acquire wind data and displacement data with a sampling frequency of 1 Hz. Specifically, the wind data from the 3D ultrasonic anemometers embrace three components, wind velocity, horizontal angle, and vertical angle, in the local coordinate system (Figure 3), and the girder displacement data from the GPS monitoring station contain absolute locations in the WGS-84 coordinate system (Figure 3), from which reference locations are deducted for analysis. Until now, the storage amount of monitoring data has increased to 93 million records for each measurement point. Such considerable monitoring data cannot be totally applied to actual analysis; thus the monitoring data from the upstream flank in the year 2012 are specially chosen. Taking the monitoring data in the whole of August, for example, and considering that the lateral displacement effect of the main girder is primarily governed by the wind load across the Sutong Bridge, time series of wind velocity and girder displacement are decomposed onto the lateral axis of the local coordinate system, as shown in Figures 4(a) and 5(a), respectively, from which it can be observed that either of the two time series contains a static variant trend on the whole and dynamic stochastic fluctuation in part. Such two kinds of variation characteristics can furthermore be separated by a 10-minute average process, as shown in Figures 4(b), 4(c), 5(b), and 5(c), respectively. By comparison between Figures 4(b) and 5(b), similar variation characteristics exist between static wind velocity and static girder displacement, which can be visually described by the correlation scatter plots shown in Figure 6, indicating an apparent nonlinear correlation similar to a quadratic parabolic curve. Therefore, static girder displacement can be mathematically expressed by static wind velocity, with consideration of a definite discreteness affected by other random factors. Moreover, time series of dynamic girder displacement depict an obviously steady stochastic fluctuation, with no variation of its power spectrum densities by time, as shown in Figure 7; thus dynamic girder displacement can be mathematically simulated by the harmonic superposition method. Modeling Theory and Procedure The discreteness term y2(t) of the static girder displacement contains such stochastic characteristics as autoregression, moving average, and heteroscedasticity, and can be mathematically described by the combined models of ARMA(p, q) and EGARCH(p, q). In detail, the ARMA(p, q) model defines the stochastic characteristics of autoregression and moving average as follows:

y2(t) = c + Σ_{i=1..p} φ_i y2(t − i) + Σ_{j=0..q} θ_j ε(t − j),

where c is the constant term, p and q, respectively, denote the orders of autoregression or moving average of y2(t), φ_i and θ_j, respectively, denote the coefficients of autoregression or moving average of y2(t) with θ_0 = 1, and ε(t − j) denotes the innovations process with a time delay of j. Meanwhile, the other EGARCH(p, q) model defines the stochastic characteristic of heteroscedasticity as follows [9,10]:

log σ_t^2 = κ + Σ_{i=1..p} G_i log σ_{t−i}^2 + Σ_{j=1..q} A_j [ |ε(t − j)|/σ_{t−j} − E(|z_t|) ] + Σ_{j=1..q} L_j [ ε(t − j)/σ_{t−j} ],

where σ_t^2 denotes the conditional variance of the innovations process ε(t), κ is the constant term, p and q, respectively, denote the orders of the EGARCH(p, q) model, G_i, A_j, and L_j, respectively, denote the coefficients of the EGARCH(p, q) model, and z_t = ε(t)/σ_t is a standard, independent, and identically distributed random draw from some specified probability distribution such as Gaussian or Student's t, the latter with degrees of freedom ν > 2.
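As a rough illustration of how such a combined model can be estimated in practice, the sketch below fits an ARMA(7, 4) mean model with statsmodels and then an EGARCH(2, 1) variance model to its residuals with the arch package. This two-step approximation is not the authors' estimation routine (they report MATLAB-based fitting with AIC/BIC order selection), and the input series is a synthetic placeholder for the differenced discreteness term.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from arch import arch_model

rng = np.random.default_rng(0)
y2 = rng.standard_normal(4096)            # placeholder for the (differenced) discreteness series

# Step 1: ARMA(7, 4) mean model (ARIMA with d = 0)
arma = ARIMA(y2, order=(7, 0, 4)).fit()
resid = arma.resid

# Step 2: EGARCH(2, 1) with Student's t innovations, fitted to the ARMA residuals
egarch = arch_model(resid, mean="Zero", vol="EGARCH", p=2, q=1, dist="t")
eg_fit = egarch.fit(disp="off")

# Order selection would compare these information criteria across candidate (p, q) pairs
print("ARMA AIC:", arma.aic, " EGARCH AIC:", eg_fit.aic)
print(eg_fit.summary())
```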
Considering that the EGARCH(p, q) model is treated as an ARMA(p, q) model for log σ_t^2, the stationarity constraint for the EGARCH(p, q) model is included by ensuring that the eigenvalues of the characteristic polynomial,

λ^p − G_1 λ^(p−1) − G_2 λ^(p−2) − ⋯ − G_p,

are inside the unit circle, where λ and G_i are the variable and the coefficients of the characteristic polynomial, respectively. During the modeling process for the time series of discreteness y2(t), the autocorrelation function ρ(k) and the partial correlation function α(k) with their lag phase k are introduced for the stationarity test of y2(t). Specifically, the autocorrelation function ρ(k) can be calculated as follows [11]:

ρ(k) = γ(k)/γ(0), with γ(k) = (1/N) Σ_{t=1..N−k} [y2(t) − ȳ2][y2(t + k) − ȳ2],

where N denotes the amount of y2(t) and ȳ2 denotes its mean value. The other partial correlation function α(k) can be calculated through fitting successive autoregressive models of orders k by ordinary least squares, retaining the last coefficient of each regression [12]. Besides, the AIC and BIC delimitation criteria are applied to determine model orders, with their statistical values aic and bic being, respectively, calculated as follows [13,14]:

aic = −2·LLF + 2·k1, bic = −2·LLF + k1·ln(k2),

where LLF denotes the optimized log-likelihood objective function value associated with the parameter estimates of the combined models, k1 denotes the number of estimated parameters associated with each LLF value, and k2 denotes the sample size of the observed y2(t) associated with each LLF value. Detailed Procedure. Based on the theory and method above, the detailed procedure for mathematically modeling the time series of static girder displacement is illustrated, taking the correlation scatter plots in the whole of August in Figure 6, for example, as follows. Step 1. Fitting Fourier series for the nonlinear correlation term. By means of the MATLAB fitting tools (utilizing the third-order Fourier series (1) to fit the correlation scatter plots) [10], the mathematical model of the nonlinear correlation term is straightforwardly acquired as shown in Figure 8(a), together with the estimated values of the Fourier parameters presented in Table 1. By substitution of the estimated values together with the static wind velocity v(t) into formula (1), the time series of fitting static displacement y1(t) in the whole of August are acquired as shown in Figure 8(b). Step 2. Stationarity test for the time series of discreteness. The time series of discreteness y2(t) can be acquired by subtracting y1(t) from the monitoring static displacement, as shown in Figure 9(a). The autocorrelation function ρ(k) and partial correlation function α(k) of y2(t) are calculated with 50 lag phases, shown in Figures 9(b) and 9(c), respectively, presenting that ρ(k) converges only slowly into the 95% confidence intervals as the lag phase increases, which indicates bad stationarity of y2(t) for mathematical modeling. Due to this, a process of first-order differencing of y2(t) is carried out as shown in Figure 10(a), with its ρ(k) and α(k) shown in Figures 10(b) and 10(c), both presenting rapid convergence into the 95% confidence intervals and verifying good stationarity of the processed discreteness. Step 3. Order determination of ARMA(p, q) and EGARCH(p, q). The orders p and q are related to the convergent forms of ρ(k) and α(k). That is, ρ(k) and α(k) clearly present the trailing property in Figures 10(b) and 10(c), initially converging into the 95% confidence intervals at the 4th and 7th lag phases, respectively, inferring that 7 and 4 are appropriately assigned to the orders p and q, respectively [11,12]. Furthermore, the orders of the EGARCH model are determined utilizing the AIC and BIC delimitation criteria.
In detail, the statistical values of AIC and BIC from the time series of processed discreteness (Figure 10(a)) are calculated, respectively, under integer assignments of p and q between 0 and 8, as shown in Figure 11, presenting that the most suitable orders of p and q are 2 and 1, respectively, corresponding to the minimum statistical values [13,14]. Step 4. Parameter estimation and residual test of the ordered models. Based on the specified orders above, the model parameters (the constant terms, the autoregression and moving-average coefficients, the EGARCH coefficients, and the degrees of freedom ν) are further estimated to fit the time series of processed discreteness, as shown in Tables 2 and 3, respectively. For testing the fitting effectiveness of the estimated parameters, the residuals between the processed discreteness (Figure 10(a)) and its defined models with estimated parameters are analyzed using ρ(k) and α(k), as shown in Figure 12, which presents that both ρ(k) and α(k) are consistent within the 95% confidence intervals (except at lag 0) and thus verifies the good availability of the estimated parameters for the ordered models. Moreover, the standard errors between the processed discreteness (Figure 10(a)) and its models with lower orders are shown in Figure 13. Step 5. Simulation of the time series of discreteness y2(t) and static displacement. Based on the mathematical models of ARMA(7, 4) and EGARCH(2, 1) with estimated parameters, the time series of simulative processed discreteness are shown in Figure 14(a), and through inverse calculation of the first-order difference, the time series of simulative original discreteness y2(t) are obtained as shown in Figure 14(b). Together with the time series of fitting static displacement y1(t) in Figure 8(b), the time series of simulative static displacement are ultimately shown in Figure 14(c), definitely similar to the time series of monitoring static displacement in Figure 5(b). Modeling for Dynamic Girder Displacement 3.2.1. Modeling Theory and Method. Time series of dynamic girder displacement show a consistent power spectrum density (Figure 7), which can be mathematically simulated by the harmonic superposition method [15][16][17]. Primarily, the function expression of the power spectrum density should be confirmed to lay the foundation for harmonic superposition. Considering that straightforward function fitting would ignore the local feature of the acute peak at around 10^−1 Hz of frequency, the power spectrum density is decomposed into two parts: part one to fit the whole trend with ignorance of the acute peak, and part two to specifically fit the acute peak. Furthermore, each part can be expressed by the fourth-order Gaussian series in logarithmic form; that is,

S1(x) = Σ_{k=1..4} a_k exp(−((x − b_k)/c_k)^2), S2(x) = Σ_{k=1..4} d_k exp(−((x − h_k)/w_k)^2),

where S1(x) denotes the function expression of part one, S2(x) denotes the function expression of part two, and a_k, b_k, c_k, d_k, h_k, and w_k are the fitting parameters of S1(x) and S2(x). Each harmonic component in a frequency subinterval can be written as x_i(t, φ_i) = sqrt(2 S(f_i) Δf_i) cos(2π f_i t + φ_i), where φ_i is a random variable of uniform distribution within [0, 2π]. With superposition of x_i(t, φ_i) from each subinterval, the time series of dynamic girder displacement yd(t) can be mathematically modeled by the harmonic superposition function as follows:

yd(t) = Σ_i sqrt(2 S(f_i) Δf_i) cos(2π f_i t + φ_i).

3.2.2. Detailed Procedure. Based on the theory and method above, the detailed procedure for mathematically modeling the time series of dynamic girder displacement is illustrated, taking the power spectrum density from August 1 to August 8 in Figure 15(a), for example, as follows. Step 1. Decreasing the discreteness of the original power spectrum density. Considering that the discreteness existing in the original power spectrum density can cover the acute peak at 10^−1 Hz, which is adverse to fitting the fourth-order Gaussian series, an average process with double frequency scales is carried out to decrease the discreteness.
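A minimal sketch of such an averaging step is given below: a raw periodogram of the dynamic displacement series is averaged within logarithmically spaced frequency bins, which suppresses the scatter of the raw estimate while preserving the overall trend and the peak near 10^−1 Hz. The bin count and the synthetic input series are assumptions for illustration, since the exact averaging parameters are not recoverable from the text.

```python
import numpy as np
from scipy.signal import periodogram

fs = 1.0                                    # GPS sampling frequency, 1 Hz
rng = np.random.default_rng(1)
yd = rng.standard_normal(86400)             # placeholder for one day of dynamic displacement

f, pxx = periodogram(yd, fs=fs)
f, pxx = f[1:], pxx[1:]                     # drop the zero-frequency bin

# Average the raw PSD inside logarithmically spaced frequency bins
edges = np.logspace(np.log10(f[0]), np.log10(f[-1]), 200)
idx = np.digitize(f, edges)
f_avg = np.array([f[idx == k].mean() for k in np.unique(idx)])
p_avg = np.array([pxx[idx == k].mean() for k in np.unique(idx)])

print(len(f), "raw PSD points reduced to", len(f_avg), "averaged points")
```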
Step 2. Fitting Gaussian series for the two parts of the processed power spectrum density. The processed power spectrum density can be divided into two parts: the whole trend with ignorance of the acute peak, as shown in Figure 16(a), and the specific acute peak, as shown in Figure 16(b). By means of the MATLAB fitting tools (utilizing the fourth-order Gaussian series to fit the two parts) [10], the mathematical models S1(x) and S2(x) of the two parts are acquired, respectively, as shown in Figures 16(a) and 16(b), together with the estimated parameter values presented in Table 4. Step 3. Harmonic superposition for the fitting power spectrum density. The fitting power spectrum density S1(f)·S2(f) is divided into 25000 subintervals within the frequency band [10^−5 Hz, 10^−0.5 Hz]. In each subinterval, one middle frequency f_mid,i and its corresponding value S(f_mid,i) exist and are then substituted into the harmonic superposition function for summation; thus the time series of dynamic girder displacement yd(t) can be acquired as shown in Figure 17(a) (simulated for 86400 s). By comparing its power spectrum density shown in Figure 17(b) with the monitoring one shown in Figure 15(a), the good similarity of the whole trend and the acute peak verifies the effectiveness of the modeling for dynamic girder displacement. Model Test and Evaluation According to the mathematical modeling process above, the time series of lateral girder displacement are expressed by the combination of the third-order Fourier series y1(t), ARMA(7, 4), EGARCH(2, 1), and the harmonic superposition function yd(t). For verifying the feasibility and effectiveness of the whole set of mathematical models, time series from September 1 to September 3 are simulated with intervention of the monitoring static wind (Figure 18(a)) and then compared with the monitoring ones during the same period (Figure 18(b)). According to the mathematical modeling theory and procedure above, the comparison is divided into two parts: (1) time series of static girder displacement; (2) time series of dynamic girder displacement. As for the first part, the simulative and monitoring results are shown in Figure 19(a) and the linear fitting curve of the correlation scatter plots is shown in Figure 19(b), presenting a consistent variation tendency in Figure 19(a) and the simulative results y_s(t) approximating the monitoring results y_m(t) in Figure 19(b) (where y_s(t) and y_m(t), respectively, denote the simulative and monitoring results), which verifies the good feasibility and effectiveness of the mathematical models for static girder displacement. As for the second part, the power spectrum densities of the simulative and monitoring results are shown in Figure 20, presenting a uniform variation tendency of both the whole trends and the acute peaks, which verifies the good feasibility and effectiveness of the mathematical models for dynamic girder displacement. Therefore, the mathematical models above can be reasonably utilized to simulate time series of lateral girder displacement from the main girder of Sutong Cable-Stayed Bridge. Conclusions Based on the monitoring data from the main girder of Sutong Cable-Stayed Bridge, time series of the lateral girder displacement effect are mathematically modeled by the methods of fitting Fourier series and Gaussian series, the combined models of ARMA(7, 4) and EGARCH(2, 1), and the harmonic superposition function. The conclusions can be drawn as follows.
(1) Scatter plots between lateral static wind velocity and lateral static displacement present an apparent nonlinear correlation, which is similar to a quadratic parabolic curve, and time series of lateral dynamic displacement contain an obviously stable power spectrum density with no variation by time. (2) Time series of lateral static displacement can be decomposed into a nonlinear correlation term and a discreteness term. Moreover, the nonlinear correlation term can be mathematically modeled by third-order Fourier series with intervention of lateral static wind velocity, and the discreteness term can be mathematically modeled by the combined models of ARMA(7, 4) and EGARCH(2, 1). (3) Through decreasing discreteness in double frequency scales and division in double frequency bands, the power spectrum density of lateral dynamic displacement can be mathematically modeled by the fourth-order Gaussian series, and time series of lateral dynamic displacement can be further mathematically modeled by the harmonic superposition function. (4) By the comparison between the simulative and monitoring lateral displacement effects from September 1 to September 3, the mathematical models are feasible and effective to simulate time series of lateral girder displacement from the main girder of Sutong Cable-Stayed Bridge.

Figure and table captions: Figure 4: Time series of wind velocity along the lateral axis in the whole of August. Figure 5: Time series of girder displacements along the lateral axis in the whole of August. Figure 7: Power spectrum densities of dynamic girder displacement in different periods. Figure 11: The statistical values of AIC and BIC. Figure 13: The standard errors between processed discreteness and its models with lower orders. Figure 15: Original and processed power spectrum densities. Figure 17: Time series of simulative dynamic displacement and its power spectrum density. Figure 19: Simulative and monitoring results and their correlation scatter plots with fitting curve. Figure 20: Power spectrum densities of simulative and monitoring results. Table 4: Estimated parameter values of the fourth-order Gaussian series.
2018-12-19T23:44:30.513Z
2014-06-19T00:00:00.000
{ "year": 2014, "sha1": "6ef66c48620650f7dfc0c10d1b53883bd39c6451", "oa_license": "CCBY", "oa_url": "https://downloads.hindawi.com/journals/mpe/2014/723152.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "6ef66c48620650f7dfc0c10d1b53883bd39c6451", "s2fieldsofstudy": [ "Engineering", "Environmental Science" ], "extfieldsofstudy": [ "Mathematics" ] }
259190453
pes2o/s2orc
v3-fos-license
Evaluation of fMRI activation in post-stroke patients with movement disorders after repetitive transcranial magnetic stimulation: a scoping review Background Movement disorders are one of the most common stroke residual effects, which cause a major stress on their families and society. Repetitive transcranial magnetic stimulation (rTMS) could change neuroplasticity, which has been suggested as an alternative rehabilitative treatment for enhancing stroke recovery. Functional magnetic resonance imaging (fMRI) is a promising tool to explore neural mechanisms underlying rTMS intervention. Object Our primary goal is to better understand the neuroplastic mechanisms of rTMS in stroke rehabilitation, this paper provides a scoping review of recent studies, which investigate the alteration of brain activity using fMRI after the application of rTMS over the primary motor area (M1) in movement disorders patients after stroke. Method The database PubMed, Embase, Web of Science, WanFang Chinese database, ZhiWang Chinese database from establishment of each database until December 2022 were included. Two researchers reviewed the study, collected the information and the relevant characteristic extracted to a summary table. Two researchers also assessed the quality of literature with the Downs and Black criteria. When the two researchers unable to reach an agreement, a third researcher would have been consulted. Results Seven hundred and eleven studies in all were discovered in the databases, and nine were finally enrolled. They were of good quality or fair quality. The literature mainly involved the therapeutic effect and imaging mechanisms of rTMS on improving movement disorders after stroke. In all of them, there was improvement of the motor function post-rTMS treatment. Both high-frequency rTMS (HF-rTMS) and low-frequency rTMS (LF-rTMS) can induce increased functional connectivity, which may not directly correspond to the impact of rTMS on the activation of the stimulated brain areas. Comparing real rTMS with sham group, the neuroplastic effect of real rTMS can lead to better functional connectivity in the brain network in assisting stroke recovery. Conclusion rTMS allows the excitation and synchronization of neural activity, promotes the reorganization of brain function, and achieves the motor function recovery. fMRI can observe the influence of rTMS on brain networks and reveal the neuroplasticity mechanism of post-stroke rehabilitation. The scoping review helps us to put forward a series of recommendations that might guide future researchers exploring the effect of motor stroke treatments on brain connectivity. Introduction According to The Global Burden of Diseases, Injuries, and Risk Factors Study (1), stroke accounted for 101 million prevalent cases, 6.55 million deaths, and 143 million disability-adjusted life-years worldwide in 2019. Ischemic and hemorrhagic stroke are the two main types of stroke. Ischemic stroke was defined as an episode of neurological dysfunction resulting from decreased blood flow to a certain area of the brain. In contrast, hemorrhagic stroke was not brought on by trauma but rather by a weak blood vessel that bursts and bleeds into the surrounding brain tissue (2). Stroke is the principal cause of serious disability globally, and movement disorders are one of the most prevalent sequelae of stroke (3). Meanwhile, the recovery of movement disorders after stroke are often incomplete, which cause a major stress on their families and society (4). 
Stroke results in neuronal death in the directly damaged brain regions. On the other hand, cortical regions remote from directly damaged areas also undergo secondary degeneration or reorganization, leading to widespread changes in the structure and function of the whole brain network (5). In a word, the direct injury effects of motor neurons and their descending axons, as well as abnormal connections remote from the injured lesion, may be important pathophysiological factors for stroke residual effects (6). These changes are closely related to neurological function deficits and subsequent recovery after stroke (7). It is considered that recovery of motor function following stroke depends on neuroplasticity (8). The nervous system's ability to adjust to pressure from the environment, new experiences, and changesincluding brain injury-is known as neuroplasticity (9,10). The development of our knowledge of the neuroplastic changes has inspired researchers to look into methods of anticipating probable post-stroke recovery. Repetitive transcranial magnetic stimulation (rTMS) has already aroused increasingly attention as a tool for modulating stroke-induced abnormal brain network activity and functional connections, which allow the brain to change and adapt to injury following stroke (11). rTMS could improve movement disorders by enhancing the neural plasticity of the brain networks (12). Additionally, its long-lasting neuromodulation beyond the stimulation period could make rTMS suitable for the treatment of movement disorders after stroke (13). High-frequency rTMS (HF-rTMS) on the ipsilesional hemisphere can upregulate the excitatory effects of the ipsilesional cortex, while low-frequency rTMS (LF-rTMS) on the contralesional hemisphere can downregulate excitatory effects of the ipsilesional cortex (14) (Figure 1). They have been widely utilized during the acute, subacute, and chronic phases, and have been proven to restore motor function after stroke (15). Despite the benefits associated with rTMS, such as motor recovery, the mechanisms through which rTMS exerts therapeutic effects remain poorly understood. Functional magnetic resonance imaging (fMRI) measures changes in blood oxygen levels in the brain, and the blood-oxygenlevel-dependent (BOLD) signal evaluates brain activity, with better temporal resolution than PET and SPECT, and superior spatial resolution compared to EEG and MEG (16). People can use fluctuations in the BOLD signal to assess functional connectivity, a method identifying correlation patterns between brain regions (17). fMRI is a non-invasive imaging technique to accurately describe the reorganization of the cerebral cortex, changes in interhemispheric balance, and activity changes of the hemispheres (18). fMRI includes task-based fMRI and resting-state fMRI(rs-fMRI). Task-based fMRI is a technique that requires subjects to perform specific tasks during scanning. Researchers have used fingertapping task-based fMRI to investigate changes in the activation of the sensorimotor network pre-and post-rTMS stimulation (19). rs-fMRI is sensitive to changes in deoxyhemoglobin, which can indirectly reflect changes in neuronal activity (20). Thus, rs-fMRI are applicable to stroke survivors with motor dysfunctions. fMRI has been used to explore the underlying mechanisms of rTMSmediated neuronal modulation and make us better understand the plasticity in the brain network. 
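To make the functional connectivity measure referred to above concrete, the sketch below computes a seed-based FC value as the Pearson correlation between two region-of-interest BOLD time series (for example, ipsilesional and contralesional M1), followed by the Fisher z-transform commonly used before group statistics. It is a minimal illustration assuming already preprocessed, ROI-averaged time series; the array names are hypothetical and this is not the pipeline used by any of the reviewed studies.

```python
import numpy as np

# Hypothetical preprocessed, ROI-averaged BOLD time series (one value per volume)
rng = np.random.default_rng(42)
ipsi_m1 = rng.standard_normal(240)                    # ipsilesional M1
contra_m1 = 0.4 * ipsi_m1 + rng.standard_normal(240)  # contralesional M1 (correlated by construction)

# Functional connectivity = Pearson correlation of the two time series
r = np.corrcoef(ipsi_m1, contra_m1)[0, 1]

# Fisher z-transform, the usual step before averaging or group-level tests
z = np.arctanh(r)

print(f"interhemispheric M1-M1 FC: r = {r:.3f}, Fisher z = {z:.3f}")
```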
The neural response induced by rTMS is not confined to the stimulated brain area, but can also spread to other cortical regions remote from the stimulated area (21). Hence, rTMS can affect extensive brain functional networks and even whole-brain activity (22). Researchers and clinicians have harnessed these neuromodulatory effects to promote motor recovery in stroke survivors with the aid of rTMS, possibly owing to alterations in the regions below the stimulation site and in the connected sensorimotor networks (23,24). Most studies have used behavioral and neurophysiological measures to evaluate how rTMS affects stroke motor recovery (25). Nevertheless, they were unable to offer information on brain changes with an outstanding spatial resolution. To date, the neural mechanisms associated with rTMS intervention and their influence on functional connections are rather complicated and poorly understood. From a clinical perspective, a profound understanding of the neural mechanisms underlying recovery is helpful for developing efficient therapy in the future. In order to better understand and use the rTMS technology, we conducted a scoping review to determine the rTMS-induced neural plasticity measured through fMRI in post-stroke patients with movement disorders. Method Scoping review For this scoping review, we used the methodological framework developed by Arksey and O'Malley. Identifying the research question In this review, we will concentrate on the alteration of brain networks measured by fMRI after rTMS over M1 in stroke patients with movement disorders, and seek to understand the brain mechanisms of rTMS. The scoping review may provide guidance for rTMS use as a therapeutic tool in movement disorders after stroke. Selecting studies Inclusion and exclusion criteria We formulated inclusion and exclusion criteria. The inclusion criteria were: (1) the study population included hemiplegic patients with ischemic or hemorrhagic stroke who underwent functional assessment and analyzed fMRI; (2) rTMS was performed as the treatment; (3) fMRI was used as a tool to investigate brain activation before and after rTMS; (4) the study was published in English or Chinese. Studies were excluded if the paper could not be located, the article was withdrawn, or the complete text was not accessible. Data management, screening, and extraction The following phases were involved in study selection: titles and abstracts were reviewed for relevance by two reviewers according to the inclusion and exclusion criteria above. Full texts were then screened. The two authors reached a consensus to determine whether a study should be included or excluded. If the two authors failed to reach consensus, a third author would have been consulted. Included articles were then examined to extract data. The process of identification, screening, eligibility, and inclusion of studies is pictured in Figure 2. Charting the data Critical appraisal Although a critical appraisal is not required by the scoping review, previous studies propose that the quality of evidence is an important part of this type of review (27). We chose the Downs and Black quality assessment checklist (28) to evaluate the level and quality of both randomized and non-randomized controlled trials. According to the following cut-off points, we categorized the studies as excellent (26-28), good (20-25), fair (15-19), and poor (≤14) (29).
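A small helper applying these cut-off points might look like the following; the function name and the example scores are illustrative only, not part of the review protocol.

```python
def downs_black_band(score: int) -> str:
    """Map a Downs and Black total score to the quality band used in this review."""
    if score >= 26:
        return "excellent"   # 26-28
    if score >= 20:
        return "good"        # 20-25
    if score >= 15:
        return "fair"        # 15-19
    return "poor"            # <= 14

print([downs_black_band(s) for s in (27, 22, 17, 12)])
```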
Data collection and synthesis The data from each study were collected and categorized: each individual study (first author, year, and country of publication), study type, intervention delivered, outcome measurement and population characteristics. Regarding the rTMS protocol, the following stimulus parameters were extracted and recorded: targeted regions, stimulation intensity and frequency, number of TMS pulses per session, number of sessions, course and time. To evaluate whether and how rTMS modulated neural activity, we extracted the changes observed pre-and post-rTMS. Collating, summarizing, and reporting the results Seven hundred and eleven records were discovered in the database, 273 duplicate records were removed, and nine records passed the screening procedure' inclusion criteria. Figure 2 displays the screening procedure and the justifications for excluding research. The research contents are summarized in Table 2. In the included studies, the outcome measure was the Fugl-Meyer . The MRC scale has been the common and widely accepted assessment scale for muscle power, which grades muscle power on a scale of 0-5 (39). The WMFT consists of time and quality scales evaluating a set of 15 upper extremity functional tasks (40). As one of the most comprehensive quantitative evaluation indicators following stroke, FMA has been widely used in assessing reflex activity, motor control, and muscle strength, which consists of 33 items related to the motor function and the maximum exercise result is 66 points (41). The Motricity Index (MI) is an effective tool to assess stroke patients with motor dysfunction (42), which assessed muscle power by analyzing movements of all joints of extremities (43). mRS is a single item scale measuring the degree of disability or dependence in the daily activities for patients post-stroke (44). It is a welldesigned scale, which is used to classify functionally independent levels with reference to pre-stroke activities. BI was is a frequentlyused clinical assessment tool for daily living, thus reflecting motor function (45). As one of the most widely-used assessments of functional independence, BI is much more sensitive to changes in disability than the mRS (46). Result Search results Databases searches identified 711 articles. After screening titles, abstracts, and full texts for eligibility, nine articles were included. Four studies were conducted in China, two in Japan, two in Germany, and one in Turkey. Five studies were RCTs, three were pre-post-test trials, and one was non-randomized controlled trials. A summary of the characteristics of the included papers is summarized in Table 2. Figure 3 shows the findings regarding the effects of rTMS on fMRI and motor performance in stroke survivors. Table 3 displayed the outcomes of the quality evaluation of each study. Five studies were rated as having good quality and four as fair according to the Downs and Black criteria. The experimental hypothesis, the primary clinical features of the patients, the intervention techniques, and the key findings were all well reported in the papers under review. However, the studies met the requirements for the reporting section, but none provided the principal confounders in the groups and reported adverse events of intervention. Six studies failed to adhere to the requirement for blinded outcome assessment and participants blinded to treatment. 
Rs-fMRI studies of HF-rTMS to the ipsilesional M1 By measuring the effects of HF-rTMS (10 Hz) applied to the ipsilesional M1 on BOLD signals, Du et al. (32) found that HF-rTMS increased the BOLD signal in the ipsilesional motor areas. Improved motor performance was observed in conjunction with the fMRI changes. The effects of HF-rTMS (10 Hz) and LF-rTMS (1 Hz) were contrasted by Juan et al. (31). When 10 Hz rTMS was applied to the ipsilesional M1, they found an increase in resting-state functional connectivity (FC) between the bilateral M1, which had a positive relationship with motor performance. Guo et al. (33) assessed the effect of functional reorganization following rTMS in stroke survivors as well as the differences between HF-rTMS and LF-rTMS. In the HF-rTMS group, they found higher FC between the ipsilesional M1 and the contralesional premotor area (PMA), which suggests that HF-rTMS can increase the FC of the ipsilesional motor network and enhance motor function. Significant functional connectivity changes after rTMS are summarized in Figure 4. Significant activations of the brain areas after rTMS are summarized in Figure 5. Rs-fMRI studies of LF-rTMS to the contralesional M1 Du et al. (32) showed that LF-rTMS reduced brain excitability and fMRI activation in the contralesional motor region. These changes were accompanied by improved motor activity. Juan et al. (31) reported that 1 Hz rTMS applied to the contralesional M1 resulted in enhanced FC between the contralesional M1 and the ipsilesional SMA. Motor improvement evaluated using the FMA and MRC scale was significantly greater in the real rTMS group compared with the sham group. Gottlieb et al. (37) found that connectivity to the left angular gyrus increased after LF-rTMS over M1. The modified Ashworth scale (MAS) score was reduced and the FMA score improved in the LF-rTMS group, suggesting motor improvement. Significant functional connectivity changes after rTMS are summarized in Figure 4. Significant activations of the brain areas after rTMS are summarized in Figure 5. Rs-fMRI studies of HF + LF-rTMS to the M1 Chen et al. (34) found that inhibitory-facilitatory rTMS treatment induced greater increases in FC between multiple brain regions in comparison to the other groups using single-course or sham rTMS, resulting in great improvements in motor function. Significant functional connectivity changes after rTMS are summarized in Figure 4. Significant activations of the brain areas after rTMS are summarized in Figure 5. Task-fMRI studies of LF-rTMS to the contralesional M1 Ueda et al. (30) found significant FC changes in the bilateral cerebral hemispheres after LF-rTMS during task-fMRI. According to Grefkes et al. (36), 1-Hz rTMS has suppressive effects on ipsilesional M1 and facilitates more efficient motor processing in the contralesional hemisphere, as shown by the improved coupling of SMA and M1. Wanni et al. (35) showed that significant activations were seen in the ipsilesional PMA, M1, and thalamic-cortical regions with the paretic hand movements after rTMS. However, significant activations in the contralesional primary somatosensory cortex (S1), superior parietal cortex, and bilateral cerebellum with unaffected hand movements were observed after the intervention. There was a considerable improvement in FMA and WMFT from pre- to post-rTMS. Tosun et al. (38) reported that greater activation of the affected M1 was observed during movements of the paretic hand in most patients of the TMS group and the TMS + NMES group. Significant functional connectivity changes after rTMS are summarized in Figure 4. Significant activations of the brain areas after rTMS are summarized in Figure 5.
Neural plasticity There is growing evidence to suggest an association of rTMS with the induction of neural plasticity to promote stroke recovery (47). "Neural plasticity" refers to the ability of the nervous system to modify itself in response to injury and to adapt following trauma by remodeling its structure, functions, or connections (48). The processes of neural plasticity include altered excitability of neuronal circuits, reorganization by using redundant connections, and formation of new functional connections, which may partly compensate for the lost function. Based on neuroplasticity research, motor function recovery after stroke is related to both spontaneous neuroplasticity changes and rTMS-induced plasticity (49). During the post-stroke recovery stage, spontaneous remodeling of neural networks is accompanied by functional recovery (50,51). It is possible that these spontaneous changes in neuroplasticity are associated with pathophysiological mechanisms, such as salvage of the ischemic penumbra by revascularization and the release of neurotrophic factors and neurotransmitters (52). However, these spontaneous neuroplasticity changes have a limited effect on stroke recovery (53). After stroke, rehabilitation can promote these dynamic processes by increasing effective neuronal information input and promoting neural repair and functional compensation (54). rTMS has been identified as a successful rehabilitation method for improving stroke recovery by promoting neuroplasticity. The application of fMRI provides a good understanding of the mechanisms involved. Recently, plasticity-inducing rTMS has been combined with fMRI to map plastic aftereffects (55). The neuroplastic mechanisms of motor function recovery after stroke are summarized in Figure 6. Plasticity changes during recovery Neural plasticity takes place at a very early stage after stroke, lasts for some time, and involves brain regions remote from the lesioned site. Schulz et al. (56) showed that corticospinal tract fibers originating not only from M1 but also from the ipsilesional PMA and SMA contribute to motor function after stroke. Pathologically, damage to brain regions related to motor function, such as M1, PMA, SMA, and S1 (57), contributes to hemiplegia following stroke. Activation in M1 of the affected hemisphere is reduced and the activation is relocated toward the PMA and SMA during movements of the affected hand after stroke (58,59). By comparing the shift in sensorimotor cortical excitability in both hemispheres, it was found that the activation of the contralesional sensorimotor cortex increases during movements of the paretic hand in most post-stroke patients (60). After a stroke, there are alterations in neural activity in both cerebral hemispheres, which can be beneficial but can also contribute to maladaptive recovery (61). Conventionally, LF-rTMS or HF-rTMS is effective in improving motor functions by rebalancing the excitability of the two hemispheres (32). There have been many studies on local and global functional connection analysis. The most consistent conclusion is that the interhemispheric connection between regions involved in motor function changes significantly, and the degree of connectivity is related to motor function. In the early post-stroke period, motor network resting-state connections progressively decrease (62).
fMRI studies demonstrated that patients with reduced functional connections between the bilateral M1 show more severe motor impairment, and that increased functional connection between the ipsilesional M1 and the contralesional thalamus, SMA, and middle frontal gyrus at the acute stage of stroke was beneficial to motor recovery (63). The decreased interhemispheric functional connections between homologous motor areas were associated with the degree of motor impairment in the acute phase (64), while the enhanced functional connections had a positive correlation with the spontaneous recovery of motor function in the weeks to months after stroke (62). Patients with good motor function recovery showed functional connections between homologous motor regions returning to normal levels in the stable phase after stroke. However, in patients with poor motor function recovery, functional connections remained low (65,66). Therefore, it is necessary to seek alternative approaches that strengthen the natural plasticity of the sensorimotor system, which would be particularly effective in enhancing motor improvement. In rTMS studies, functional connectivity (FC) evaluation is of value because it enables the evaluation of rTMS effects. The bimodal balance-recovery model According to the interhemispheric competition model, the mutual, balanced inhibition between the hemispheres present in healthy people is altered after stroke (12). Damage to one hemisphere in stroke results not only from neuronal loss within the affected hemisphere but also from the downregulation of remaining neurons within the affected hemisphere, resulting in increased inhibition of the affected hemisphere by the unaffected hemisphere (12). That is to say, both the stroke itself and the excessively high interhemispheric inhibition from the contralesional hemisphere result in a double impairment of the ipsilesional cortex. Based on TMS therapy (30), ipsilesional motor areas play an active and vital role in stroke recovery. Therefore, the downregulation of contralesional cortical excitability may be helpful for enhancing motor recovery. Researchers and clinicians have used rTMS to restore the interhemispheric balance and gain motor improvement by either enhancing the activation of the lesioned hemisphere (38) or inhibiting the healthy hemisphere. However, it has been found that HF-rTMS given over the M1 of the unaffected side is more effective than LF-rTMS in patients with significant damage to the affected side (67). The significance of the contralesional hemisphere in motor recovery following stroke has been thoroughly investigated in the past few years. According to Ueda et al. (30), the cortical activity of the ipsilesional and the contralesional motor areas changes synchronously during movements of the paralyzed hand after rTMS in stroke survivors with moderate disability. fMRI research found that the contralesional PMA was significantly activated during movement of the paralyzed hand in stroke survivors (68). A meta-analysis (69) shows that consistently activated regions involved the contralesional M1 and the bilateral PMA and SMA in stroke patients compared to healthy controls. The extent of this activation is likely to be influenced by the size and location of the injured region, being greatest in stroke survivors with the greatest impairment.
From what has been discussed above, enhanced activation of the PMA, SMA, and the contralesional hemisphere compensates for the lost function in stroke survivors with severe motor impairment. Thus, the vicariation model (12) was proposed, which assumes that activity in residual networks helps stroke patients recover function lost to damaged areas. Similarly, rTMS can support residual motor function following stroke by inducing positive compensatory effects in contralesional mirror motor regions (67). Vicariation and interhemispheric competition are two models of functional recovery that hold opposite views and imply different neuromodulatory treatments. According to interhemispheric competition, stroke recovery would be facilitated by downregulation of the contralesional hemisphere, since this would free the injured hemisphere from aberrant inhibition by the contralesional hemisphere. The vicariation model, however, contends that such a tactic would impede compensatory activity in the contralesional hemisphere. Yet neither the interhemispheric competition model nor the vicariation model can account for all stroke recovery. Therefore, a novel theory called the bimodal balance-recovery model (12) was put forward to account for the various contributions made by the contralesional hemisphere to post-stroke motor recovery. This biphasic recovery model, combining the transcallosal suppression model and the compensatory model, may be the neurophysiological basis of functional recovery after stroke. Due to their high level of structural preservation, the interhemispheric competition model is more applicable to stroke survivors with moderate motor impairment. In contrast, for patients with little structural reserve, the vicariation model may be more helpful in predicting recovery. Excessive activation in the contralesional M1 and nonprimary motor areas can be seen in the early phase after stroke, suggesting recruitment of these brain regions after the cerebrovascular accident (70). This is in accordance with the finding of overactivity in the ipsilesional and contralesional PMC, SMA, and parietal cortex, and in the contralesional M1, during movements of the hemiplegic hand (71). In short, both cerebral hemispheres may participate in the restoration of motor function brought about by rTMS intervention. More research is needed to examine whether rTMS therapy can benefit more stroke patients through individualized stimulation techniques.

RTMS for modulating plasticity following stroke

RTMS improves neural activity

Pathologically, damage to specific brain regions associated with motor functions, such as M1, PMA, and SMA, can lead to motor disorders. The simultaneous activation of bilateral sensorimotor cortices strengthens the coherence of cortical activity, a crucial neurophysiological mechanism promoting interaction via transcallosal connections between the related brain regions. rTMS has been considered an effective strategy for promoting the recovery of motor functions, as it may cause alterations in brain activity and connectivity in local and distant areas after stroke (8). To rebalance cortical excitability between the two sides, HF-rTMS and LF-rTMS have been used to facilitate the affected M1 and inhibit the unaffected M1, respectively (32). In both cases, rTMS can correct the maladaptive brain plasticity induced by stroke or enhance adaptive brain plasticity.
Based on the fMRI findings, significant clusters in the bilateral cerebellum were observed during unaffected hand movements after rTMS, suggesting that the cerebellum plays a role in stroke recovery (35). The cerebellum maintains many neural connections with the motor cortex and contributes to motor skills such as motor coordination, fine motor control, and motor learning (72). Moreover, the fMRI findings also showed that activations in bilateral thalamocortical circuits are associated with affected hand motions after rTMS (35). The thalamus serves as a relay station for the sensorimotor route, sending relevant sensory and motor information to the cortex (73). Activations of the corpus callosum after rTMS in the included study (35) are broadly consistent with previous research that indicated similar changes during affected hand movements after intervention (74). Another study demonstrated that changes in the structure of the corpus callosum are related to transcallosal inhibition and upper limb dysfunction in chronic stroke (75). The regaining of motor function in stroke patients may be influenced by the corpus callosum, which acts as a conduit for information between the brain hemispheres. Activation of motor cortices triggers brain plasticity, which may lead to enhanced cortico-subcortical connections and pathways that are associated with functional recovery after stroke (53, 76). In the included studies, we found that rTMS reorganizes not only motor-related networks (M1, SMA, and PMA) but also the cerebellum, the thalamus, and the corpus callosum.

RTMS improves functional connectivity

Functional connectivity quantifies the functional integration of different brain areas by correlating their activity, providing a convincing way to detect neural interactions between regions (77). Many brain functions are related not only to particular brain regions but also to the connections between regions. A localized neurological condition may damage the brain's structural components, and this damage may have an impact on how distant brain areas function (78). Stroke may result in neurological impairment by affecting localized, specific areas of the brain, but growing evidence shows that network effects resulting from the loss of connections between distant brain areas are an important cause of neurological impairment (5). Restoration of interhemispheric functional connectivity was only seen in stroke survivors who had recovered well, not in those who had recovered poorly (79), suggesting that interhemispheric functional connectivity in stroke patients is a significant indicator of recovery (80). Therefore, increasing motor network connections after stroke may be particularly useful for promoting motor recovery. Furthermore, rTMS affects not only the stimulated functional network but also physically or functionally related remote brain areas. A large body of studies suggests that increased connections between major motor areas in both cerebral hemispheres may underlie rTMS-mediated functional gains. By altering connections between motor areas in both the stimulated and non-stimulated hemispheres, rTMS may improve motor function by reversing pathological alterations in the functional network architecture. The majority of the interhemispheric connections involved in motor performance include M1, S1, and PMA.
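Functional connectivity of the kind discussed here is typically estimated by correlating regional fMRI time series. As an illustration only, and not the pipeline used in any of the cited studies, the following minimal Python sketch computes a Pearson-correlation FC matrix from hypothetical ROI time series; the array shapes and ROI labels are assumptions made for the example.

```python
import numpy as np

def fc_matrix(timeseries):
    """Pearson-correlation functional connectivity.

    timeseries: array of shape (n_timepoints, n_rois), one BOLD
    time series per region of interest (ROI).
    Returns an (n_rois, n_rois) correlation matrix.
    """
    return np.corrcoef(timeseries, rowvar=False)

# Hypothetical example: 200 time points for 4 motor ROIs
# (ipsilesional M1, contralesional M1, SMA, PMA).
rng = np.random.default_rng(0)
ts = rng.standard_normal((200, 4))
fc = fc_matrix(ts)

# Interhemispheric M1-M1 coupling, often reported in rTMS studies.
print("M1-M1 connectivity:", fc[0, 1])
```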
Interhemispheric functional connections between the ipsilesional and contralesional M1 are significantly decreased after stroke (81). Studies (31,33) indicated that increased functional connectivity of the ipsilesional/contralesional M1 could serve as the main target of motor rehabilitative regimes for stroke patients. Guo et al. (33) demonstrated that increased functional connections between the ipsilesional M1 and the contralesional PMA were associated with motor recovery after rTMS, suggesting that the contralesional PMA may contribute to mediating motor recovery after stroke. Grefkes et al. (36) showed that LF-rTMS enhanced coupling of SMA and M1, constituting a significant mechanism for better motor function. Some rodent studies confirmed that functional improvement depends mostly on the restoration of neural networks in the ipsilesional hemisphere (82). Motor recovery results from a repair of ipsilesional effective connectivity between SMA and M1 as well as a reduction of abnormal transcallosal influences. The activation of SMA is linked to attention to intention, which is essential for voluntary motor movement, and it is an important compensatory area for movement (83). Gottlieb et al. (37) found that connectivity between the motor cortex and the angular gyrus increased after LF-rTMS, and the angular gyrus is responsible for controlling upper limb motions (84). A previous study has shown that the postcentral gyrus is the primary somatosensory cortical center (85). Preservation of connectivity between the ipsilesional M1 and the contralesional postcentral gyrus indicates better motor performance (86). Therefore, activation of the contralesional postcentral gyrus is a significant part of the reorganization of motor function. Upper limb movement is closely related to the M1 region located in the precentral gyrus (87). A better prognosis of motor function is associated with activation of the precentral gyrus and postcentral gyrus in stroke patients (88). In an earlier study, the superior parietal gyrus was shown to be involved in the spatial positioning of objects and visual-motor coordination (89). Chen et al. (34) confirm increased functional connectivity in the contralesional precentral gyrus, postcentral gyrus, and parietal gyrus during the recovery of motor function. Changes in interhemispheric connectivity are associated with an imbalance between the contralesional and ipsilesional hemispheres after stroke onset. Interhemispheric connectivity is increased by the rebalancing of bilateral hemispheric networks during the recovery period. These functional connectivity changes were linked to the restoration of motor function and could be targeted for neurorehabilitation interventions following stroke. Although LF-rTMS and HF-rTMS have opposite effects on the cortical activity of the directly stimulated region, both frequencies of rTMS tend to increase functional connectivity. In particular, most studies using an LF-rTMS protocol reported increases in functional connectivity rather than reductions. Interhemispheric functional connectivity decreases sharply after stroke and increases significantly in parallel with motor recovery after rTMS, underscoring the importance of interhemispheric functional connectivity in the recovery of motor function after stroke.
In summary, these studies have demonstrated that rTMS can promote interhemispheric connectivity by increasing activation in ipsilesional regions, enhancing excitatory connectivity from ipsilesional to contralesional brain regions, and reducing stroke-induced transcallosal inhibition, thereby facilitating the recovery of motor performance.

RTMS protocols

It has been shown that the long-lasting after-effects of plasticity-inducing rTMS can readily influence human behavior. The combination of rTMS with fMRI provides a unique opportunity to elucidate the mechanistic basis for such behavioral effects. Thus, this combination allows us to gain insight into the local and distant effects of different interventions and provides a means of addressing changes in functional connectivity that underlie potential behavioral effects. The currently accepted strategy to promote the recovery of motor function after stroke is to either apply HF-rTMS (31)(32)(33) to the M1 region of the ipsilesional hemisphere or apply LF-rTMS (30)(31)(32)(33)(35)(36)(37)(38) to the M1 region of the contralesional hemisphere. In clinical practice, the combination of HF-rTMS and LF-rTMS is relatively rare, but some studies have shown that combining the two is feasible and safe, with a therapeutic effect better than either applied alone (90). In comparison to sham stimulation or single-course rTMS, coupled inhibitory-facilitatory rTMS significantly improved motor function. rTMS is a noninvasive neuromodulation technique whose stimulation intensity, frequency, duration, and target location all influence its modulatory effects (91). Both HF-rTMS and LF-rTMS can improve motor function by rebalancing motor cortex excitability and regulating connectivity in patients after stroke (32). Du et al. (25) found that rTMS could dramatically enhance motor function, that this enhancement was strongly connected with changes in motor cortex excitability, and that LF-rTMS may have more profound effects than HF-rTMS. HF-rTMS, in contrast, offers greater advantages for the reorganization of functional connections of the motor network in the ipsilesional hemisphere, bringing greater benefits for the recovery of motor disorders (33). A meta-analysis revealed that HF-rTMS is marginally more efficient than LF-rTMS (92); another meta-analysis, however, revealed a different outcome (93). Chen et al. (34) also found that the combined application of low-frequency and high-frequency rTMS had a synergistic effect on improving motor function and cortical excitability of patients. At present, the optimal effective frequency of rTMS is still uncertain. The 2014 European guidelines on the therapeutic use of rTMS indicate that either LF-rTMS or HF-rTMS can be used to restore motor function after stroke (94). The application of LF-rTMS shows level A evidence for motor stroke in the post-acute phase, as well as level C evidence in the chronic phase, while the application of HF-rTMS shows level A evidence in the post-acute phase (15).
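To make the protocol distinctions above concrete, the following minimal sketch represents an rTMS prescription as a small data structure. The specific frequency, intensity, and pulse-count values are illustrative assumptions only, not recommendations drawn from the cited guidelines.

```python
from dataclasses import dataclass

@dataclass
class RTMSProtocol:
    target: str               # stimulation site, e.g. "ipsilesional M1"
    frequency_hz: float       # low frequency (~1 Hz) inhibits, high frequency (>=5 Hz) facilitates
    intensity_pct_rmt: float  # percent of resting motor threshold (assumed value)
    pulses_per_session: int
    sessions: int

# Hypothetical examples of the two commonly used strategies described above.
hf_ipsilesional = RTMSProtocol("ipsilesional M1", 10.0, 90.0, 1000, 10)
lf_contralesional = RTMSProtocol("contralesional M1", 1.0, 90.0, 1200, 10)
print(hf_ipsilesional, lf_contralesional, sep="\n")
```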
Limitations and future direction

The summary of current evidence suggests that rTMS may change neural plasticity, which is associated with movement improvement after stroke. This scoping review revealed some important findings. First, in the process of searching the literature, we discovered that most rTMS studies focused only on functional connectivity changes in the motor network, and few studies have involved deep brain areas. However, many movement disorders, including Parkinson's disease and stroke, are associated with damage to the thalamus, putamen, cerebellum, and other subcortical regions (95, 96). As the studies included in this review seldom examined whole-brain connectivity, focusing instead on specific regions of interest, more research should pay attention to whole-brain connectivity. In the future, the regulatory mechanism of rTMS on deep brain regions can be explored by analyzing changes in functional connectivity in cortical and subcortical regions, which will provide a reference for the treatment of these diseases. Second, it should be highlighted that the number of studies applying fMRI to investigate the aftereffects of rTMS on the functional brain network following stroke is relatively limited, and the effects of rTMS, in particular, have seldom been mapped with task-based fMRI. Future research should concentrate on mapping the effects of rTMS on task-related activity and connectivity (97). Third, the last three decades have seen progress in neuroimaging technology, which allows neuroplasticity to be examined noninvasively. However, each method has its merits and demerits. Multimodal neuroimaging can merge information and overcome some of the limitations of stand-alone neuroimaging methods, permitting more granular inspection of the mechanisms underlying the plastic after-effects of rTMS protocols. However, studies using multimodal fusion technology to explore neural mechanisms underlying rTMS intervention are relatively rare. Therefore, incorporating multimodal neuroimaging into clinical trials will help expand our knowledge of the neural mechanisms underlying rTMS intervention and ultimately tailor the therapeutic use of rTMS. Fourth, fMRI studies have confirmed the role of bilateral M1 areas after stroke, indicating that functional connectivity between these areas, and between M1 and other brain areas, is associated with motor recovery (63). Although M1, as an important brain area responsible for planning and performing motor functions, is the ideal stimulation target, stimulating the PMA instead has many benefits (98). When brain damage is severe, the contralesional PMA exerts an excitatory effect on M1 of the affected hemisphere; when brain lesions are small, the contralesional PMA exerts an inhibitory effect (99). SMA is related to complicated movements and their programming, which is a crucial issue to address in motor skill recovery. In addition, for patients with motor dysfunction accompanied by cognitive impairment or depression, the left dorsolateral frontal lobe can be selected as the stimulation target (100), and the recovery of motor function can be promoted by improving cognitive and depressive symptoms. To date, studies have focused mostly on the motor cortex, whereas other parts of the brain are still underrepresented. Future research may deepen the topic by concomitantly evaluating different target areas. Lastly, none of the studies performed fMRI during rTMS; an fMRI scan was usually performed at baseline and after an rTMS course lasting several weeks. This approach is limited in its sensitivity to capture post-stimulation effects and is likely to fail to map the immediate consequences of the stimulation.
To solve this problem, simultaneous rTMS-fMRI, in which rTMS is applied during the fMRI scan, permits research into the immediate effects of rTMS and the underlying processes of rTMS-mediated neuronal regulation.

Conclusion

The studies reviewed above strongly suggest that rTMS can be used to modulate disturbed cortical networks, thereby improving motor function. As a noninvasive imaging method, fMRI is commonly used in brain activation studies. It works on the basis of changes in blood flow caused by neuronal activity: in response to neuronal activity, blood flow to the region increases to meet the increased demand for oxygen (101). fMRI studies can show the changing process of motor function recovery; they can start in the first few days post-stroke and continue for several months to a year post-stroke (102). The integration of fMRI and rTMS represents a powerful tool for manipulating and observing neural activity, which can unravel the mechanisms of TMS-mediated neural modulation. We believe that creating innovative techniques for the effective treatment of post-stroke movement disorders requires a thorough knowledge of the neural mechanisms underlying recovery from these disorders.

Data availability statement

The original contributions presented in the study are included in the article/supplementary material; further inquiries can be directed to the corresponding authors.
Generating a Diverse Set of High-Quality Clusterings

We provide a new framework for generating multiple good quality partitions (clusterings) of a single data set. Our approach decomposes this problem into two components: generating many high-quality partitions, and then grouping these partitions to obtain k representatives. The decomposition makes the approach extremely modular and allows us to optimize various criteria that control the choice of representative partitions.

Introduction

Clustering is a critical tool used to understand the structure of a data set. There are many ways in which one might partition a data set into representative clusters, and this is demonstrated by the huge variety of different algorithms for clustering [9,35,8,27,45,31,43,14,28]. Each clustering method identifies different kinds of structure in data, reflecting different desires of the end user. Thus, a key exploratory tool is identifying a diverse and meaningful collection of partitions of a data set, in the hope that these distinct partitions will yield different insights about the underlying data.

Problem specification. The input to our problem is a single data set $X$. The output is a set of $k$ partitions of $X$. A partition of $X$ is a set of subsets $X_i = \{X_{i,1}, X_{i,2}, \ldots, X_{i,s}\}$ where $X = \bigcup_{j=1}^{s} X_{i,j}$ and, for all $j \neq j'$, $X_{i,j} \cap X_{i,j'} = \emptyset$. Let $P_X$ be the space of all partitions of $X$; since $X$ is fixed throughout this paper, we just refer to this space as $P$. There are two quantities that control the nature of the partitions generated. The quality of a partition, represented by a function $Q : P \to \mathbb{R}^+$, measures the degree to which a particular partition captures intrinsic structure in the data; in general, most clustering algorithms that identify a single clustering attempt to optimize some notion of quality. The distance between partitions, represented by the function $d : P \times P \to \mathbb{R}$, measures how dissimilar two partitions are. Partitions $X_i \in P$ that do a better job of capturing the structure of the data set $X$ have a larger quality value $Q(X_i)$, and partitions $X_i, X_{i'} \in P$ that are more similar to each other have a smaller distance value $d(X_i, X_{i'})$. A good set of diverse partitions all have large distances from each other and all have high quality scores. Thus, the goal of this paper is to generate a set of $k$ partitions that represent all high-quality partitions as accurately as possible.
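As an illustration of this problem setup, the following minimal Python sketch represents a partition as a list of sets and checks that it covers the data set with pairwise-disjoint clusters; the example elements are assumptions made for the sake of the sketch.

```python
def is_partition(X, clusters):
    """Check that `clusters` is a valid partition of the set X:
    the clusters are pairwise disjoint and their union is X."""
    union = set()
    for c in clusters:
        if union & c:        # overlap with a previously seen cluster
            return False
        union |= c
    return union == X

# Hypothetical data set and one candidate partition X_i of it.
X = {"a", "b", "c", "d", "e"}
X_i = [{"a", "b"}, {"c"}, {"d", "e"}]
print(is_partition(X, X_i))  # True
```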
Related Work. There are two main approaches in the literature for computing many high-quality, diverse partitions; however, both focus only on a specific subproblem. Alternate clustering focuses on generating one additional high-quality partition that should be far from a given set (typically of size one) of existing partitions. k-consensus clustering assumes an input set of many partitions and then seeks to return k representative partitions. Most algorithms for generating alternate partitions [38,16,6,5,13,21,12] operate as follows. Generate a single partition using a clustering algorithm of choice. Next, find another partition that is both far from the first partition and of high quality. Most methods stop here, but a few try to discover more alternate partitions; they repeatedly find new, still high-quality partitions that are far from all existing partitions. This effectively produces a variety of partitions, but the quality of each successive partition degrades quickly. Although there are a few other methods that try to discover alternate partitions simultaneously [10,29,37], they are usually limited to discovering two partitions of the data. Other methods that generate more than just two partitions either randomly weight the features or project the data onto different subspaces, but use the same clustering technique to obtain the alternate partitions in each round. Using the same clustering technique tends to generate partitions with clusters of similar shapes and might not be able to exploit all the structure in the data. The problem of k-consensus, which takes as input a set of $m \gg k$ partitions of a single data set to produce $k$ distinct partitions, has not been studied as extensively. To obtain the input for this approach, either the output of several distinct clustering algorithms or the output of multiple runs of the same randomized algorithm with different initial seeds is considered [46,47]. This problem can then be viewed as a clustering problem; that is, finding $k$ clusters of partitions from the set of input partitions. Therefore, there are as many possible optimization criteria or algorithms that could be explored for this problem as there are for clustering in general. Most formal optimization problems are intractable to solve exactly, making heuristics the only option. Furthermore, no matter the technique, the solution is only as good as the input set of partitions, independent of the optimization objective. In most k-consensus approaches, the set of input partitions is usually not diverse enough to give a good solution. In both cases, these subproblems avoid the full objective of constructing a diverse set of partitions that represent the landscape of all high-quality partitions. The alternate clustering approach is often too reliant on the initial partition and has had only limited success in generalizing the initial step to generate $k$ partitions. The k-consensus partitioning approach does not verify that its input represents the space of all high-quality partitions, so a representative set of those input partitions is not necessarily a representative set of all high-quality partitions.

Our approach. To generate multiple good partitions, we present a new paradigm which decouples the notion of distance between partitions from the quality of partitions. Prior methods that generate multiple diverse partitions cannot explore the space of partitions entirely, since the distance component in their objective functions biases against partitions close to the previously generated ones; these could be interesting partitions that might now be left out. To avoid this, we first look at the space of all partitions more thoroughly and then pick nonredundant partitions from this set. Let $k$ be the number of diverse partitions that we seek. Our approach works in two steps. In the first step, called the generation step, we sample from the space of all partitions proportionally to their quality. The Stirling number of the second kind, $S(n, s)$, is the number of ways of partitioning a set of $n$ elements into $s$ nonempty subsets; this is therefore the size of the space that we sample from. We illustrate the sampling in Figure 1. This generates a set of size $m \gg k$ to ensure we get a diverse sample that represents the space of all partitions well, since generating only $k$ partitions in this phase may "accidentally" miss some high-quality region of $P$. Next, in the grouping step, we cluster this set of $m$ partitions into $k$ sets, resulting in $k$ clusters of partitions.
We then return one representative from each of these $k$ clusters as our output alternate partitions. Note that because the generation step is decoupled from the grouping step, we treat all partitions fairly, independent of how far they are from the existing partitions. This allows us to explore the true density of high-quality partitions in $P$ without interference from the choice of initial partition. Thus, if there is a dense set of close interesting partitions, our approach will recognize that. Also, because the grouping step is run separately from the generation step, we can abstract this problem to a generic clustering problem and choose one of many approaches. This allows us to capture different properties of the diversity of partitions, for instance, guided just by the spatial distance between partitions, or also by a density-based distance which takes into account only the number of high-quality partitions assigned to a cluster.

Figure 1: Space of all partitions.

From our experimental evaluation, we note that decoupling the generation step from the grouping step helps, as we are able to generate many very high-quality partitions. In fact, the quality of some of the generated partitions is better than the quality of the partition obtained by a consensus clustering technique called LiftSSD [39]. The relative quality with respect to the reference partition of a few generated partitions even reaches close to 1. To the best of our knowledge, such partitions have not been uncovered by previous meta-clustering techniques. The grouping step also picks out representative partitions far away from each other. We observe this by computing the closest-pair distance between representatives and comparing it against the distance values of the partitions to their closest representative.

Outline. In Section 2, we discuss a sampling-based approach for generating many partitions proportionally to their quality; i.e., the higher the quality of a partition, the more likely it is to be sampled. In Section 3, we describe how to choose $k$ representative partitions from the large collection of partitions already generated. We present the results of our approach in Section 4. We have tested our algorithms on a synthetic dataset, a standard clustering dataset from the UCI repository, and a subset of images from the Yale Face Database B.

Generating Many High Quality Partitions

In this section we describe how to generate many high-quality partitions. This requires (1) a measure of quality, and (2) an algorithm that generates a partition with probability proportional to its quality.

Quality of Partitions

Most work on clustering validity criteria looks at a combination of how compact clusters are and how separated two clusters are. Some of the popular measures that follow this theme are S_Dbw, CDbw, the SD validity index, maximum likelihood, and the Dunn index [23,24,36,44,15,4,32,17]. Ackerman et al. also discuss similar notions of quality, namely VR (variance ratio) and WPR (worst pair ratio), in their study of clusterability [1,2]. We briefly describe a few specific notions of quality below.

k-Means quality. If the elements $x \in X$ belong to a metric space with an underlying distance $\delta : X \times X \to \mathbb{R}$ and each cluster $X_{i,j}$ in a partition $X_i$ is represented by a single element $\tilde{x}_j$, then we can measure the inverse quality of a cluster by $\tilde{q}(X_{i,j}) = \sum_{x \in X_{i,j}} \delta(x, \tilde{x}_j)^2$.
The quality of the entire partition is then the inverse of the sum of the inverse qualities of the individual clusters: $Q(X_i) = \left(\sum_{j=1}^{s} \tilde{q}(X_{i,j})\right)^{-1}$. This corresponds to the quality optimized by s-means clustering and is quite popular, but it is susceptible to outliers. If all but one element of $X$ fit neatly in $s$ clusters, but the one remaining point is far away, then this one point dominates the cost of the clustering, even if it is effectively noise. Specifically, the quality score of this measure is dominated by the points which fit least well in the clusters, as opposed to the points which are best representative of the true data. Hence, this quality measure may not paint an accurate picture of the partition.

Kernel distance quality. We introduce a method to compute the quality of a partition based on the kernel distance [30]. Here we start with a similarity function between two elements of $X$, typically in the form of a (positive definite) kernel $K : X \times X \to \mathbb{R}^+$. If $x_1, x_2 \in X$ are more similar, then $K(x_1, x_2)$ is larger than if they are less similar. The overall similarity score between two clusters $X_{i,j}$ and $X_{i,j'}$ is defined as $\kappa(X_{i,j}, X_{i,j'}) = \sum_{x \in X_{i,j}} \sum_{x' \in X_{i,j'}} K(x, x')$, and a single cluster's self-similarity for $X_{i,j} \in X_i$ is $\kappa(X_{i,j}, X_{i,j})$. Finally, the overall quality of a partition is defined as $Q_K(X_i) = \sum_{j=1}^{s} \kappa(X_{i,j}, X_{i,j})$. If $X$ is a metric space, the highest quality partitions divide $X$ into $s$ Voronoi cells around $s$ points, similar to s-means clustering. However, its score is dominated by the points which are a good fit to a cluster, rather than by outlier points which do not fit well in any cluster. This is a consequence of how kernels like the Gaussian kernel taper off with distance, and is the reason we recommend this measure of cluster quality in our experiments.
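As a concrete illustration of the kernel-based quality described above, the following minimal Python sketch computes $Q_K$ for a partition using a Gaussian kernel. It is only a sketch of the idea under the stated definitions; the bandwidth value and the example data are assumptions, not values taken from the experiments.

```python
import numpy as np

def gaussian_kernel(x1, x2, sigma=1.0):
    # Similarity is larger for closer points and tapers off with distance.
    return np.exp(-np.sum((x1 - x2) ** 2) / (2 * sigma ** 2))

def kernel_quality(clusters, sigma=1.0):
    """Q_K: sum of cluster self-similarities kappa(C, C)."""
    total = 0.0
    for cluster in clusters:
        for x in cluster:
            for y in cluster:
                total += gaussian_kernel(x, y, sigma)
    return total

# Hypothetical 2-D data split into two clusters.
partition = [
    [np.array([0.0, 0.0]), np.array([0.1, 0.2])],
    [np.array([5.0, 5.0]), np.array([5.1, 4.9])],
]
print(kernel_quality(partition))
```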
Generation of Partitions Proportional to Quality

We now discuss how to generate a sample of partitions proportional to their quality. This procedure is independent of the measure of quality used, so we generically let $Q(X_i)$ denote the quality of a partition. The problem now becomes to generate a set $Y \subset P$ of partitions where each $X_i \in Y$ is drawn randomly with probability proportional to $Q(X_i)$. The standard tool for this problem framework is a Metropolis-Hastings random-walk sampling procedure [34,25,26]. Given a domain $X$ to be sampled and an energy function $Q : X \to \mathbb{R}$, we start with a point $x \in X$ and suggest a new point $x_1$ that is typically "near" $x$. The point $x_1$ is accepted unconditionally if $Q(x_1) \geq Q(x)$, and is accepted with probability $Q(x_1)/Q(x)$ otherwise; if it is not accepted, we say that $x_1$ was rejected and instead set $x_1 = x$ as the current state. After some sufficiently large number of such steps $t$, the expected state of $x_t$ is a random draw from $P$ with probability proportional to $Q$. To generate many random samples from $P$, this procedure is repeated many times. In general, Metropolis-Hastings sampling suffers from high autocorrelation, where consecutive samples are too close to each other. This can happen when far-away samples are rejected with high probability. To counteract this problem, Gibbs sampling is often used [41]. Here, each proposed step is decomposed into several orthogonal suggested steps, and each is individually accepted or rejected in order. This effectively constructs one longer step with a much higher probability of acceptance, since each individual step is accepted or rejected independently. Furthermore, if each step is made randomly with probability proportional to $Q$, then we can always accept the suggested step, which reduces the rejection rate.

Metropolis-Hastings-Gibbs sampling for partitions. The Metropolis-Hastings procedure for partitions works as follows. Given a partition $X_i$, we wish to select a random subset $Y \subset X$ and randomly reassign the elements of $Y$ to different clusters. If the size of $Y$ is large, this will have a high probability of rejection, but if $Y$ is small, then the consecutive clusterings will be very similar. Thus, we use a Gibbs-sampling approach. At each step we choose a random ordering $\sigma$ of the elements of $X$. We start with the current partition $X_i$ and choose the first element $x_{\sigma(1)} \in X$. We assign $x_{\sigma(1)}$ to each of the $s$ clusters, generating $s$ suggested partitions $X_i^j$, and calculate $s$ quality scores $q_j = Q(X_i^j)$. Finally, we select index $j$ with probability proportional to $q_j$ and assign $x_{\sigma(1)}$ to cluster $j$. Rename the new partition as $X_i$. We repeat this for all points in order. Finally, after all elements have been reassigned, we set $X_{i+1}$ to be the resulting partition. Note that auto-correlation effects may still occur, since we tend to have partitions with high quality, but this effect will be much reduced. Note also that we do not have to run this entire procedure each time we need a new random sample. It is common in practice to run this procedure for some number $t_0$ (typically $t_0 = 1000$) of burn-in steps, and then use the next $m$ steps as $m$ random samples from $P$. The rationale is that after the burn-in period, the induced Markov chain is expected to have mixed, and so each new step yields a random sample from the stationary distribution.
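A minimal sketch of one such reassignment sweep is given below. It follows the scheme just described, with the quality function passed in as a parameter (any positive-valued callable would do, for instance the kernel quality sketched earlier); the data structures are assumptions chosen for brevity, not the implementation used in the experiments.

```python
import random

def gibbs_sweep(assignment, n_clusters, quality):
    """One sweep of the Gibbs-style move described above: visit the points
    in a random order and reassign each one to a cluster with probability
    proportional to the quality of the resulting partition.

    assignment: dict mapping each point to its cluster index (0..s-1).
    quality: callable returning Q(assignment) > 0 for a candidate partition.
    """
    points = list(assignment)
    random.shuffle(points)                      # random ordering sigma
    for x in points:
        weights = []
        for j in range(n_clusters):
            assignment[x] = j                   # suggested partition X_i^j
            weights.append(quality(assignment))
        # select cluster j with probability proportional to q_j
        assignment[x] = random.choices(range(n_clusters), weights=weights)[0]
    return assignment
```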
Grouping the Partitions

Having generated a large collection $Z$ of $m \gg k$ high-quality partitions from $P$ by random sampling, we now describe a grouping procedure that returns $k$ representative partitions from this collection. We start by placing a metric structure on $P$. This allows us to view the problem of grouping as a metric clustering problem. Our approach is independent of any particular choice of metric; obviously, the specific choice of distance metric and clustering algorithm will affect the properties of the output set we generate. There are many different approaches to comparing partitions. While our approach is independent of the particular choice of distance measure used, we review the main classes.

Membership-based distances. The most commonly used class of distances is membership-based. These distances compute statistics about the number of pairs of points which are placed in the same or different clusters in both partitions, and return a distance based on these statistics. Common examples include the Rand distance, the variation of information, and the normalized mutual information [33,40,42,7]. While these distances are quite popular, they ignore information about the spatial distribution of points within clusters, and so are unable to differentiate between partitions that might be significantly different.

Spatially-sensitive distances. In order to rectify this problem, a number of spatially-aware measures have been proposed. In general, they work by computing a concise representation of each cluster and then using the earthmover's distance (EMD) [20] to compare these sets of representatives in a spatially-aware manner. These include CDistance [11], d_ADCO [6], CC distance [48], and LiftEMD [39]. As discussed in [39], LiftEMD has the benefit of being both efficient and a well-founded metric, and is the method used here.

Density-based distances. The partitions we consider are generated via a sampling process that samples more densely in high-quality regions of the space of partitions. In order to take into account dense samples in a small region, we use a density-sensitive distance that intuitively spreads out regions of high density. Consider two partitions $X_i$ and $X_{i'}$. Let $d : P \times P \to \mathbb{R}^+$ be any of the above natural distances on $P$. Then let $d_Z : P \times P \to \mathbb{R}^+$ be a density-based distance between $X_i$ and $X_{i'}$, defined in terms of $d$ and the sampled collection $Z$.

Clusters of Partitions

Once we have specified a distance measure to compare partitions, we can cluster them. We use the notation $\phi(X_i)$ to denote the representative partition that $X_i$ is assigned to. We would like to pick $k$ representative partitions, and a simple algorithm by Gonzalez [22] provides a 2-approximation to the best clustering that minimizes the maximum distance between a point and its assigned center. The algorithm maintains a set $C$ of $k' < k$ centers. Let $\phi_C(X_i)$ represent the partition in $C$ closest to $X_i$ (when clear from context we use just $\phi(X_i)$ in place of $\phi_C(X_i)$). The algorithm chooses the $X_i \in Z$ with maximum value $d(X_i, \phi(X_i))$, adds this partition $X_i$ to $C$, and repeats until $C$ contains $k$ partitions. We run the Gonzalez method to compute $k$ representative partitions using LiftEMD between partitions. We also ran the method using the density-based distance derived from LiftEMD. We got very similar results in both cases, and we will only report the results from using LiftEMD in Section 4. We note that other clustering methods such as k-means and hierarchical agglomerative clustering yield similar results.
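A minimal sketch of the Gonzalez farthest-point procedure used in this grouping step is shown below; the distance argument stands in for any partition distance (LiftEMD in this paper), and the choice of the first center is an arbitrary assumption of the sketch.

```python
def gonzalez_k_center(partitions, k, dist):
    """Greedy 2-approximation for k-center: repeatedly add the partition
    farthest from its closest current representative.

    partitions: list of candidate partitions (the sampled collection Z).
    dist: callable giving the distance between two partitions.
    """
    centers = [partitions[0]]                   # arbitrary first center (assumption)
    # distance from every partition to its closest chosen center
    closest = [dist(p, centers[0]) for p in partitions]
    while len(centers) < k:
        idx = max(range(len(partitions)), key=lambda i: closest[i])
        centers.append(partitions[idx])
        for i, p in enumerate(partitions):
            closest[i] = min(closest[i], dist(p, centers[-1]))
    return centers
```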
Experimental Evaluation

In this section, we show the effectiveness of our technique in generating partitions of good divergence and its power to find partitions with very high quality, well beyond usual consensus techniques.

Data. We created a synthetic dataset 2D5C with 100 points in 2 dimensions, for which the data is drawn from 5 Gaussians to produce 5 visibly separate clusters. We also test our methods on the Iris dataset containing 150 points in 4 dimensions from the UCI machine learning repository [18]. We also use a subset of the Yale Face Database B [19] (90 images corresponding to 10 persons and 9 poses in the same illumination). The images are scaled down to 30x40 pixels.

Methodology. For each dataset, we first run k-means to get the first partition with the same number of clusters specified by the reference partition. Using this as a seed, we generate m = 4000 partitions after throwing away the first 1000 of them. We then run the Gonzalez k-center method to find 10 representative partitions. We associate each of the 3990 remaining partitions with the closest representative partition. We compute and report the quality of each of these representative partitions. We also measure the LiftEMD distance to each of these partitions from the reference partition. For comparison, we also plot the quality of consensus partitions generated by LiftSSD [39] using inputs from k-means, single-linkage, average-linkage, complete-linkage, and Ward's method.

Performance Evaluation

Evaluating partition diversity. We can evaluate partition diversity by determining how close partitions are to their chosen representatives using LiftEMD. Low LiftEMD values between partitions indicate redundancy in the generated partitions, and high LiftEMD values indicate good partition diversity. The resulting distribution of distances is presented in Figures 2(a), 2(b), and 2(c), in which we also mark the distance values between a representative and its closest other representative with red squares. Since we expect the representative partitions to be far from each other, those distances provide a baseline for distances considered large. For all datasets, a majority of the partitions generated are generally far from the closest representative partition. For instance, in the Iris data set (2(a)), about three-fourths of the partitions generated are far away from the closest representative, with LiftEMD values ranging between 1.3 and 1.4.

Evaluating partition quality. Secondly, we would like to inspect the quality of the partitions generated. Since we intend the generation process to sample from the space of all partitions proportionally to quality, we hope for a majority of the partitions to be of high quality. The ratio of the kernel distance quality Q_K of a partition to that of the reference partition gives us a fair idea of the relative quality of that partition, with values closer to 1 indicating partitions of higher quality. The distribution of quality is plotted in Figures 3(a), 3(b), and 3(c). We observe that for all the datasets, we get a normally distributed quality distribution with a mean value between 0.62 and 0.8. In addition, we compare the quality of our generated partitions against the consensus technique LiftSSD. We mark the quality of the representative partitions with red squares and that of the consensus partition with a blue circle. For instance, Figure 3(a) shows that the relative quality with respect to the reference partition of three-fourths of the partitions is better than that of the consensus partition. For the Yale Face data, note that we have two reference partitions, namely by pose and by person, and we chose the partition by person as the reference partition due to its superior quality.

Visual inspection of partitions. We ran multi-dimensional scaling [3] on the all-pairs distances between the 10 representatives for a visual representation of the space of partitions. We compute the variance of the distances of the partitions associated with each representative and draw Gaussians around them to depict the size of each cluster of partitions. For example, for the Iris dataset, as we can see from Figure 4(a), the clusters of partitions are well separated and are far from the original reference partition. In Figure 5, we show two interesting representative partitions on the Yale Face database. We show the mean image from each of the 10 clusters. Figure 5(a) is a representative partition very similar to the partition by person, and Figure 5(b) resembles the partition by pose.

Conclusion

In this paper we introduced a new framework to generate multiple non-redundant partitions of good quality. Our approach is a two-stage process: in the generation step, we focus on sampling a large number of partitions from the space of all partitions proportionally to quality, and in the grouping step, we identify k representative partitions that best summarize the space of all partitions.
Beyond-mean-field effects on the symmetry energy and its slope from the low-lying dipole response of $^{68}$Ni

We study low-energy dipole excitations in the unstable nucleus $^{68}$Ni with the beyond-mean-field (BMF) subtracted second random-phase-approximation (SSRPA) model based on Skyrme interactions. First, strength distributions are compared with available experimental data and transition densities of some selected peaks are analyzed. The so-called isospin splitting is also discussed by studying the isoscalar/isovector character of such excitations. We then estimate in an indirect way BMF effects on the symmetry energy of infinite matter and on its slope, starting from the BMF SSRPA low-lying strength distribution. For this, several linear correlations are used, the first one being a correlation between the contribution of the low-energy strength to the total energy-weighted sum rule (EWSR) and the slope of the symmetry energy. BMF estimates for the slope of the symmetry energy can be extracted in this way. Correlations between such a slope and the neutron-skin thickness of $^{68}$Ni, and correlations between the neutron-skin thickness of $^{68}$Ni and the electric dipole polarizability times the symmetry energy, are then used to deduce BMF effects on the symmetry energy.

I. INTRODUCTION

It is known that the low-lying dipole strength in neutron-rich nuclei, localized around the particle separation energy, has a strong impact on neutron radiative-capture cross sections, which are extremely important for nucleosynthesis processes of astrophysical interest [1][2][3][4][5]. This impact was one of the motivations for studying low-energy dipole excitations, first in stable nuclei through photon scattering [6,7], where it was understood that the low-energy strength strongly depends on the neutron-to-proton ratio N/Z. Extending such analyses to the larger isospin asymmetries of unstable nuclei then became an important experimental challenge (see, for instance, Refs. [8][9][10][11][12][13][14] and Refs. [15,16] for recent reviews). In a parallel effort, several links and correlations were explored with theoretical models to relate the low-energy dipole strength of neutron-rich nuclei to the neutron-skin thickness of nuclei, to the symmetry energy of infinite matter, and to its density dependence (that is, its slope) [17][18][19]. It is interesting to mention that various correlations (some of them will be used later) between properties of nuclei and of nuclear matter have been analyzed in the literature, starting from the observation that the existence of a neutron skin in neutron-rich nuclei is strictly related to the density dependence of the symmetry energy [20][21][22]. The neutron-skin thickness was found to be correlated with the symmetry energy calculated at the saturation density, J, as well as with its slope [18,21,[23][24][25], the slope being extremely important for example in heavy-ion collisions [23,[26][27][28][29][30] and in nuclear astrophysics for the description of neutron stars [31][32][33][34]. Correlations between the neutron-skin thickness and the product of the symmetry energy times the electric dipole polarizability were also investigated in Ref. [35]. It is important to stress that in most cases such correlations were studied employing only mean-field models. For this reason, a risk could exist that these correlations do not reflect a general behavior but simply an artifact induced by this category of models.
However, some of these correlations were first predicted within simple droplet models. This would indicate that they are probably general features of nuclei and nuclear matter and not a simple mean-field artifact. This is the case, for example, for the correlations found between the neutron-skin thickness of a nucleus of mass A and the symmetry energy of matter J minus the symmetry energy in the nucleus. The latter quantity is defined within a droplet model as $J/(1 + x_A)$, where $x_A = [9J/(4Q)]\,A^{-1/3}$ and Q is the so-called surface stiffness [24]. Another example is the correlation found in a droplet model between the neutron-skin thickness and J/Q [25]. To address the problem of a possible mean-field model dependence in the analysis of correlations between different quantities, the authors of Ref. [36] recently made a study based on a Taylor expansion of the equation of state (EOS) of matter around the saturation density. They analyzed in particular to what extent correlations between so-called low-order empirical parameters (for instance J and the slope of the symmetry energy, entering in the lower orders of such an expansion) may be affected by uncertainties on higher-order parameters. To estimate uncertainties, around 50 models (including not only relativistic and non-relativistic mean-field-based models but also many-body-perturbation-theory models with several chiral interactions) were used: employing all such models (not only of mean-field type), it was shown for example that the extracted correlation coefficient between the slope of the symmetry energy calculated at saturation density and J is quite high, ∼ 0.8. It is interesting to mention that correlations between the symmetry energy and its slope were analyzed employing different experimental constraints, such as those coming from heavy-ion collisions, measurements of neutron skins, electric dipole polarizabilities, masses, giant dipole resonances (GDRs), and isobaric analog states, as well as constraints coming from nuclear astrophysics (see, for instance, Refs. [37][38][39][40][41][42]). The low-energy strength of dipole excitation spectra in neutron-rich nuclei was called 'pygmy' because of its lower energy location and its smaller contribution to the EWSR compared to GDRs. In several cases, as indicated by the associated transition densities, such a strength was interpreted as produced by oscillations of the neutron skin of the nucleus against its core. This interpretation was reconsidered in some nuclei, for instance in $^{48}$Ca, where a description of the low-lying strength in terms of single-particle excitations was seen to be more appropriate [43,44]. Related to this low-lying strength distribution, the mixing between isoscalar and isovector nature was also discussed [45], as well as the mixing with complex configurations such as toroidal motions [46]. The first measurement of the low-lying dipole strength in the unstable nucleus $^{68}$Ni was conducted by Wieland et al. [12] through virtual photon scattering at 600 MeV/nucleon (relativistic Coulomb excitations) at GSI. The strength was found to be centered at around 11 MeV with a contribution of 5% to the EWSR. Later, relativistic Coulomb excitations were used once again for the same nucleus at GSI to extract the electric dipole polarizability [13].
A slightly different result was found this time, with a centroid located at 9.55 MeV and a contribution of 2.8% to the EWSR. The discrepancy in the value of the centroid was explained as a possible 'energy-dependent branching ratio'. In connection with low-lying excitations, the so-called isospin splitting was widely discussed in the literature [15,[47][48][49][50][51][52][53][54][55][56]: according to this, it could be expected that the lower-energy part of a pygmy dipole resonance is excited by both isoscalar and isovector probes (mixed isoscalar/isovector nature), whereas the higher-energy part, close to the tail of the GDR, is mostly an isovector excitation. The authors of Ref. [14] illustrated the first measurement done on $^{68}$Ni using an isoscalar probe (isoscalar $^{12}$C target) at INFN-LNS in Catania. They found a centroid placed at around 10 MeV and a contribution of 9% to the EWSR. Having in the literature three slightly different experimental centroids, located at ∼ 11, 9.55, and 10 MeV, we focus in this work on the study of the low-lying dipole response of the unstable nucleus $^{68}$Ni using the BMF SSRPA model with Skyrme interactions [57]. Section II presents our predictions obtained with the Skyrme parametrization SGII [58,59] and the comparison with the available experimental results. Some selected transition densities are analyzed and a possible isospin splitting is discussed by studying the isoscalar/isovector character of these excitations. We then move to the symmetry energy and its density dependence, which is characterized by its slope. Section III illustrates BMF effects on the symmetry energy and its slope, using as a starting point correlations existing between the percentage of the EWSR associated with the low-lying dipole strength in $^{68}$Ni and the slope of the symmetry energy (Subsec. III A). Correlations between the neutron-skin thickness and the slope of the symmetry energy (Subsec. III B) and correlations between the electric dipole polarizability times the symmetry energy and the neutron-skin thickness (Subsec. III C) are then used to estimate the impact of BMF calculations on the symmetry energy and its slope (Subsec. III D). This estimation is of course qualitative and we are not going to provide any quantitative predictions. Our aim is to show to what extent such quantities can be modified by effects induced by beyond-mean-field correlations. Conclusions are drawn in Sec. IV.

II. SSRPA PREDICTIONS AND COMPARISON WITH MEASUREMENTS FOR THE LOW-LYING STRENGTH OF $^{68}$Ni

The SSRPA model is applied to compute the dipole strength distribution of $^{68}$Ni with the Skyrme parametrization SGII. A cutoff of 80 (50) MeV is chosen for the 1p1h (2p2h) sector, and the diagonal approximation is adopted in the 2p2h matrix used for the calculation of the corrective term in the subtraction procedure [57]. Figure 1 represents the SSRPA results (panel (a)) for the transition operator and the comparison with the random-phase-approximation (RPA) spectrum (panel (b)) computed with the same Skyrme functional. We applied the procedure described in Ref. [49] in order to project out possible admixtures with spurious components. We found that this procedure affects the transition probability and the EWSR percentage by less than 0.01%, showing that our self-consistent approach is reliable. A vertical line located at 7.792 MeV indicates the neutron threshold.
Before describing the low-lying spectrum (below ∼ 12 MeV), which is the focus of this work, we may compare the strength distributions in the GDR region and say that, as expected, the SSRPA spectrum is much denser in this region compared to the RPA case, describing the physical fragmentation and width of the resonance. Let us focus on the region below ∼ 12 MeV to identify BMF effects there. We first observe that there is more strength in the SSRPA spectrum, which results, as we will see later, in a higher percentage of the EWSR (in the BMF model) computed up to a low-energy cutoff that separates the pygmy excitation and the GDR. This difference reflects a BMF effect. We also observe that the SSRPA low-energy distribution shows peaks concentrated around 9 and 10 MeV, as well as several peaks located just above 11 MeV, which means that a non-negligible strength is predicted by the SSRPA model in the regions where the three experimental centroids are located. On the other hand, in the low-energy part of the RPA spectrum there are far fewer peaks, the highest one being located between 9.5 and 10 MeV. Another isolated peak is placed below 11 MeV, and there is practically no strength above 11 MeV. We may conclude that BMF effects are important to provide a larger fragmentation of the strength, leading to a better coverage, in SSRPA, of the region where the three experimental centroids were found. Looking at Fig. 1, one observes a kind of separation at ∼ 12 MeV between the low-energy strength and the strength that may be associated with the tail of the giant resonance (we will discuss this later). Since we are going to estimate BMF effects related to the low-energy strength computed with the interaction SGII, we calculate the percentage of the EWSR up to 12 MeV. Such a percentage is equal to 3.75 (whereas it is equal to 2.35 within the SGII-RPA model). For reasons of consistency, in what follows we compute all the needed percentages of the EWSR up to 12 MeV. We recall that the experimental low-energy contribution to the EWSR was found to be equal to 5, 2.6, and 9% in the three experiments mentioned here. Figure 2 shows the comparison of SSRPA results with the experimental data. The comparison with the experimental data is carried out by comparing only the locations of the energy peaks of the measured and predicted excitation spectra (the vertical axes describe different quantities in the three panels). Panel (a) shows the SSRPA B(E1) distribution. Panels (b) and (c) are extracted, respectively, from the upper panel of Fig. 3 of Ref. [12] and from Fig. 3 of Ref. [13]. They represent, respectively, the photoabsorption cross section of Ref. [12] and the E1 strength distribution of Ref. [13]. For the discussion, we recall once again that the low-energy centroid found in Ref. [14] is placed at 10 MeV, whereas the centroids of Refs. [12] and [13] are located at ∼ 11 and at 9.55 MeV, respectively, as Fig. 2 indicates. By looking at the neutron and proton transition densities (left panels), one observes a systematic dominant neutron contribution located at the surface of the nucleus, which is a typical feature of a pygmy excitation in its conventional interpretation. On the other hand, by looking at the isovector and isoscalar transition densities (right panels), one does not observe any specific well-defined evolution going from upper to lower panels (increasing excitation energy).
In other words, one does not observe any clear isospin splitting, which would imply a mixed isoscalar/isovector nature in the lower-energy region and a dominant isovector nature in the higher-energy part of the low-lying spectrum. However, before concluding on this aspect, and in order to have a more quantitative insight into the isospin nature of the low-lying excitations, we plot in Fig. 4 the transition probabilities associated with the isovector and the isoscalar dipole operators. Both unprojected and projected (that is, with spurious-component corrections) results are shown. The comparison between the isovector (panel (a)) and the isoscalar (panel (b)) transition strengths shows a very strong isospin mixing that might explain why different states may be excited by different probes. Whereas in the isovector distribution the strength values are more or less comparable among themselves in the whole energy window, from ∼ 7 to ∼ 13 MeV, in the isoscalar distribution one observes that the strength is more important in the lower-energy part (below ∼ 10 MeV) than in the higher-energy part of the spectrum. This indicates the presence of an isoscalar/isovector splitting [48,60]. Similar conclusions may be drawn by comparing the isoscalar strength with the electromagnetic one shown in Figs. 1 and 2. From the figure we can also clearly see how the effect of the projection of the spurious components acts almost exclusively on the lowest states located at around 3 MeV, corresponding to the spurious mode.

III. BMF EFFECTS ON THE SYMMETRY ENERGY AND ITS SLOPE

Let us first write the expressions of the quantities that we are going to use in what follows. The neutron-skin thickness of a nucleus is defined as the difference between the neutron and the proton root-mean-square radii, ∆r_np = √⟨r²⟩_n − √⟨r²⟩_p. By introducing the isospin-asymmetry parameter δ = (ρ_n − ρ_p)/ρ, where ρ_n, ρ_p, and ρ are the neutron, proton, and total densities, respectively, one can write the EOS for asymmetric matter as E/A(ρ, δ) = E/A(ρ, δ = 0) + S(ρ) δ² + O(δ⁴), where S(ρ) is called the symmetry-energy coefficient, S(ρ) = (1/2) ∂²(E/A)/∂δ², evaluated at δ = 0. By truncating the expansion above at the quadratic term (parabolic approximation), the symmetry-energy coefficient may be computed as the difference between the EOSs of neutron and symmetric matter. The value of S(ρ) at the saturation density ρ_0 is often called J, J = S(ρ_0). One can expand S(ρ) around the saturation density, S(ρ) ≈ J + L (ρ − ρ_0)/(3ρ_0) + (k_sym/2) [(ρ − ρ_0)/(3ρ_0)]², where L and k_sym are related to the first and second derivatives of S(ρ), respectively. In particular, L is the slope of the symmetry energy, L = 3ρ_0 (∂S/∂ρ) evaluated at ρ = ρ_0. To visualize the linear correlations that we are going to employ, we have chosen four Skyrme parametrizations having quite different values of L, namely, SGII, SIII [61], SkI3 [62], and SkI4 [62]. The associated values of J and L are reported in Table I. There, the values of J and L for SGII and SIII are extracted from Ref. [25] whereas the values of J and L for SkI3 and SkI4 are extracted from Ref. [63].

Table I: Mean-field values for the symmetry-energy coefficient computed at the saturation density, J, and for the associated slope, L, for the four Skyrme parametrizations indicated in the first column.

Such J and L values are associated with EOSs computed at the mean-field level, corresponding to the leading (first) order of the Dyson equation. The different values of L are produced by quite different mean-field EOSs for pure neutron matter, as displayed in Fig. 5, where mean-field EOSs for neutron matter and the corresponding mean-field symmetry-energy coefficients are plotted for the four Skyrme parametrizations.
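To make the quantities just defined concrete, the following minimal Python sketch (not part of the original work) evaluates S(ρ) in the parabolic approximation as the difference between tabulated neutron-matter and symmetric-matter EOSs, and then extracts J and L at an assumed saturation density ρ_0 = 0.16 fm⁻³; the tabulated energies below are toy placeholder values, not the SGII (or any other Skyrme) results.

```python
import numpy as np

# Toy (placeholder) EOS tables: energy per nucleon E/A [MeV] on a density
# grid rho [fm^-3]. Real values would come from Hartree-Fock calculations
# performed with a given Skyrme parametrization.
rho = np.linspace(0.04, 0.32, 141)
e_snm = -16.0 + 240.0 * (rho - 0.16) ** 2                  # symmetric matter (toy)
e_pnm = 12.0 + 60.0 * rho + 180.0 * (rho - 0.16) ** 2      # pure neutron matter (toy)

# Parabolic approximation: S(rho) ~ E/A(neutron matter) - E/A(symmetric matter)
s = e_pnm - e_snm

# J = S(rho_0); L = 3 rho_0 dS/drho evaluated at the saturation density rho_0
rho0 = 0.16
ds_drho = np.gradient(s, rho)
J = np.interp(rho0, rho, s)
L = 3.0 * rho0 * np.interp(rho0, rho, ds_drho)
print(f"J = {J:.2f} MeV, L = {L:.2f} MeV")
```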
We are aware of the bad quality of the neutron-matter EOS produced by the parametrization SIII, for which the value of the slope L is indeed very low, but we are going to use this case only to visualize linear correlations and perform linear fits. In any case, results and predictions will be discussed only for the parametrization SGII, which represents a reasonable illustrative case. It is expected that, when BMF models are used, the corresponding symmetry energies and slopes evolve because they have to be associated now with BMF EOSs for infinite matter. We estimate here such a BMF effect in an indirect way, through the analysis of the BMF SSRPA low-energy dipole strength. In particular, we employ for this several correlations that have been analyzed in the literature. We stress that we are not going to provide any precise predictions. Our aim is to estimate qualitatively to what extent a BMF model can have an impact on such quantities.

A. EWSR and slope of the symmetry energy L

We use first the correlations found between the percentage of the EWSR associated with the low-lying dipole strength and the slope L [18,19]. We carry out RPA calculations with the four Skyrme parametrizations of Table I. The contribution (percentage) to the EWSR is evaluated up to an energy of 12 MeV. The total EWSR is satisfied to better than 1%. The corresponding points are shown on Fig. 6(b) as a function of the slope L (mean-field values of L, taken from Table I). We perform a linear fit (black dotted line in the figure) and associate an uncertainty band, which is evaluated in the following way. Using the linear fit done on the points obtained with the RPA EWSR percentages and the mean-field L values for infinite matter, we are going to extract a BMF value of L for a given BMF SSRPA percentage of the EWSR. To construct an uncertainty band we compute, for each RPA percentage of the EWSR, the horizontal distance between the corresponding mean-field value of L and the point located on the linear-fit curve. We take the average of the distances computed in this way for the four interactions and we define as uncertainty a band having a horizontal width equal to twice such an average. This is represented by the grey area in the figure. For the Skyrme parametrization SGII we report on the linear fit the corresponding prediction for the SSRPA percentage of the EWSR (computed up to 12 MeV). Such a percentage, equal to 3.75%, is larger than the RPA value for SGII. Using the linear fit and the uncertainty band, we may then extract a BMF value of L, with an associated uncertainty. Whereas the mean-field value of L is 37.7 MeV for the parametrization SGII, the extracted BMF value is 60.815 ± 16.982 MeV. The slope of the symmetry energy was thus increased owing to BMF effects, which implies that BMF effects tend to produce a stiffer EOS for pure neutron matter. We carried out the same analysis with a cut at 12.5 MeV. This is shown on Fig. 6(a). The percentages of the EWSR are obviously increased for both RPA and SSRPA calculations compared to the previous case. One may extract in the same way a BMF value for L. This value is larger than the one associated with a cut at 12 MeV, as the figure shows. However, the corresponding uncertainty is narrower.
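The fit-and-band construction described above can be sketched in a few lines of Python; note that the four (EWSR%, L) points below are hypothetical placeholders (the actual RPA percentages are not reproduced here), and only the SSRPA percentage of 3.75% is taken from the text.

```python
import numpy as np

# Hypothetical mean-field inputs for four Skyrme parametrizations:
# slope L [MeV] and RPA percentage of the EWSR below 12 MeV (placeholders).
L_mf = np.array([10.0, 37.7, 60.0, 100.0])
ewsr_rpa = np.array([1.5, 2.35, 3.2, 4.5])

# Linear fit: EWSR% = a * L + b
a, b = np.polyfit(L_mf, ewsr_rpa, 1)

# Horizontal distance of each mean-field point to the fit line;
# the band width is twice the average distance, as described in the text.
L_on_fit = (ewsr_rpa - b) / a
half_width = np.mean(np.abs(L_mf - L_on_fit))

# Invert the fit at the (larger) SSRPA percentage to estimate the BMF slope.
ewsr_ssrpa = 3.75                     # SSRPA EWSR% up to 12 MeV (from the text)
L_bmf = (ewsr_ssrpa - b) / a
print(f"BMF estimate: L = {L_bmf:.1f} +/- {half_width:.1f} MeV")
```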
One may thus conclude that, using the BMF estimation obtained at 12 MeV (which seems to be anyway the most reasonable choice looking at the strength distributions), the associated uncertainty band is sufficiently large to include BMF estimations that may be obtained by slightly increasing the cut. We have indeed checked, by looking at the transition densities, that a cut at 12 MeV is the best choice. Below 12 MeV, the neutron and proton transition densities are similar to the cases shown on Fig. 3, where typical features of a pygmy resonance may be recognized. However, above 12 MeV, the shapes of the densities indicate that a transition towards the tail of the GDR starts to occur. An illustrative example is shown in Fig. 7, where neutron and proton transition densities for the state located at 12.17 MeV are presented. Clear features of a pygmy excitation are missing there and a more mixed behavior starts to appear. By including more functionals to describe the linear correlation that we employ here, a larger spread of the results around the linear fit would of course result. However, in this work, the aim is not a precise indication. By using such a linear correlation and, more importantly, by using the fact that the percentage of the EWSR associated with the low-energy part of the spectrum is larger in SSRPA than in RPA for each given functional, we extract a qualitative estimation. This indicates to what extent one may expect that such a BMF result can affect L and modify it from its mean-field value.

B. Neutron-skin thickness ∆r_np and L

We use now the correlations existing between the neutron-skin thickness and the slope L. Such correlations have been analyzed for example in Refs. [24,25]. We perform Hartree-Fock calculations for the nucleus 68 Ni with the four Skyrme parametrizations that we have chosen and report on Fig. 8 the corresponding values for ∆r_np as a function of L (mean-field values).

Figure 8: ∆r_np computed for 68 Ni with mean-field calculations using the four Skyrme parametrizations SIII, SGII, SkI3, and SkI4, as a function of the slope L (mean-field values) of the symmetry energy (red squares). A linear fit is carried out on the four points (dotted line) and an uncertainty band (grey area) is also displayed (see text). The BMF L value for SGII is included (vertical green line) with its uncertainty band, which is represented by vertical orange dashed lines, and the corresponding BMF value for ∆r_np is extracted (horizontal green line) with an associated uncertainty (orange area).

Again, we perform a linear fit (black dotted line) and estimate in the same way as done before an uncertainty band. This time, we are going to extract a BMF ∆r_np value by using the BMF L value estimated in the previous step. For each mean-field value of L associated with a given Skyrme parametrization we compute the vertical distance between the Hartree-Fock neutron-skin thickness and the corresponding point on the linear-fit curve. We make an average of the four distances and construct
in this way an uncertainty band as a band having a vertical width equal to twice this average (grey area in the figure). We can now report on the figure the SGII BMF value of L (with its uncertainty represented by the two vertical orange dashed lines) extracted in the previous step. We produce in this way an estimation for a BMF value of the neutron-skin thickness with an associated uncertainty (orange area). We mention that the two steps illustrated in Subsecs. III A and III B were employed in Ref. [19] as a way to extract constraints on the neutron-skin thickness from the analysis of low-lying pygmy resonances. We carry out here the same procedure to extract a BMF estimation of ∆r_np. It is interesting to see that the neutron-skin thickness of the nucleus is impacted by BMF effects. The mean-field value for ∆r_np is 0.154 fm with the parametrization SGII, whereas the BMF value is equal to 0.173 ± 0.018 fm with the same interaction. The mean-field value is located at the lower border of the BMF uncertainty band. Qualitatively, one may conclude that BMF effects tend to increase the neutron skin of the nucleus.

C. Electric dipole polarizability times J and ∆r_np

The authors of Ref. [35] discussed a linear correlation existing between the electric dipole polarizability α times J and the neutron-skin thickness of a given nucleus. We use such a correlation to extract a BMF value of J from the BMF value of ∆r_np which was estimated in Subsec. III B. By construction, the dipole polarizability α is the same in RPA and in SSRPA (owing to the subtraction procedure [57]). Thus, we compute α for 68 Ni within the RPA model and obtain α = 4.48 fm³ for the parametrization SGII. This value will be used also for the SSRPA model. Figure 9 shows the correlation between αJ and ∆r_np using mean-field-based calculations done with the four Skyrme parametrizations SIII, SGII, SkI3, and SkI4. A linear fit is carried out again (dotted line) and an uncertainty band is estimated. Here, we are going to use a BMF value for ∆r_np to extract a BMF value for αJ. We evaluate for the four points the vertical distance between the mean-field values of αJ and the linear-fit curve. A vertical uncertainty is defined as twice the average of the four distances. The BMF value of ∆r_np with its uncertainty band is reported and a BMF value of J may be extracted (α being the same as in RPA), equal to 27.617 ± 5.004 MeV. The symmetry energy at saturation density was slightly increased by BMF effects, even if one may note that the mean-field value, 26.83 MeV, falls inside the uncertainty band associated with the BMF estimation.

D. BMF values for J and L

The correlation between the electric dipole polarizability times the symmetry energy and the neutron-skin thickness discussed in Subsec. III C was extended to a correlation between the dipole polarizability times the symmetry energy and the slope L in Ref. [64]. This was done in particular for the nucleus 208 Pb. Using the experimental measurement of the dipole polarizability, a relation between J and L was then extracted in Ref. [35]. Based on the experimental value of (19.6 ± 0.6) fm³, such a relation is shown on Fig. 10 as the region between the two blue solid lines. On the other hand, using the recent value of (20.1 ± 0.6) fm³ reported by Tamii et al. [65], Lattimer and Steiner extracted a slightly different constraint for J and L in Ref. [42], which is displayed in Fig. 10 as the region between the indigo dashed lines.
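Since Subsecs. III B and III C reuse the same linear-fit construction, the way an uncertainty on L propagates to ∆r_np and then to J can be sketched as follows; the fit coefficients and band half-widths are hypothetical placeholders, simple quadrature is used as a rough stand-in for the graphical reading described in the text, and only the starting BMF value of L and the RPA polarizability are taken from the text.

```python
import numpy as np

# Starting point (quoted in the text): BMF estimate of L for SGII
L_bmf, dL = 60.815, 16.982                # MeV

# Hypothetical linear relations from the mean-field fits (placeholder values):
#   Delta r_np [fm]      = a1 * L + b1          (band half-width w1)
#   alpha * J [MeV fm^3] = a2 * Delta r_np + b2 (band half-width w2)
a1, b1, w1 = 0.0010, 0.115, 0.006
a2, b2, w2 = 400.0, 55.0, 8.0

# Step 1: neutron-skin thickness from L
rnp = a1 * L_bmf + b1
d_rnp = np.hypot(a1 * dL, w1)             # propagated + band terms in quadrature

# Step 2: alpha*J from Delta r_np, then J using the RPA dipole polarizability
alpha = 4.48                              # fm^3, RPA value quoted in the text
aJ = a2 * rnp + b2
d_aJ = np.hypot(a2 * d_rnp, w2)
J, dJ = aJ / alpha, d_aJ / alpha

print(f"Delta r_np = {rnp:.3f} +/- {d_rnp:.3f} fm")
print(f"J = {J:.1f} +/- {dJ:.1f} MeV")
```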
We chose this case of 208 Pb as an illustrative example to show that such empirical constraints have indeed to be taken as qualitative indications: slightly different measured values of the dipole polarizability may modify the empirical region which constrains the values of J and L. We have also extracted from Ref. [42] the empirical constraint on J and L provided by heavy-ion collisions (yellow area). On the same figure, the mean-field value of J and L corresponding to the parametrization SGII is included, together with the BMF estimation and the associated uncertainty area. We observe that the value of L is much more strongly impacted by BMF effects than the value of J. The mean-field point is located outside the region defined by the blue solid lines (extracted from Roca-Maza et al. [35]), whereas the BMF area is compatible with this region if the uncertainty band is taken into account. The two points (mean-field and BMF) are both compatible with the empirical constraint provided by the indigo dashed lines (extracted from Lattimer and Steiner [42]). The mean-field point is located inside this area whereas the BMF one is compatible with it if one considers the uncertainty region. The yellow band in the figure represents the empirical constraint provided by heavy-ion collisions and also extracted from Ref. [42]. The mean-field point is placed at the border of this area, whereas the BMF case tends to favour a higher slope L, which implies a stiffer EOS for pure neutron matter. Mean-field values and the corresponding BMF estimations for L, ∆r_np, and J are summarized in Table II for the parametrization SGII.

IV. CONCLUSIONS

In this article we have studied the low-energy dipole strength distribution of the unstable nucleus 68 Ni with the BMF SSRPA model based on Skyrme interactions. The parametrization SGII is chosen as an illustrative case. First, the low-energy response is compared with three available experimental measurements, which led to experimental centroids of 11 [12], 9.55 [13], and 10 [14] MeV. The SSRPA model provides peaks around these three energy values. Transition densities were analyzed for two peaks in the region of 9.55 MeV, one peak around 10 MeV, and two peaks in the region of 11 MeV. Looking at the transition densities, there is no clear evidence for a well-defined isospin splitting going from lower- to higher-energy peaks. However, by comparing isovector and isoscalar transition probabilities, one observes a strong isospin mixing and a suppression of the isoscalar strength in the higher-energy part of the shown distribution. This indicates the existence of an isospin splitting. The second part of this work was devoted to a qualitative estimation of BMF effects on the symmetry energy of infinite matter and its slope, starting from the computation of the percentage of the EWSR of the low-lying dipole spectrum (up to 12 MeV) with the SSRPA model. Mean-field-based calculations were used, with four Skyrme parametrizations, to visualize linear correlations between the percentage of the EWSR and L, L and the neutron-skin thickness, as well as the dipole polarizability times the symmetry energy and the neutron-skin thickness. By performing linear fits on these points and computing associated uncertainty bands, we have estimated BMF effects on L, on the neutron-skin thickness, and on J. The mean-field values for J and L are 26.83 and 37.70 MeV, respectively, with the parametrization SGII.
We have estimated for the same parametrization BMF values of J = 27.617 ± 5.004 MeV and L = 60.815 ± 16.982 MeV. Both quantities are increased by BMF effects, but it is clear that the slope of the symmetry energy is much more strongly affected. This indicates that, qualitatively, BMF effects tend to lead to stiffer EOSs for pure neutron matter.
Recurrent Esophageal Stricture Secondary to Pemphigus Vulgaris: A Rare Diagnostic and Therapeutic Challenge

ABSTRACT

Pemphigus vulgaris (PV) is an autoimmune blistering disorder of skin and mucous membranes characterized by acantholysis; it can be life threatening and carries significant morbidity. Esophageal involvement is uncommon, and the diagnosis can often be delayed. Esophageal stricture secondary to PV is extremely rare, and there are no guidelines on the management of this complication. We present a case of recalcitrant esophageal stricture, secondary to PV, successfully treated with topical and intralesional steroids. Moreover, we review the literature pertaining to esophageal PV and the management of esophageal strictures.

INTRODUCTION

Pemphigus vulgaris (PV) is a blistering disorder of skin and mucous membranes with an incidence of 0.1-3.2 cases per 100,000 and a nearly equal male-to-female ratio. 1,2 PV is a potentially life-threatening disease, with a mortality rate of approximately 5%-15%. 3 Esophageal involvement is infrequently reported in the literature and presents with dysphagia and odynophagia due to mucosal erosions and ulcerations. Esophageal stricture from PV is rare, and there are only a few reported cases of this complication. [4][5][6]

CASE REPORT

A 65-year-old white woman with a 4-year history of PV was seen for evaluation of intermittent dysphagia. She had been maintained on mycophenolate mofetil 500 mg, azathioprine 50 mg, acyclovir 400 mg, and alendronate 35 mg. Her high-dose prednisone therapy had been tapered 3 months earlier because of controlled disease. She had no other medical conditions. Examination was remarkable for multiple small ulcers in the oral cavity. Esophagogastroduodenoscopy showed linear ulcers in the mid-esophagus and a mid-esophageal stricture 25 cm from the incisors with a suspected diameter of approximately 16-18 mm (Figure 1). Serial balloon dilatations were performed, from 18 mm to 19 mm followed by 20 mm. She was started on omeprazole 40 mg twice daily, and alendronate was held. Her symptoms improved, but only temporarily. She presented again in 3 months with a recurrent stricture. At this time, the patient was referred to us. We noted a similar stricture and performed 3 controlled radial expansion (CRE) balloon dilations, an 18-mm dilation followed by 19-mm and 20-mm dilations, each for 30 seconds. A biopsy of the stricture was also taken. Pathology showed suprabasilar acantholysis consistent with esophageal PV (Figure 2). Two months later, she again presented with dysphagia and a recurrent stricture. Three CRE balloon dilations were performed again, an 18-mm dilation followed by successful 19-mm and 20-mm dilations, each for 30 seconds (Figure 3). We also injected intralesional triamcinolone 10 mg in 4 quadrants around the lesion. In addition, topical fluticasone 220 µg was prescribed. The patient was continued on the same systemic immunosuppressive regimen without any changes in dosage. The patient responded well, and her dysphagia resolved completely. Follow-up esophagogastroduodenoscopy 12 months later showed complete healing of the ulcers and stricture (Figure 4).

DISCUSSION

PV is an autoimmune disorder characterized by intraepithelial blister formation. This is due to the loss of adhesion between epidermal cells (acantholysis) caused by circulating immunoglobulin G autoantibodies directed against the intercellular adhesion molecules desmoglein 1 and 3. 7 Patients develop flaccid blisters and erosions of skin and mucous membranes, typically the oral mucosa.
Desmoglein 3 is strongly expressed in esophageal epithelia, and esophageal PV is now being reported in the literature. 8-10 Some people may not have skin lesions at the time of esophageal disease. 11 The lack of cutaneous findings can lead to a delay in diagnosis, as in our patient, whose skin lesions had resolved under optimal immunosuppressive therapy and whose stricture was initially misdiagnosed as a peptic stricture. In addition, because some patients with esophageal involvement may be asymptomatic, endoscopy may not be routinely performed; hence, esophageal PV may be underreported. Some physicians argue against endoscopic examination, considering it a risky procedure because of the fragility of the esophageal mucosa and the potential for a Nikolsky sign (a clinical sign elicited by gentle mechanical pressure to the mucosa or skin resulting in blisters separating or peeling away). 3,12 However, endoscopic examination is vital to the diagnosis of esophageal PV and is considered safe because neither biopsy nor brushing procedures increase the risk of worsening esophageal lesions. 13 Moreover, endoscopy can be helpful in ruling out peptic ulcer disease because many patients with PV are maintained on corticosteroid therapy, a risk factor for peptic ulcer disease. Endoscopic findings of esophageal PV usually include local mucosal erythema, red longitudinal lines, blisters, erosions, and ulcers. 10 PV can also rarely present as esophagitis dissecans superficialis, which can include stripped mucosa with bleeding, total desquamation of the esophageal mucosa without bleeding, long linear mucosal breaks and vertical fissures, and circumferential cracks with peeling. 14 Stricture formation is rare in patients with esophageal PV. The differential diagnosis includes achalasia, peptic stricture, pill esophagitis, postradiation stricture, motility disorder, and Schatzki ring. Treatment options for esophageal strictures include acid suppression therapy, endoscopic dilation using balloons or bougies, endoscopic stricturoplasty, and stenting. 5,15 Intralesional steroid injection is an effective option for resistant peptic strictures. 16 However, there is no consensus on the management of esophageal strictures in patients with PV. In one case, intravenous immunoglobulin resulted in remission. 17 To our knowledge, there has been no reported case of an intralesional steroid approach for the management of strictures in esophageal PV, although steroid injections have been used to manage resistant cases of oropharyngeal PV. 18 We report a case of recurrent esophageal stricture in a pemphigus patient who was on maximal medical therapy and had failed multiple balloon dilations. The patient was successfully treated with intralesional steroids, followed by oral topical fluticasone. Topical steroids are the recommended treatment for eosinophilic esophagitis, and we used them as a trial in our patient because both diseases have an autoimmune pathophysiology. 19 Thongprasom recently reported the effectiveness of topical steroids in the treatment of oral lesions in PV. 20 In our case, the response of the PV esophageal stricture to intralesional and topical steroids was prompt, complete, and sustained at 12-month follow-up. In summary, we highlight the diagnostic and therapeutic challenge of recurrent esophageal stricture due to PV. Clinical awareness of the condition and early biopsy of esophageal strictures are encouraged. Our experience suggests that intralesional steroid injections can be used for recurrent esophageal stricture arising from PV.
Topical steroids can be a useful adjunct. Further studies to evaluate the efficacy of topical and intralesional steroids in patients with esophageal strictures from PV will be helpful. DISCLOSURES Author contributions: All authors contributed equally to the manuscript. M. Bilal is the article guarantor. Financial disclosure: None to report. Informed consent was obtained for this case report.
Spatiotemporal Correlation Spectroscopy Reveals a Protective Effect of Peptide-Based GLP-1 Receptor Agonism against Lipotoxicity on Insulin Granule Dynamics in Primary Human β-Cells

Glucagon-like peptide-1 receptor (GLP-1R) agonists are being used for the treatment of type 2 diabetes (T2D) and may have beneficial effects on the pancreatic β-cells. Here, we evaluated the effects of GLP-1R agonism on insulin secretory granule (ISG) dynamics in primary β-cells isolated from human islets exposed to palmitate-induced lipotoxic stress. Islet cells were exposed for 48 h to 0.5 mM palmitate (hereafter, 'Palm') with or without the addition of a GLP-1 agonist, namely 10 nM exendin-4 (hereafter, 'Ex-4'). Dissociated cells were first transfected with syncollin-EGFP in order to fluorescently mark the ISGs. Then, by applying a recently established spatiotemporal correlation spectroscopy technique, the average structural (i.e., size) and dynamic (i.e., local diffusivity and mode of motion) properties of ISGs are extracted from a calculated imaging-derived Mean Square Displacement (iMSD) trace. Besides defining the structural/dynamic fingerprint of ISGs in human cells for the first time, iMSD analysis allowed us to probe fingerprint variations under selected conditions: namely, it was shown that Palm affects ISG dynamics in response to acute glucose stimulation by abolishing the ISG mobilization typically imparted by glucose and, concomitantly, by reducing the extent of ISG active/directed intracellular movement. By contrast, co-treatment with Ex-4 normalizes ISG dynamics, i.e., it re-establishes ISG mobilization and the ability to perform active transport in response to glucose stimulation. These observations were correlated with standard glucose-stimulated insulin secretion (GSIS), which was reduced in cells exposed to Palm but preserved in cells concomitantly exposed to 10 nM Ex-4. Our data support the idea that GLP-1R agonism may exert its beneficial effect on human β-cells under metabolic stress by maintaining proper ISG intracellular dynamics.

Introduction

Pancreatic β-cell dysfunction, determined by the interplay of genetic and acquired factors, has a major role in the development and progression of type 2 diabetes (T2D) [1,2]. In this regard, increased concentrations of certain fatty acids (palmitate in particular) have been shown to induce a lipotoxic effect and impair β-cell function, survival and even proliferation [3][4][5]. Among the several pharmacological treatments for the therapy of diabetes, GLP-1 receptor (GLP-1R) agonists are being used, with a favorable benefit/risk ratio [6,7]. These compounds, in fact, have several beneficial actions, including possible protection of β-cells against metabolic stresses [8] or against death by increasing autophagic flux and restoring lysosomal function [9]. In spite of much work conducted to investigate the cellular and subcellular mechanisms of lipotoxicity and, in turn, of the protective effect of GLP-1R agonists [10][11][12], the overall scenario is still not fully understood. Worthy of mention, in the last decade, the insulin secretory granule (ISG) has attracted growing interest as an essential subcellular node for signaling in the β-cell, and not merely as an insulin container/carrier [13]. Indeed, modifications of ISG structural (e.g., size) and dynamic (e.g., diffusivity) properties are being found to be hallmarks of pancreatic β-cell dysfunction in many circumstances.
For instance, hypercholesterolemia was associated with an overall increase in granule size accompanied by impaired granule trafficking [14]; type-1 diabetes (T1D) onset was demonstrated to be accompanied by, among other things, a change in ISG structural and functional properties (through fusion with lysosomes) [15]. In spite of such general interest and preliminary indications, however, rapid and robust measurement of ISGs' structural and dynamic properties in living β-cells remains a challenging task. Two main strategies are currently available, each with specific limitations: on the one hand, Transmission Electron Microscopy (TEM) provides ultrastructural details, but at the expense of information on dynamics (and, specifically for ISGs, it is prone to artifacts [16]); on the other hand, fluorescence-based optical microscopy allows the study of ISG dynamics in living matter [17][18][19][20][21], but with (i) limited or no access to structural information, and (ii) limited efficacy if applied to a three-dimensional environment where many of the objects are packed closer than the resolution limit of the optical setup, as in the case of labeled ISGs. In this context, some of us recently introduced an algorithm of spatiotemporal fluctuation analysis that simultaneously extracts the average structural (i.e., size) and dynamic (i.e., diffusivity, anomalous coefficient) properties of diffusing objects directly from the standard time-series of optical microscopy images, with no need to extract single-object trajectories [22]. The corresponding experimental protocol consists of a few steps. First, imaging of the region of interest is performed at high temporal resolution. Then, average spatiotemporal correlation functions are calculated from the stack of images. Finally, by Gaussian fitting of the series of correlation functions, the average 'diffusion law' is obtained directly from imaging, in the form of the so-called imaging-derived Mean Square Displacement (iMSD) [22]. At this point, the characteristic parameters describing ISG average structural properties (i.e., size) and dynamics (i.e., diffusion coefficient, D, and anomalous coefficient, α, at different spatiotemporal scales) can be readily extracted and used to define the fingerprint of the structure of interest. The potential of the method was already demonstrated for a variety of biological objects, ranging from molecules to nanoparticles or entire subcellular organelles/structures [22][23][24][25][26][27][28]. In particular, some of us recently validated the iMSD approach on fluorescently labeled ISGs in a model of immortalized β-like cells, the insulinoma-1E (INS-1E) cells [26]. With this in mind, here we propose the application of the iMSD approach to evaluate the effect of palmitate-induced lipotoxicity ('Palm' treatment) and the potential protection by the GLP-1 agonist Exendin-4 ('Ex-4' treatment) on ISGs in primary living β-cells dissociated from human pancreatic islets (dHI). To this end, cells disaggregated from human islets were transiently transfected with syncollin-EGFP in order to fluorescently mark the insulin granules. Then, the iMSD analysis was used as a fast and robust tool to define the structural/dynamic fingerprint of ISGs under the conditions of interest.
In brief, it was shown that Palm affects ISG dynamics in response to acute glucose stimulation by abolishing the ISG mobilization typically imparted by glucose and, concomitantly, by reducing the extent of ISG active/directed intracellular movement. By contrast, co-treatment with Ex-4 normalizes ISG dynamics, i.e., it re-establishes ISG mobilization and the ability to perform active transport in response to glucose. These effects are correlated with standard glucose-stimulated insulin secretion (GSIS), which was significantly reduced in cells exposed to Palm but preserved in cells concomitantly exposed to 10 nM Ex-4. The present results allow us to postulate a beneficial effect of GLP-1R agonism exerted at the level of intracellular insulin granules, in particular by maintaining their dynamic properties at physiological levels.

Human Pancreatic Islets

Pancreata from 11 non-diabetic organ donors (age 64.9 ± 13.4; sex 8M/3F; body mass index 24.7 ± 3) were used for islet isolation, through procedures approved by the Ethics Committee of the University of Pisa. Islets were isolated by collagenase digestion followed by density gradient purification, as previously reported [29,30], and cultured at 37 °C in a 5% CO2 atmosphere in M199 culture medium supplemented with 10% bovine serum, 100 U/mL penicillin, 100 µg/mL streptomycin, 750 ng/mL amphotericin B and 50 µg/mL gentamicin (Sigma-Aldrich, St. Louis, MO, USA). Within 3 days of isolation, islets were exposed to 0.5 mM palmitate (Sigma-Aldrich, St. Louis, MO, USA) for 48 h with or without 10 nM Exendin-4 (Sigma-Aldrich, St. Louis, MO, USA). Palmitate was dissolved in 90% ethanol, heated to 60 °C and diluted 1:100 to a final concentration of 0.5 mM and a molar ratio of palmitate:bovine serum albumin of 3.33, corresponding to an unbound palmitate concentration of 27 nM [31]. Control incubations contained the same concentrations of ethanol and albumin. In certain experiments, islet cells were dissociated as follows. Approximately 600 islets were suspended in calcium-free Krebs Ringer Bicarbonate solution containing 1 mmol/L EGTA. Dispersed islet cells were obtained by adding 100 µg/mL trypsin and 8 µg/mL DNase (Roche Diagnostics, Mannheim, Germany) at 37 °C. The samples were checked every 2 min, and the digestion was stopped by adding cold Krebs Ringer Bicarbonate solution when mostly single cells or small cell aggregates (three to five cells) were detected. Dispersed cells were washed carefully with culture medium by centrifugation at 300× g for 2 min and cultured on ibiTreat µ-Dishes (35 mm, high walls, #1.5 polymer coverslip, tissue-culture treated, sterilized and suitable for fluorescence microscopy; Ibidi, Martinsried, Germany), previously treated with Matrigel.

Insulin Secretion Assays

Batches of 15 handpicked islets, previously cultured under the above-mentioned conditions, were pre-incubated at 3.3 mM glucose for 45 min, then challenged with 3.3 mM glucose for 45 min, followed by another 45 min incubation at 16.7 mM glucose [29,32]. Islet insulin content was measured after acid-alcohol extraction, as previously reported [29,32]. Insulin was quantified by a radioimmunometric assay (DIAsource ImmunoAssays S.A., Nivelles, Belgium). Insulin release was expressed as the Insulin Stimulation Index (ISI), calculated as the ratio of insulin release at 16.7 mM glucose over release at 3.3 mM glucose.
Immunostaining

A guinea pig polyclonal antibody against total insulin (preproinsulin, proinsulin and insulin; Abcam, ab7842) was used to perform immunostaining experiments on dispersed HI cells. An Alexa Fluor 541-conjugated goat anti-mouse secondary antibody (Thermo Fisher, Waltham, MA, USA) was used to detect signals for imaging experiments.

Plasmid Transfection

Dispersed human islet cells were transfected using Lipofectamine 2000 reagent as per the manufacturer's instructions, using Opti-MEM culture medium to dilute the reagents (Life Technologies, Thermo Fisher, Waltham, MA, USA), and cells were cultured for 48 h prior to microscopy. The syncollin-EGFP plasmid was a kind gift of Michael Edwardson (Department of Pharmacology, University of Cambridge).

Fluorescence Microscopy and iMSD Analysis

Fluorescence measurements on dHI cells were carried out with a Zeiss LSM 800 inverted confocal microscope (Jena, Germany). Images were acquired by illuminating the sample with a 488 nm laser (for EGFP) using a 63× (N.A. 1.4) oil-immersion objective. EGFP fluorescence was collected between 500 and 600 nm with a GaAsP detector. For iMSD analysis, each acquisition consists of a collection of 500 frames (256 × 256 pixels) at a temporal resolution of 204 ms/frame and with a pixel size of 50 nm. The theoretical framework and main applications of iMSD analysis can be found in Refs. [1][2][3] for molecules diffusing within cells and in Refs. [4][5][6][7][8] for studying the motion of sub-cellular organelles/nanostructures in the cell cytoplasm. Briefly, a time-lapse series of 500 frames was analyzed using a custom script working in MATLAB (MathWorks Inc., Natick, MA, USA), which computes by Fast Fourier methods the spatiotemporal correlation function, defined as g(ξ, η, τ) = ⟨I(x, y, t) · I(x + ξ, y + η, t + τ)⟩ / ⟨I(x, y, t)⟩² − 1, where I(x, y, t) is the fluorescence intensity at pixel (x, y) and time t, ξ and η are the spatial lags, τ is the time lag, and the average runs over space and time. g(ξ, η, τ) can be fitted with a standard Gaussian function, g(ξ, η, τ) = g(τ) exp[−(ξ² + η²)/σ²(τ)], whose variance σ²(τ) is analogous to the mean square displacement extracted directly from imaging, the iMSD. Of particular note, an apparent particle size can be estimated from the intercept of the iMSD curve: in this case, Size_app (apparent) represents the average diameter of the imaged ISGs, i.e., the real size of the ISGs convolved with the instrument's PSF, and the actual average ISG size is then obtained by deconvolving the PSF contribution from the intercept.

Cluster Similarity Analysis

The measured dynamic parameters (i.e., the short-scale diffusion coefficient D, the iMSD intercept value σ_0 and the exponent of anomalous diffusion α) of each image stack define a data point in a three-dimensional space. Thus, the set of data points corresponding to the dynamics of a specific system is a 3D multivariate distribution of the measured values. To quantify the degree of similarity among the investigated dynamics, we calculated the statistical difference between two distributions, as described in a previous report [8].

ISG Tracking Analysis

Trajectory analysis was performed using the TrackMate plugin for ImageJ (NIH, Bethesda, MD, USA). The LoG detector algorithm was applied to detect fluorescence spots; the LAP tracker algorithm was used to perform the tracking analysis.

iMSD-Based Structural/Dynamic Fingerprint of ISGs from Human β-Cells

First, we sought to define the structural and dynamic fingerprint of ISGs in human-derived primary β-cells. The general workflow of our experiments is schematically represented in Figure 1a (further details about islet isolation, cell disaggregation and transfection can be found in Materials and Methods).
Thus, primary human cells disaggregated from islets of Langerhans were plated and transiently transfected with syncollin-EGFP to obtain labeled ISGs suitable for fluorescence microscopy analysis. Please note that syncollin-EGFP overexpression, based on previous results on INS-1E cells [8], does not significantly alter ISG structural and dynamic properties, contrary to what was observed using alternative granule markers (e.g., Phogrin-FPs). Technically, time-lapse series of about 500 images were acquired and analyzed by the iMSD algorithm: the measured iMSD traces are reported, for the sake of clarity, in Figure S1. As already demonstrated [7,8], the iMSD algorithm can extract information on the ISGs' average diffusion law and, upon fitting, parameters describing their structure (i.e., their average size) and motility (i.e., the local diffusivity (D_m) and the anomalous (α) diffusion coefficients) directly from standard imaging, without the need to extract individual trajectories. Normalized distributions of size, D_m, and α extracted by the fitting procedures are plotted in Figure 1b (green histograms) and compared to data obtained from similarly labeled ISGs in INS-1E cells (grey curve, taken from Ref. [26], Figure S1). The triplet of size, D_m and α values for each analyzed human-derived cell is shown in a 3D plot in Figure 1c, together with the 68% confidence ellipsoid (green), as compared to data from INS-1E cells (represented only by the 68% confidence ellipsoid, grey). A cluster similarity analysis yields a statistical cluster distance (SCD) of 0.536, indicating only partial superimposition of the two clusters (SCD = 0, total superimposition; SCD = 1, absence of superimposition) (Table 1). Worthy of mention, in fact, iMSD analysis reveals that the ISG fingerprint in human-derived cells is substantially different from that obtained in INS-1E cells. In more detail, the average size of ISGs is nearly 30% smaller in primary human-derived cells as compared to INS-1E cells (see Table 1). Please note that this result agrees well with TEM-based estimates obtained on similar models: Rosengren and co-workers, in fact, reported a mean granule diameter from primary human β-cells (285 ± 11 nm [33]) appreciably lower than what was reported for granules in INS-1E cells (>315 nm [34]). In addition, ISGs from human β-cells show here a decreased local diffusivity (1.2 × 10⁻³ µm²/s) as compared to their immortalized counterparts (2.4 × 10⁻³ µm²/s), although the mean values of the α anomalous coefficients are identical (~0.71). These results indicate that the structural and dynamic properties of ISGs from human primary and immortalized β-cells are not identical, but these differences in average size and local diffusivity are not surprising, also in light of the growing body of evidence supporting the idea that intracellular vesicles/organelles might be inherently altered, in terms of structural and trafficking properties, in immortalized cellular models as compared to primary cell models (for a review see [35]). Worthy of mention, in parallel, disaggregated and transfected cells were also fixed and immunostained against insulin to distinguish β-cells from non-β-cells (Figure 1d). This control experiment assures that nearly 75% of the syncollin-EGFP-expressing cells analyzed are actually true β-cells in our assays, as they are positive for insulin.
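To make the analysis pipeline described in the Methods more tangible before moving to the treated conditions, the following minimal Python sketch (an illustration only, written under simplifying assumptions — periodic image boundaries, a single central Gaussian peak, no background or bleaching correction — and not the authors' MATLAB script) computes an iMSD trace from an image stack and fits it with a commonly used anomalous-diffusion law, iMSD(τ) = σ₀² + 4Dτ^α, to recover D and α; the pixel size and frame time are taken from the Methods.

```python
import numpy as np
from scipy.optimize import curve_fit

def imsd_from_stack(stack, pixel_nm=50.0, dt_s=0.204, max_lag=20):
    """Minimal iMSD sketch for a stack shaped (frames, y, x): spatiotemporal
    correlation via FFT, Gaussian width per time lag, then a power-law fit."""
    stack = stack.astype(float)
    nf, ny, nx = stack.shape
    sigma2 = []
    for lag in range(1, max_lag + 1):
        corrs = []
        for t in range(nf - lag):
            a, b = stack[t], stack[t + lag]
            # circular spatial cross-correlation via FFT
            num = np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))).real
            g = np.fft.fftshift(num) / (a.mean() * b.mean() * a.size) - 1.0
            corrs.append(g)
        g_mean = np.mean(corrs, axis=0)
        # fit the central correlation peak with a Gaussian g = amp * exp(-r^2 / s2)
        yy, xx = np.indices(g_mean.shape)
        r2 = ((xx - nx // 2) ** 2 + (yy - ny // 2) ** 2) * pixel_nm ** 2
        mask = r2 < (10 * pixel_nm) ** 2            # keep only the central region
        gauss = lambda r2_, amp, s2: amp * np.exp(-r2_ / s2)
        (amp, s2), _ = curve_fit(gauss, r2[mask], g_mean[mask],
                                 p0=(g_mean.max(), (4 * pixel_nm) ** 2),
                                 maxfev=10000)
        sigma2.append(s2)
    tau = dt_s * np.arange(1, max_lag + 1)
    sigma2 = np.array(sigma2)                       # iMSD trace, nm^2
    # anomalous-diffusion law: iMSD(tau) = sigma0^2 + 4 D tau^alpha (D in nm^2/s)
    law = lambda t, s0, D, alpha: s0 + 4.0 * D * t ** alpha
    (s0, D, alpha), _ = curve_fit(law, tau, sigma2,
                                  p0=(sigma2[0], 1.0e3, 0.8), maxfev=10000)
    return tau, sigma2, D * 1e-6, alpha             # D converted to um^2/s
```

On real data one would restrict the fit to time lags where the correlation peak is well above the noise and, as done in the paper, average the extracted parameters over many cells before comparing conditions.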
GLP-1 Agonism Effect on Human β-Cell ISGs under Lipotoxic Stress

The iMSD-based fingerprinting procedure can now be used as a fast and robust tool to evaluate the effect of lipotoxicity induced by Palm treatment and the possible protective effect elicited by Ex-4, as detailed in the following. To this end, the established protocol was slightly modified to include standard ELISA-based insulin secretion assays to be performed just before islet disaggregation in the three relevant conditions of control, exposure to Palm and co-exposure to Palm and Ex-4 (workflow in Figure 2a). The insulin stimulation index (ISI) of control islets (incubation for 48 h in plain M199 culture medium, see Methods for further details) was 3.0 ± 0.6. As expected, prolonged exposure to 0.5 mM palmitate caused a reduction in the ISI to 2.1 ± 0.5 (p < 0.05). However, the concomitant presence of 10 nM Exe-4 in the palmitate-containing medium prevented the reduction in ISI, which was 2.9 ± 0.6. Overall, these observations are in keeping with what was found in previous studies [36][37][38]. In parallel experiments, cells dissociated from islets and transfected with syncollin-EGFP were exposed to the control medium, Palm or Palm + Exe-4, and analyzed by iMSD. The extracted parameters are used to build ISG fingerprints in all the relevant experimental conditions. The structural/dynamic fingerprint of control cells responds to acute glucose stimulation (passing from 3.3 to 16.7 mM) as expected based on previous data on INS-1E cells [26]. In fact, glucose does not impact the average ISG size (Figure 2c, left plot;
Worthy of mention, the variations in both parameters observed in human cells are of the same magnitude (~2-folds increase in D m ,~1.15-folds increase in α) of those measured in immortalized cells by some of us [26] and others [21]. Such an effect of glucose on the average dynamic properties of ISGs is commonly interpreted as the combined result of granule mobilization (increase in diffusivity) and overall commitment to secretion by active/directed intracellular movements, presumably along with cytoskeleton components. This picture was also confirmed, in INS-1E cells, by experiments in which similar fingerprint variations (e.g., increase in α upon stimulation) were abolished by cytoskeleton disruption or cholesterol overload [26]. With this in mind, we analyzed the effect of glucose stimulation in cells exposed to 0.5 mM Palm. Please note that the average size of ISGs is not affected by Palm neither in low nor high glucose conditions (Figure 2c, left plot; Table 2). By contrast, ISGs dynamic properties are substantially altered. In particular, Palm is able to abolish the increase in both granule D m and α coefficients typically induced by glucose stimulation (Figure 2c, middle-right plot; Table 2). At the same time, co-treatment with 10 nM Exe-4 is sufficient to restore granule dynamics, both in terms of D m and α, to the same extent as control cells (Figure 2c, middle-right plot; Table 2), revealing a protective effect of this compound against the observed effects of lipotoxicity. We are prompted to speculate that ISGs may have a reduced propensity to perform active transport in presence of palmitate. As demonstrated elsewhere both for ISGs [26] and other organelles [27], the iMSD-derived α coefficient is averaged over the whole population of granules captured during imaging: as such, it reflects the sum of all single-granule contributions. This latter can be appreciated by standard analysis of single-granule trajectories (exemplary cases for the experimental conditions tested here are reported in Figure S2). Isolated human Langerhans islets were exposed for 48 h to control medium (Ctrl), 0.5 mM Palm and 0.5 mM Palm + 10 nM Exe-4 and then tested for insulin secretion. In a parallel set of experiments, HI from the same donor were dissociated and transfected with Syncollin-EGFP plasmid in order to label ISGs. Dispersed cells were exposed for 48 h to conditions described above, then fluorescently labeled ISGs' structural and dynamic properties were evaluated in a confocal microscope experiment. (B) Insulin stimulation index measured with ELISA kit assay for intact islets exposed to control medium, Palm and Palm + Exe-4. (* p < 0.05). (C) iMSD derived parameters (size, Dm and α) were measured in a low (3.3 mM) glucose concentration and after stimulation with high (16.7 mM) glucose in Ctrl, Palm and Palm + Ex-4 treated cells. Data represented as Mean ± SE (* p < 0.05, ** p < 0.01-t-test). 'Ctrl': control medium; 'Palm': 0.5 mM palmitate; 'Palm + Exe': 0.5 mM palmitate + 10 nM Exendin-4. As mentioned above, these results from human-derived cells mirror what was observed in INS-1E cells in previous work [26]. In our opinion, it is interesting to note that the palmitate-induced lipotoxic effect does not modify the ISG characteristic size. 
In turn, this may suggest that the mechanism of action of palmitate does not imply palmitate accumulation at the ISG membrane or, in general, direct action on the ISG structure, contrary to what was observed for cholesterol, which was able to induce a ~40% increase in ISG size by direct accumulation at the granule-membrane level [14,26]. In keeping with this, it was suggested that palmitate-induced lipotoxicity in β cells might be exerted through either a mitochondria-or ER-dependent pathways [5,39], both leading to cellular stress (e.g., ROS production), inflammation and, finally, β-cell damage. Indeed, it was also postulated that the protective role of GLP-1 receptor agonists or other compounds (e.g., oleate) might be played by stimulating pro-survival and/or anti-inflammatory mechanisms [40,41]. Isolated human Langerhans islets were exposed for 48 h to control medium (Ctrl), 0.5 mM Palm and 0.5 mM Palm + 10 nM Exe-4 and then tested for insulin secretion. In a parallel set of experiments, HI from the same donor were dissociated and transfected with Syncollin-EGFP plasmid in order to label ISGs. Dispersed cells were exposed for 48 h to conditions described above, then fluorescently labeled ISGs' structural and dynamic properties were evaluated in a confocal microscope experiment. (B) Insulin stimulation index measured with ELISA kit assay for intact islets exposed to control medium, Palm and Palm + Exe-4. (* p < 0.05). (C) iMSD derived parameters (size, D m and α) were measured in a low (3.3 mM) glucose concentration and after stimulation with high (16.7 mM) glucose in Ctrl, Palm and Palm + Ex-4 treated cells. Data represented as Mean ± SE (* p < 0.05, ** p < 0.01-t-test). 'Ctrl': control medium; 'Palm': 0.5 mM palmitate; 'Palm + Exe': 0.5 mM palmitate + 10 nM Exendin-4. As mentioned above, these results from human-derived cells mirror what was observed in INS-1E cells in previous work [26]. In our opinion, it is interesting to note that the palmitate-induced lipotoxic effect does not modify the ISG characteristic size. In turn, this may suggest that the mechanism of action of palmitate does not imply palmitate accumulation at the ISG membrane or, in general, direct action on the ISG structure, contrary to what was observed for cholesterol, which was able to induce a~40% increase in ISG size by direct accumulation at the granule-membrane level [14,26]. In keeping with this, it was suggested that palmitate-induced lipotoxicity in β cells might be exerted through either a mitochondria-or ER-dependent pathways [5,39], both leading to cellular stress (e.g., ROS production), inflammation and, finally, β-cell damage. Indeed, it was also postulated that the protective role of GLP-1 receptor agonists or other compounds (e.g., oleate) might be played by stimulating pro-survival and/or anti-inflammatory mechanisms [40,41]. Table 2. iMSD-extracted parameters for dispersed HI exposed to Ctrl medium, Palm and Palm + Exe. Conclusions In conclusion, in our experimental conditions, GLP-1R agonism shows a previously unknown beneficial effect exerted at the level of intracellular insulin granules, in particular by maintaining their dynamic properties at physiological levels. In brief, it was shown that Palm affects ISGs dynamics in response to acute glucose stimulation by abolishing the ISGs mobilization effect imparted typically by glucose and, concomitantly, by reducing the extent of granule active/directed intracellular movement. Of note, co-treatment with Exe-4 is sufficient to normalize ISG dynamics. 
These effects are correlated with standard glucose-stimulated insulin secretion (GSIS), which was significantly reduced in cells exposed to Palm but preserved in cells concomitantly exposed to 10 nM Exe-4. Alongside the main goal of the present work, let us point out that, thanks to the proposed approach based on fast and robust spatiotemporal correlation spectroscopy, the structural and dynamic properties of ISGs from living primary human β-cells are readily extracted and can be compared to other β-cell-like standards. Despite the undeniable usefulness of immortalized models, in fact, it is important to understand how reliable/predictive these models are as compared to their primary, human-derived counterparts. Here, for instance, the iMSD analysis revealed that the granule fingerprint in human-derived β-cells is substantially different from that of immortalized INS-1E cells, as previously measured by some of us [8]. We envision several lines of development based on the present results. From a methodological point of view, the iMSD method proved to be a fast and robust approach to screen the structural and dynamic properties of subcellular structures, such as the ISG, in different experimental conditions. It requires only a microscope equipped for fast acquisition, and the structure of interest can be tagged with any genetically encoded or organic fluorophore, thus also enabling multi-channel imaging. Related to this, we believe that cross-iMSD analysis will be used in the near future to select sub-populations of subcellular structures and to probe their interaction and co-diffusion within the cell. A few limitations should be discussed: first, by using iMSD, the information on single objects (e.g., trajectories) is inevitably lost, as quantitative parameters are extracted as averages over the entire population of diffusing objects captured by imaging. This, in turn, implies that the intracellular heterogeneity of both the structural and dynamic properties of the object of interest is averaged out. Finally, any detail related to the large amount of molecular information enclosed in dynamic subcellular nanostructures is averaged out during the measurement due to limited temporal resolution. Theoretically, however, there is no technical limit to the possibility of retrieving molecular information, provided that a sufficient acquisition speed can be achieved [42]. From a biomedical point of view, the natural continuation of the present study entails the use of intact islets, where intercellular signals/feedback may play an important regulatory role that is inevitably lost in disaggregated cells. Moreover, the use of disaggregated cells implies that β-cells are cultivated on 2D supports (e.g., glass) and not maintained in the natural 3D context of the tissue, and this in turn inevitably induces cell morphological rearrangements with potential effects on functional cell properties. To apply the present approach to intact islets, however, a few technical and methodological issues have to be addressed, the main one being that proper fluorescence labeling must be achieved in the intact islet. This step, in turn, implies the use of either virus-based technologies [43,44] or newly developed organic dyes for granule labeling [45,46], thus avoiding the potentially perturbative effects of lipofection (i.e., Lipofectamine in the present work) on cells. Among organic dyes, ZIGIR, in particular, shows promising properties as a membrane-permeable, zinc-chelating agent [45].
As such, it would allow fluorescent labeling of granules both in 2D-cultured cells and in the intact islet, with the only remaining limitation being that glucagon-containing vesicles within α-cells will concomitantly become fluorescent, as they also contain zinc ions.

Supplementary Materials: The following are available online at https://www.mdpi.com/article/10.3390/pharmaceutics13091403/s1. Figure S1: iMSD curves. iMSD curves of n = 123 analyzed acquisitions of syncollin-EGFP-labelled ISGs in HI (green curves) and the relative average curve (bold green). In black, the average iMSD curve of n = 48 previously published acquisitions of syncollin-EGFP-labelled ISGs in INS-1E cells.

Informed Consent Statement: Written informed consent was obtained from donors' next-of-kin.

Conflicts of Interest: The authors declare no conflict of interest.
Current Knowledge of Silver and Gold Nanoparticles in Laboratory Research—Application, Toxicity, Cellular Uptake Silver and gold nanoparticles can be found in a range of household products related to almost every area of life, including patches, bandages, paints, sportswear, personal care products, food storage equipment, cosmetics, disinfectants, etc. Their confirmed ability to enter the organism through respiratory and digestive systems, skin, and crossing the blood–brain barrier raises questions of their potential effect on cell function. Therefore, this manuscript aimed to summarize recent reports concerning the influence of variables such as size, shape, concentration, type of coating, or incubation time, on effects of gold and silver nanoparticles on cultured cell lines. Due to the increasingly common use of AgNP and AuNP in multiple branches of the industry, further studies on the effects of nanoparticles on different types of cells and the general natural environment are needed to enable their long-term use. However, some environmentally friendly solutions to chemically synthesized nanoparticles are also investigated, such as plant-based synthesis methods. Introduction Nanoparticles are defined as structures with at least one of the dimensions in the 1 to 100 nm range [1]. These particles enter cells mostly through endocytosis, particularly endocytotic vesicles formation and the release of ions into the cytoplasm [2][3][4]. From the clinical standpoint, the use of nanoparticles (NPs) is mainly motivated by their relatively large surface-to-volume area during interaction with cells. Further advantages include their specific physicochemical characteristics, such as catalytic properties and relatively low melting point (compared to the macroscopic properties of the metal they are derived from). Moreover, to ensure the safety of their use and appropriate dosage, correlations between these characteristics and the potential toxicity of nanoparticles can be determined using nanotoxicology techniques [5][6][7]. It has been proven that silver nanoparticles (AgNP) have the ability to penetrate the cellular walls of bacteria, altering their cell membranes and even potentially causing cell death. Moreover, through the release of silver ions, it is possible to increase cell membrane permeability, produce reactive oxygen species, and disturb DNA replication [13][14][15][16]. Absorption of Gold Nanoparticles Studies in rats show that gold nanoparticles can be absorbed through the respiratory and digestive systems [33,64]. In Sprague-Dawley rats subjected to spheroid AuNP (diameter under 6 nm) inhalation for 90 days, a decrease in respiratory parameters, i.e., lung function, respiratory volume, and minute volume, was observed compared to the control. Furthermore, histopathological examination demonstrated minimal alveoli, inflammatory infiltration of mixed cell type (lymphocytes/neutrophils/macrophages), and increased macrophage counts in rats receiving high doses of AuNP (20 µg/m 3 ) [64]. Moreover, studies by Kreyling et al. (2018) also confirmed the absorption of "potatoshaped" gold nanoparticles (size: 20 nm, density: 19.5 g/cm 3 ) by inhalation in rats. About 30% of AuNP accumulated in the epithelium of the respiratory tract, causing rapid mucociliary removal and swallowing into the gastrointestinal system. Long-term removal (after 28 days) of AuNP was dominated by macrophage-mediated transport through interstitial tissue to the larynx and gastrointestinal tract. 
Furthermore, AuNP retention has also been observed in the liver, spleen, kidneys, uterus, and brain [65]. In Wistar rats, ten days after intravenous administration of 25 nm colloidal AuNPs (0.3619 mg of particles/mL, per 1 kg), more than 50% of the AuNP dose accumulated in the liver, with smaller amounts in the lungs and spleen. This occurrence was associated with the collection of AuNPs from the circulation by the mononuclear phagocyte system. The total AuNP content of all organs represented 60% of the initial dose. In contrast, oral administration showed almost 50 times lower AuNP levels at the same amount (1.4% of the initial dose). Most AuNP was excreted in feces within four days after exposure. In turn, alterations in biochemical parameters were observed 72 h after intravenous AuNP administration: an increase in AST (aspartate aminotransferase) was observed, with a decrease in ALT (alanine aminotransferase), which affects the physiology of the liver. Furthermore, an increase in blood glucose has also been noted; thus, an effect of AuNP on pancreatic function cannot be excluded [66]. The penetration of gold nanoparticles through the skin of the hind paw and the anterior abdominal wall of Sprague-Dawley rats was also confirmed by Raju et al. (2018), with smaller AuNPs (22 nm) showing higher penetration compared to larger nanoparticles (105 and 186 nm). The effect of 3-hour AuNP incubation on a fibroblast cell line (L929 mouse fibroblasts) was also investigated, with no observed effect of AuNP on cell viability at any of the concentrations used (0.1, 1, and 10% v/v) [67]. In mice, the kidneys were the primary site of AuNP accumulation after oral administration for 8 days at concentrations of 25, 22, 20, 18, and 15 µg gold/kg bodyweight and subsequent intestinal absorption. AuNP can induce anti-inflammatory effects in RAW264.7 macrophage cells pretreated with 1/1000 OD of the AuNP for 5 h before stimulation with lipopolysaccharides (LPS) and incubation for another 20 h; the AuNPs act by reducing lipopolysaccharide receptor expression on the cell surface, as well as by catalytic detoxification of peroxynitrite and hydrogen peroxide. The highest accumulation of gold nanoparticles was shown by those that were 5 nm in size and coated with PVP, compared with 5 nm AuNPs coated with citrate or tannic acid (TA) [68]. In other studies, conducted in men subjected to nanoparticle inhalation for 2 h during intermittent exercise, AuNP was also confirmed to enter the lungs. Gold was detected in the urine after exposure to 4 nm AuNPs, but not in the urine of volunteers exposed to larger particles (34 nm). In mice, gold nanoparticles were likewise detected in urine only after exposure to particles ≤5 nm. AuNP found in human blood was usually at low levels after inhalation, although the concentration of smaller particles was notably higher. This effect was also confirmed in mice, in which both the frequency of gold detection and blood gold levels were significantly higher after exposure to smaller particles [69]. The possibility of absorption through human skin has been studied using surgically resected dermal fragments incubated for 24 h with AuNP. The permeability of spherical nanoparticles (15 and 100 nm) was confirmed using TEM (transmission electron microscopy), with nanoparticles observed in the deeper stratum corneum, epidermis, and dermis [70].
Nanoparticle Toxicity Excessive ROS production can cause DNA damage and activate several signaling pathways, e.g., the p53 suppressor protein, AKT (serine/threonine protein kinase B), and MAPK (mitogen-activated protein kinases) [71]. Furthermore, nanoparticle toxicity may increase pro-inflammatory cytokine expression and the activation of pro-inflammatory cells such as macrophages and neutrophils, which results in increased ROS production [2]. In various physiological states, ROS are produced as intermediate products. Their concentrations in cellular organelles are strongly regulated by various detoxifying enzymes, such as superoxide dismutase (SOD), glutathione peroxidase (GPx), and catalase (CAT), or by different antioxidants including flavonoids, ascorbic acid, vitamin E, and glutathione (GSH). The production of free radicals induced by nanoparticles leads to conversion of GSH to its oxidized form, followed by induction of oxidative stress [2,12,72–75]. Silver Nanoparticle Toxicity Based on the data shown in Table 1, it appears that an increase in AgNP levels affects cell morphology and viability and the production of ROS [76–84]. A more pronounced impact of smaller nanoparticles on apoptosis induction, and an increase in dye fluorescence intensity resulting from increased production of reactive oxygen species compared with larger nanoparticles, were also observed [85–87,89]. In the cited works [86,87,89], the formation of reactive oxygen species was monitored by measuring DCFH-DA fluorescence intensity; in the publication by Sriram [85], the level of ROS production was determined through a nitrotetrazolium blue (NBT) reduction assay. Rat alveolar macrophages showed higher fluorescence intensity of the DCFH-DA dye with the smaller silver nanoparticles (15 nm), compared to 30 and 50 nm. There was also an increase in dye fluorescence intensity with increasing concentration of silver nanoparticles (5, 10, 25, 50, 75 µg/mL) [89]. Studies on human lymphocytes showed a correlation between increasing fluorescence intensity of the DCFH-DA dye and the concentration of silver nanoparticles (10, 20, 75, 100 µg/mL) [86]. HepG2 cells showed the most significant increase in DCFH-DA dye fluorescence in the presence of the smallest nanoparticles (5 nm) compared with 20 and 50 nm [87]. A link between apoptosis and incubation with silver nanoparticles of different sizes has been proposed by several groups: Carlson et al. [89]-Based on the intensity analysis of the fluorescent cationic dye 5,5′,6,6′-tetrachloro-1,1′,3,3′-tetraethylbenzimidazolylcarbocyanine iodide (JC-1), it was shown that 55 nm AgNPs, in contrast to 15 and 30 nm, did not lead to significant toxicity at concentrations up to 50-75 µg/mL. The authors suggest that the loss of mitochondrial membrane potential (MMP) may reflect apoptosis proceeding through the mitochondrial apoptotic pathway. Liu et al. [87]-The study investigated median effective concentrations for cell mortality (EC50), based on both mass concentration and surface area, in A549, HepG2, MCF-7, and SGC-7901 cell lines. It was shown that silver nanoparticles of the smallest size (5 nm) were the most toxic to cells, compared to 20 and 50 nm.
Sriram [85]-Analysis of the effects of AgNP concentrations ranging from 100 to 1000 nM on BREC cells showed that the smaller particles (22.4 nm) induced apoptosis at concentrations of 300 nM and above, while the larger ones (49.5 nm) did so only at concentrations above 500 nM. Smaller nanoparticles cause structural modifications, e.g., changes in lymphocyte cell membrane morphology with nanoparticles ≤ 20 nm, with no similar effects observed for particles ≥ 200 nm in size [86]. Cell shrinkage and no visible plasma membrane were observed in rat alveolar macrophages after treatment with 15 nm, but not 30 nm, AgNPs [89]. Furthermore, larger AgNPs (42.5 nm) did not induce cell shrinkage in BREC cells, unlike those of smaller sizes (22.4 nm) [85]. In HepG2 cells, swelling occurred at the smallest AgNP size (5 nm), whereas when incubated with larger NPs (20 and 50 nm) some cells retained their typical structure. Likewise, Hoechst 33342 analysis showed condensation of cell nuclei at an AgNP size of 5 nm, while most nuclei of cells incubated with AgNPs of 20 and 50 nm appeared normal [87]. It was also noted that changing the external molecular properties of NPs through reactive groups on their surface modifies their effects on cellular processes [2], with some nanoparticles able to form aggregates or agglomerates. In rat alveolar macrophages, cells incubated with larger nanoparticles (30 nm) showed agglomeration of nanoparticles both inside and outside the cells, while at 50 nm agglomeration occurred only on the cell surface [89]. Structural modifications and changes in external molecular properties ultimately lead to the formation of reactive groups on the particle surface [2], which can directly result in ROS production. In addition, the adsorption of surrounding particulate matter, such as ozone and nitric oxide, on the NP surface affects the induction of oxidative stress [2,90]. Small NPs appear to be more toxic than large NPs, which may be explained by their relatively larger surface-area-to-volume ratio [30] (a simple geometric illustration is given below). Initial NAC (N-acetyl-L-cysteine) treatment of human Chang liver cells [78], the human liver cancer cell line HepG2 [80], the human lung cancer cell line A549 [82], and the mouse dendritic cell line DC2.4 [93] reduced ROS production in cells incubated with AgNPs. The use of the synthetic antioxidant NAC in cells incubated with AgNP prevented the change in mitochondrial membrane permeability and the loss of its potential, which is a characteristic feature of apoptosis induction [93]. The use of instrumental neutron activation analysis (INAA) by Antsiferova et al. (2015) allowed the biokinetics of silver nanoparticles in biological tissues to be studied. The results confirm that, after a single exposure of mice to 34 nm AgNP, nanoparticle uptake was highest in the liver and blood, compared to the brain. The animals were exposed to AgNP at a concentration of 100 µg/mL for one day. Furthermore, with an increase in exposure time of up to two months at the same dose, a higher accumulation of nanoparticles was observed in the liver than in the blood. The effectiveness of a month-long distilled water feeding on the removal of silver nanoparticles from the organism after two months of exposure was also investigated, with the fastest reduction in AgNP observed in the liver. The study confirmed the ability of the liver and blood to quickly dispose of silver nanoparticles. It also ruled out the possibility of dangerous effects of AgNP on these organs [94].
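As a simple numerical aside on the size dependence discussed above, the short calculation below shows how the surface-area-to-volume ratio of an ideal spherical particle scales with diameter. It is purely geometric and is not taken from any of the cited studies; the chosen diameters simply echo the sizes mentioned in this section.

```python
# Surface-area-to-volume ratio of ideal spherical nanoparticles (illustrative).
import math

for d_nm in (5, 20, 50):                    # diameters discussed above, in nm
    r = d_nm / 2
    surface = 4 * math.pi * r**2            # nm^2
    volume = (4 / 3) * math.pi * r**3       # nm^3
    print(f"d = {d_nm:2d} nm  ->  S/V = {surface / volume:.2f} nm^-1")
# For a sphere S/V = 6/d, so a 5 nm particle exposes ten times more surface
# per unit volume than a 50 nm particle.
```

This 6/d scaling is one simple way to rationalize why, all else being equal, smaller particles tend to be more reactive per unit mass, although the studies above show that coating, shape, and aggregation can modify or even reverse the trend.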
Another inhalation study (2020), conducted on Sprague-Dawley rats exposed for 28 days to silver nanoparticle aerosols of variable size (18.1-19.6 nm) in three concentration groups: low (31.2 ± 8.5 µg/m³), medium (81.8 ± 11.4 µg/m³), and high (115.6 ± 30.5 µg/m³), confirmed the presence of AgNPs in bronchoalveolar lavage (BAL). Furthermore, similar to previously mentioned studies, no effect of the nanoparticles on body weight was noted. In turn, a statistically significant change in the weight of the right lung was noted after a 7-day post-exposure observation period in the medium-concentration group. Biochemical changes, such as altered hemoglobin concentration in erythrocytes, lymphocyte percentage, aspartate transaminase (AST), and lactate dehydrogenase (LDH) levels, were also observed at each of the concentrations used. Based on the number of neutrophils, indicating inflammation, significant increases were observed in the high-concentration group [58]. In a separate study of gold nanoparticles [45], histopathological analysis of preparations from the liver, lungs, and spleen of mice was carried out using a fluorescence microscope. REF cells showed toxicity of only 10% at the highest AuNP dose; for concentrations of 1 and 5 µg/mL, this parameter was even lower. The analyzed cells showed no change in nuclear morphology, and no changes were seen in histopathological sections. Moreover, the nanoparticles did not affect the body weight of the mice. The results of these in vitro and in vivo tests therefore confirmed the safety of gold nanoparticle use at low concentrations [45]. Gold Nanoparticle Toxicity The toxicity of spherical AuPEG nanoparticles in mice was also studied by Chinese researchers in 2011. The study focused mainly on the size of the nanoparticles, with four different sizes used: 5, 10, 30, and 60 nm. Mice were treated with gold nanoparticles at a dose of 4000 µg/kg bodyweight for 28 days. A particle analyzer was used to assess the concentration of nanoparticles in the heart, lungs, spleen, and kidneys. Using a transmission electron microscope, preparations of bone marrow and blood were also characterized. Furthermore, biochemical blood tests were performed to evaluate the numbers of morphotic elements and enzyme levels. Both the heart and kidneys exhibited the highest concentration of 5 nm nanoparticles. In turn, 10 nm nanoparticles were most commonly detected in the liver, while 30 nm particles mainly aggregated in the spleen. Five-nanometer nanoparticles were also observed in the bone marrow without any significant reduction in their size, indicating that they did not undergo disintegration. Furthermore, other nanoparticle sizes were also localized in both intracellular and extracellular compartments of the bone marrow, indicating longer NP retention in this tissue. In blood, nanoparticles of 5 nm were aggregated and could form 10-20 nm long structures. A similar situation was observed at 10 and 60 nm, but not at 30 nm. Moreover, the presence of gold nanoparticles after 28 days indicated their long retention time in blood. Consistent with the lack of an immune response to gold nanoparticles of different sizes, no statistically significant differences in the mg/g index were detected between the study and control groups in either the thymus or the spleen. Hematology results after 28 days of intraperitoneal injection at the dose of 4000 µg/kg gold nanoparticles are shown in Table 2. Table 2. Effect of gold nanoparticle size on hematology results in mice incubated with AuNP at a dose of 4000 µg/kg for 28 days.
Numerical values are given in nm. ↑↑ largest increase, ↑ increase, ↓ decrease. (Table columns: leukocytes; erythrocytes; hemoglobin and mean corpuscular hemoglobin concentration; hematocrit; mean erythrocyte volume; thrombocytes.)
An increase in the number of white blood cells observed in mice treated with 10 nm particles indicates an inflammatory reaction. In contrast, a decrease in the number of white blood cells observed in mice treated with 5 and 30 nm particles may be associated with infection. Furthermore, the increase in the number of red blood cells found in mice treated with 10 and 60 nm nanoparticles indicates that particles of this size affect the hematopoietic system. The level of biochemical enzymes in the blood of mice was also investigated (Table 3). Table 3. Levels of biochemical enzymes in the blood of mice treated with gold nanoparticles for 28 days at a dose of 4000 µg/kg. ALT-alanine transaminase, AST-aspartate transaminase, GLOB-globulin, CREA-creatinine, ALB-albumin, TBIL-total bilirubin. ↑↑ largest increase, ↑ increase, ↓ decrease, ↓↓ largest decrease. (Table columns: ALT; AST; GLOB; CREA; ALB; TBIL.)
Biochemical changes suggest that 10 nm particles may be highly toxic to the liver, and 60 nm particles could present toxicity to both the kidneys and liver. However, no liver and kidney damage was observed for 5 and 30 nm AuPEGs. The study results suggest that 10 nm gold nanoparticles coated with PEG are not sufficiently safe to be administered at a concentration of 4000 µg/kg. The results of the studies presented above contradict previous assumptions linking in vitro cell incubations with smaller nanoparticles with higher toxicity [95]. Lasagna-Reeves et al. (2010) investigated the possibility of toxicity induction in 12-week-old mice through the administration of 12.5 nm colloidal AuNPs with regular shape for eight days, in doses of 40, 200, and 400 µg/kg/day. The evaluation of this parameter aimed to clarify the possibility of AuNP use in, e.g., drug delivery or disease diagnostics. Hematological analysis evaluating white and red blood cells, platelets, hemoglobin, and hematocrit was performed using a hematological analyzer (Coulter T540 hematology system). Histopathological evaluation was performed based on H&E (hematoxylin-eosin) staining. The distribution of gold nanoparticles was characterized using GF-AAS (graphite furnace atomic absorption spectrophotometry) and ICP-MS (inductively coupled plasma mass spectrometry). The concentration of gold nanoparticles in the liver, kidneys, spleen, and lungs increased with dosage. Considering different sizes of organs, the total percent of the applied dose detected was highest in the liver, followed by kidneys and the spleen. In turn, the level of AuNP in the blood was independent of the administered doses, which indicates that uptake and absorption of gold nanoparticles mainly occur in tissues. In addition, the percentage of accumulated gold decreases with the increase in AuNP dose, which suggests efficient removal of nanoparticles from the body. Concentrations of urea nitrogen, uric acid, and creatinine were examined to evaluate AuNP nephrotoxicity. Moreover, total bilirubin and alkaline phosphatase levels in the blood were used for functional evaluation of the liver and bile ducts. The biochemical analysis of these parameters did not show statistically significant differences in any metabolites, regardless of the dose used. Furthermore, the hematological analysis did not show statistically significant differences between gold nanoparticle incubated mice and control samples. This confirms the conclusion that gold nanoparticles do not cause extensive inflammation in mice. In addition, for observation of possible toxic effects, macroscopic morphological analysis of tissues was carried out. Tissue damage was not shown in any of the sections taken from the kidneys, liver, spleen, brain, or lungs. Gold nanoparticles also did not affect the weight of mice regardless of the dose of nanoparticles used [96]. The effect of the surface functionalization of gold nanoparticles was studied by Zhang et al. (2020); AuNPs stabilized by PEG (polyethylene glycol) are commonly used as nanodrug carriers due to their biocompatibility, obtained through nanoparticle stabilization. The study compared surface functionalization of nanoparticles using PEG (AuPEG) and Trolox (AuTrolox).
The second substance is a Vitamin E derivative, with its administration leading to inhibition of oxidative stress through the removal of ROS and nitrogen oxides. The studies were conducted in vitro on the SH-SY5Y neuroblastoma cell line. The cells were incubated with gold nanoparticles of 4.5, 13, and 30 nm and 25 µg/mL for 24 h. An MTT test was used to investigate the toxicity of SH-SY5Y cells. Using a laser confocal microscope, the level of reactive oxygen species was measured. Furthermore, malondialdehyde (MDA, malondialdehyde) levels were assessed as a marker of oxidative stress. Subsequently, mitochondrial membrane potential was also investigated using JC-1 dye and confocal microscopy techniques. In turn, the effect of gold nanoparticles on apoptosis values was measured using a flow cytometer and a commercial reagent kit. To determine which apoptotic proteins are involved in signaling pathway induction, Bcl-2, caspase-3, and PARP (poly (ADP-ribose) polymerase) proteins were investigated. Finally, using the ICP-MS (inductively coupled plasma mass spectrometer) method, the distribution of AuPEG and AuTrolox in individual mouse organs was investigated. AuPEG nanoparticles of 4.5 nm were shown to have higher toxicity than particles of other sizes. Therefore, they were selected for further studies evaluating the effects of antioxidants on nanoparticle-induced oxidative stress. For AuPEG, six times higher levels of ROS and two times higher MDA values were demonstrated compared to control. However, a significant reduction in MDA and ROS occurred after the surface of nanoparticles was functionalized with Trolox. Furthermore, through increased green fluorescence signal and reduced red signal of JC-1 dye, mitochondrial damage in cells incubated with AuPEG was confirmed. A decrease in green fluorescence levels was also observed when AuTrolox was used. AuPEG and AuTrolox nanoparticles (4.5 nm) were used to evaluate apoptosis, at concentrations of 2.5, 5, 10, 25, 50, 75, and 100 µg/mL, during 24-hour incubation. AuPEG was shown to significantly affect cell viability, with a decrease of almost 40% confirmed for concentrations of 50, 75, and 100 µg/mL. However, AuTrolox mediated inhibition of apoptosis was also detected. After 48-hour incubation of SH-SY5Y cells, at 25 µg/mL concentration, the apoptosis level in AuPEG treated cells was 40%, while it was 20% for AuTrolox. Flow cytometry results also confirmed differences in apoptosis induction. Furthermore, Western-blot analysis showed a decrease in Bcl-2 protein expression, and activation of caspase-3 and PARP. The study confirmed that the use of Trolox on the surface of gold nanoparticles significantly reduces the adverse effects of ROS and MDA and improves antioxidant enzyme activities compared to AuPEG. According to the researchers, Trolox is an antioxidant with proven effects in the reduction of oxidative stress. The authors also state that combining antioxidants with gold nanoparticles can increase their activity. The possibility of AuPEG induction of apoptosis via the mitochondrial apoptosis signaling due to ROS presence was also confirmed. However, this process was reversed after the application of AuTrolox. An in vivo model of male mice weighing approximately 20 g was used to further assess the neurotoxicity of nanoparticles, with the animals receiving ranging NP doses (12.5 and 25 mg/Kg). 
Using ICP-MS, the greatest accumulation of gold nanoparticles was detected in the liver and then in the spleen, especially compared with the heart, kidneys, and lungs. This occurrence was associated with increased phagocytic activity of the cells of the mononuclear phagocyte system. Furthermore, higher concentrations of nanoparticles translated into their higher accumulation in the examined organs. No difference in body weight was observed between mice treated with AuPEG and AuTrolox. In samples taken from the mouse hippocampus, the levels of antioxidant enzymes such as SOD (superoxide dismutase), CAT (catalase), and GSH-Px (glutathione peroxidase) were evaluated. The principle of antioxidant action is presented in Figure 2. Application of AuPEG at 25 mg/kg for three months caused a decrease in MDA and antioxidant enzyme levels (SOD, CAT, GSH-Px) in the hippocampus, compared to AuTrolox [72]. Through induction of oxidative stress in cells, gold nanoparticles lead to the accumulation of free radicals and thus reduce the activity of liver antioxidant enzymes (SOD, CAT) and GSH levels [97]. Overall, the publication of Zhang et al. (2020) demonstrates the possibility of inducing oxidative stress and apoptosis by 4.5 nm PEG-stabilized AuNPs. Inhibition of this process through the use of an antioxidant, Trolox, was also noted. This research provides a reasonable basis for the potential development of a nanoparticle drug delivery system [72]. A mouse fibroblast cell line (Balb/3T3) was the subject of research by Coradeghini et al. (2013). Gold nanoparticles of 5 and 15 nm in size, at concentrations of 10, 50, 100, 200, and 300 µM, were used. Incubation with AuNP was conducted for 2, 24, and 72 h. Colony-forming efficiency (CFE) and Trypan blue assays were used to determine the toxicity of the nanoparticles. A transmission electron microscope was also used to evaluate NP localization qualitatively. In turn, the quantitative uptake of nanoparticles by Balb/3T3 cells was characterized using ICP-MS. In the CFE test, the smaller nanoparticles showed higher toxicity at 72 h of incubation, especially at concentrations above 50 µM. For 15 nm NPs, no statistically significant differences were observed for any concentration or incubation time. Based on the Trypan blue dye test, 5 and 15 nm nanoparticles were not toxic to mouse fibroblasts, regardless of the concentration and exposure period used. Furthermore, the cells were incubated with 10 and 30 µM gold nanoparticles for 2 and 24 h. Using a transmission electron microscope, cellular internalization was determined to occur for both AuNP sizes used. Moreover, enclosure of nanoparticles inside vesicles was observed; however, AuNPs did not migrate to other organelles, such as the nucleus, mitochondria, or Golgi apparatus. The number of nanoparticles collected in endocytotic vesicles increased with concentration. Moreover, autophagosome formation was also shown to be possible. Furthermore, it was noted that even after 2 h of incubation, the nanoparticles had already reached the end of the endo/lysosomal pathway. The uptake of nanoparticles after 2 h was confirmed using ICP-MS (inductively coupled plasma mass spectrometry). The number of internalized nanoparticles increased with incubation time and was higher for the 15 nm particles. Summarizing, the study confirmed 5 nm AuNP toxicity after 72 h of incubation at concentrations above 50 µM. Therefore, the size of the nanoparticles used should be taken into account in studies determining their effects on cell biology [98].
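Viability-versus-concentration data of the kind produced by the CFE, Trypan blue, MTT, and NR assays discussed in this section are commonly summarized by fitting a dose-response curve and reporting an EC50. The sketch below shows a minimal way to do this with a four-parameter Hill function; the data points, starting guesses, and function names are invented for illustration and are not taken from the cited studies.

```python
# Hedged illustration: EC50 estimation from viability-vs-concentration data.
import numpy as np
from scipy.optimize import curve_fit

def hill(c, bottom, top, ec50, n):
    """Four-parameter dose-response curve (viability decreases with concentration)."""
    return bottom + (top - bottom) / (1.0 + (c / ec50) ** n)

conc = np.array([0.3, 0.6, 1.2, 2.5, 5.0, 10.0, 50.0])        # µg/mL, invented
viab = np.array([98.0, 95.0, 88.0, 72.0, 55.0, 38.0, 12.0])   # % viability, invented

popt, _ = curve_fit(hill, conc, viab, p0=[5.0, 100.0, 5.0, 1.0], maxfev=10000)
print(f"estimated EC50 ≈ {popt[2]:.1f} µg/mL, Hill slope ≈ {popt[3]:.2f}")
```

Reporting EC50 values on a common scale (mass concentration or total surface area, as in the Liu et al. comparison above) makes toxicity rankings across particle sizes easier to interpret.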
Scientists in Gdansk 2019 investigated the effect of the shape of gold nanoparticles on toxicity in cancer cells. They used four cell lines: human fetal osteoblast (hFOB 1.19), human bone osteosarcoma (143B), human osteosarcoma cell line (MG-63), and pancreatic duct (hTERT-HPNE). The gold nanoparticle shapes investigated were nanosphere, nanostar, and nanorod. Nanoparticle morphology was characterized using SEM and TEM microscopes. The average length of nanoparticles of a nanosphere shape was 14 nm, while the size of the nanostars was about 200 nm. Nanorods had an average length of 45 nm and a diameter of 16 nm. The cell lines investigated were incubated with different AuNP shapes for 24 h. Cell viability was tested using the MTT test to measure the cellular activity of NADPHdependent oxidoreductase, as reduced cell life span may be associated with the process of apoptosis. In order to assess the ability of live undamaged cells to collect dye in lysosomes, neutral red (NR) assay was used, as this test also allows for measuring the integrity of the cell membrane. Concentrations of nanoparticles of 0.3, 0.6, 1.2, 2.5, and 5 µg/mL were used for both tests. A comparison of tests determining cell viability showed that nanostars had the most significant impact on reducing cell lifespan. In this shape, the survival rate of the cell lines investigated decreased as the concentration increased. The highest susceptibility to toxicity caused by nanostars was attributed to the 143B cell line cells. However, the NR test did not show similar activity for nanostars at 0.3 µg/mL concentration. Nanorods showed significant toxicity at higher concentrations (2.5 and 5 µg/mL), especially with MG63 and 143B cell lines. However, the NR test results showed a lower effect on cell survival at the same concentrations of these nanoparticles. Nanospheres exerted the smallest impact on cell line survival. Despite that, a slight reduction in cell life of 143B cells was shown in the MTT test. Moreover, the results of both tests point to toxicity mediated by AuNPs via alterations in mitochondrial activity (MTT) and integrity of the cellular membrane (NR). Owing to these toxicity results, nanostars and nanorods were further analyzed in relation to their effect on the levels of apoptotic proteins. The NR test showed that hFOB1.19 cells were the most resistant to nanoparticles, prompting the use of 2 remaining cell lines (MG63 and 143B) for further study. The concentrations of a proapoptotic protein (Bax) and antiapoptotic protein (Bcl-2) were determined using the Western blot method. The concentrations used were 1 and 2 µg/mL for nanorods and 0.1, 0.3, 0.6, and 1 µg/mL for nanostars. Both 143B and MG63 cell lines showed an increase in proapoptotic protein levels in the presence of nanorods. Concerning the antiapoptotic protein, a decrease in Bcl-2 levels was observed only in MG63 cells. For the 143B cell line, an increase in Bcl-2 at 1 µg/mL and a decrease at 2 µg/mL was demonstrated. In turn, nanostars increased Bax levels as their concentrations increased for both cell lines. They also caused a decrease in Bcl-2 expression in both 143B and MG63 cells. The most significant changes in Bax and Bcl-2 protein levels were observed at AuNP concentration of 1 µg/mL. The morphology of the hTERT-HPNE cell line was analyzed in a TEM. Nanostars and nanorods were shown to penetrate cells and cause changes in their ultrastructure. 
Nanostars were used at a concentration of 10 µg/mL, resulting in intensive vacuolization of the cytoplasm, with an especially prominent appearance of autophagic vacuoles. At a concentration of 50 µg/mL, cell damage occurred, consisting of cell membrane rupture, cytoplasmic vacuolization, and cell degeneration. In turn, nanorods at a concentration of 10 µg/mL localized outside the cell, along the cell membrane. As a result of endocytosis, they were also observed in endosomes. At this concentration, the cells showed an unchanged structure of the rough endoplasmic reticulum and the presence of numerous autophagosomes. At a higher concentration (50 µg/mL), cell degradation occurred, including cell membrane damage. Given the demonstrated relationship between the shape of the nanoparticles used and the resulting toxicity, this parameter should be considered when designing biomedical applications [99]. Summarizing, the size of the gold nanoparticles used is essential for their biokinetics. Smaller nanoparticles (approximately 10 nm) accumulate in many organs, e.g., the liver, spleen, kidneys, testicles, and lungs, as well as in the blood [66]. Smaller nanoparticles (<10 nm) show greater toxicity compared with larger ones, probably due to their ability to penetrate the cell nucleus [68]. However, renal filtration is impaired for larger gold nanoparticles (>65 nm), resulting in a lack of urinary excretion. Instead, they are eliminated from the blood by the reticuloendothelial system and tend to accumulate in the spleen and liver [66]. Comparison of Silver and Gold Nanoparticle Toxicity Several research groups have evaluated silver (NP dispersion) and gold nanoparticle (colloidal solution) toxicity in comparative experiments. Results of one such study, conducted on a mouse model, were published by Shrivastwa et al. in 2015. The study was based on male Swiss albino mice, 25-30 g in weight. Blood and tissues from the brain, liver, kidneys, and spleen were used for the experiment. The nanoparticles analyzed were silver and gold, with a size of 20 nm, at concentrations of 1 and 2 µM/kg. AgNP and AuNP were administered to the mice internally for 14 days. The amount of reactive oxygen species in treated mice was examined through analysis of the fluorescence level of the DCFH-DA dye in blood and mouse tissues. GPx (glutathione peroxidase) and GST (glutathione-S-transferase) were used to assess the level of antioxidant enzymes in the blood and tissues. Within the tissues, the GSH:GSSG ratio (reduced glutathione : oxidized glutathione) was evaluated, while total glutathione levels were analyzed in blood. Furthermore, the level of inflammation in soft tissues was assessed using IL-6 (interleukin-6). Concerning the results of the analysis, significant weight loss was observed in mice exposed to nanoparticles, especially at the higher dose (2 µM). Moreover, AgNP seemed to cause more substantial weight loss than AuNP. The intensity of fluorescence increased after incubation of blood with nanoparticles at both doses compared with the control, with the largest difference noted for AgNP at the 2 µM dose. Concerning the blood analysis, AgNPs at a dose of 2 µM had the strongest effect, inhibiting the analyzed enzymes to a greater extent than AuNPs. The effect of gold and silver nanoparticles on soft tissues is presented in Table 4. GPx (glutathione peroxidase) activity decreased in the brain, especially at a concentration of 2 µM AgNP. In contrast, its level increased significantly in the kidneys at the same dose.
An increase in value was also observed at a concentration of 1 µM for both types of NP. In the liver, gold had a stimulating effect on GPx levels, especially at a dose of 1 µM, while silver inhibited this enzyme at the same amount. Concerning GST levels in the brain, one µM AgNP resulted in the most significant increase of this protein. In the liver and kidneys, all NPs acted inhibitory, with the most considerable effect attributed to AgNPs at 2 µM. In turn, in the spleen, the largest inhibitory potential was presented by 2 µM AgNPs. An increase in the fluorescence intensity of the DCFH-DA dye was noted in blood and all soft tissues incubated with AgNPs and AuNPs. In the brain, liver, kidneys, and spleen, 2 µM levels had the most promoting effects, especially concerning silver nanoparticles. Furthermore, a significant increase in IL-6 levels was observed in both types of nanoparticles, especially at a dose of 2 µM, compared to the control sample. The application of nanoparticles also significantly affected toxicity. This process manifested through inflammation and increased ROS release resulting from oxidative stress. More pronounced adverse effects were attributed to silver nanoparticles, especially at higher analyzed doses [75]. In another study, Barkur et al. (2020) studied oxidative stress caused by silver and gold nanoparticles on human red blood cells (RBC) using Raman spectroscopy. The cells were incubated for 24 and 48 h with 50 nm silver and gold nanoparticles. Moreover, thiol levels were studied through absorbance measurements. Raman spectroscopy showed an adverse effect of increasing concentrations of nanoparticles on the binding of oxygen to hemoglobin. After 24 h of incubation, minimal spectral changes were observed. RBC treated with 100 µL of silver nanoparticles ruptured after 48 h of incubation, making it impossible to perform spectroscopy. Using 50 µL of NPs, more significant spectral fluctuations were shown in cells incubated with silver than gold nanoparticles. The Raman spectra showed more variability with AgNPs of 30 nm, compared to larger sizes (50, 80 and 100 nm). In addition, for AuNPs, the highest and lowest spectral variability levels appeared at 30 and 10 nm sizes, respectively. This indicates a more harmful effect of AgNPs on the affinity of hemoglobin in erythrocytes for binding oxygen. Due to their antioxidant properties, examination of thiol levels in cells allowed for evaluation of oxidative stress in the presence of nanoparticles. Significant changes in thiol values indicate that such a state can result from NP administration, with more reduced thiol content observed for silver nanoparticles, indicating greater levels of oxidative stress. In addition, changes in hemoglobin structure resulting from incubation with NPs were demonstrated. This occurrence was related to the possibility of nanoparticle adherence to the cell membrane of erythrocytes, causing oxidative stress. Adverse effects of nanoparticles on hemoglobin oxygen-binding ability were also observed. Such processes could potentially cause negative side effects of metal nanoparticle penetration into the human body [100]. The main properties and applications of silver and gold nanoparticles are shown in Figure 3. The antimicrobial activity of nanoparticles is related to their electrostatic interaction with negatively charged cell surfaces, which improves their ability to penetrate cellular membranes. 
This process leads to the damage of biological membranes, coagulation of proteins, stimulation of ROS production, and ultimately to the reduction of microbial viability [10,23,[101][102][103]. The antimicrobial activity of silver nanoparticles occurs through their attachment to the cell wall and penetration to the cytoplasm. There, the released silver ions interact with proteins at the amino acid level, disrupt electron transport in the respiratory chain and inhibit DNA replication. Due to the release of silver ions and alteration in the cell surface structure, cellular enzymes are deactivated, resulting in ROS production. These changes induce toxicity and ultimately lead to microbial death [10,23,30,101,102]. In turn, the antifungal activity of AgNPs is associated with disruption of membrane potential due to interaction between AgNPs and cell membrane of fungi, e.g., Candida albicans, Trichophyton rubrum, Stachybotrys chartarum, and Mortierella alpina. AgNPs induce the formation of perforations on the cell membrane surface, leading to osmotic shock and ultimately fungal death [101]. Finally, the antiviral activity of AgNPs occurs via the inhibition of virus attachment to the cell surface. Nanoparticles cause denaturation of disulfide bridge, affecting associated modifications of viral proteins [101]. The antimicrobial effect of gold nanoparticles is related to their attachment to the bacterial cell wall, resulting in the creation of pores and penetration into the intracellular space. There, they disrupt the metabolic processes, bind to DNA and inhibit the transcription process, and distort the ribosome units for tRNA binding. These effects ultimately result in a breakdown in the biotic mechanism of the bacteria [10,23,101,102]. Antifungal activity of gold nanoparticles has been confirmed in, among others, Candida albicans [101,103]. Furthermore, AuNPs also demonstrated antiviral activity by inactivating the virus via destruction of its capsid and inhibition of viral entry into cells [101]. The large surface-to-volume ratio of silver and gold nanoparticles enables high absorption of various molecules, e.g., polymers and therapeutic agents, facilitating their common use in biomedical fields [10,102]. Furthermore, a large surface area also improves the interaction of nanoparticles with bacterial cells [23]. The unique optical properties of gold and silver nanoparticles occur due to the excitation of localized resonance of surface plasmons, with greater excitation occurring on silver NPs, compared to their gold counterparts [10,102,104]. The Blood-Brain Barrier Due to the increasing use of nanoparticles, the possibility of them crossing the bloodbrain barrier (BBB) remains a significant concern. This process was confirmed, e.g., by Tang et al., in an experiment based on rat brain microvessel vascular endothelial cells (BMVEC) and astrocytes (AC) incubated with spherical AgNP (100 µg/mL). Under culture conditions, nanoparticles were observed to localize inside endothelial cells. NPs were described to enter the cells to transcytosis, a process that also allows them to infiltrate other tissues of the organism. Further in vitro studies have also confirmed the possibility of AgNPs crossing the blood-brain barrier and have identified potentially harmful effects of AgNP on brain tissue and their interaction with cellular organelles such as mitochondria and the rough endoplasmic reticulum [12,105]. Hence, due to their ability to cross the BBB, spherical AgNPs are considered as a potential neurotoxin. 
The increased transport of fluorescein across the BBB indicates an NP size-dependent increase in BBB permeability, correlated with the severity of immunotoxicity. AgNPs have been shown to induce an increased release of pro-inflammatory mediators, i.e., tumor necrosis factor (TNF-α), interleukins (e.g., IL-1β), and prostaglandin E2 (PGE2), at an AgNP concentration of 50 µg/cm³; these mediators are associated with increased rBMEC (rat brain microvascular endothelial cell) monolayer permeability and may cause BBB dysfunction. The permeability of primary rBMEC monolayers was evaluated as the percentage of fluorescein flux across the rBMEC monolayers; AgNP was used at a concentration of 15 µg/cm³. Furthermore, systemic exposure to AgNPs, depending on their size, can result in microvascular damage to the brain. It has been reported that smaller (25 nm) AgNPs produce a more robust inflammatory response, correlated with increased brain microvascular permeability and cell monolayer perforation, compared with larger AgNPs (40 and 80 nm) [88]. Feeding mice with 34 nm silver nanoparticles for two months significantly increased the concentration of AgNP in the brain compared to one-day exposure. The effect of month-long distilled water administration on the AgNP concentration was also investigated, showing that the level of nanoparticles remained at about 6%. This may be related to the mechanisms of endocytosis and exocytosis across the blood-brain barrier and confirms the accumulation of silver nanoparticles in the brain. Therefore, low levels of AgNP removal from the brain may be a risk to patients treated with products containing NPs, potentially resulting in an effect opposite to that expected during long-term use [94,106]. Moreover, an experiment studying the effects of long-term oral administration of AgNPs confirmed continuous accumulation of silver nanoparticles in the mouse brain for exposures lasting up to 4 months. The accumulation of 34 ± 1.4 nm AgNPs at a concentration of 25 µg/mL in the brains of experimental animals was significantly higher after 4-month vs. 2-month administration. Slow removal of AgNP from the brain was also observed after discontinuation of NP intake. After specific administration periods, the amount of silver accumulated in brain tissues was determined by neutron activation analysis (NAA) and compared with Morris water maze (MWM) behavioral test results. The presence or absence of AgNP in the body did not have an apparent effect on memory: differences in the dynamics and ranges of the parameters described above were found in both the experimental and control subgroups. However, there is a possibility of the results being affected by the memory effect, as the animals, after testing in the MWM, could have retained information about the research area, such as the structure of its internal space and the location of external cues [107]. Furthermore, Lasagna-Reeves et al. (2010) confirmed the ability of 12.5 nm colloidal AuNPs of regular shape to cross the blood-brain barrier in mice. Given the relatively constant level of gold in the blood after AuNP administration at different doses, the increased accumulation of gold in the brain suggests uptake from the blood into the brain. An increase in AuNP accumulation with dosage supports the possibility of using these nanoparticles for targeted therapy in the brain without detectable toxicity. These properties emphasize the potential application of gold nanoparticles to treat and diagnose neurodegenerative disorders [96].
Nanoparticles in the brain were also observed in mice administered 4.5 nm gold nanoparticles intravenously for three months. Nonetheless, AuPEG accumulation in the brain was much higher than with AuTrolox. Furthermore, morphological changes were studied through HE staining. In the AuPEG group, some hippocampal neurons were condensed and darkly stained in the CA1, CA3, and hilar regions, indicating their damage and loss. However, cells incubated with AuTrolox were rarely stained and had visible nuclei, implying a Trolox-mediated decrease in AuNP toxicity. Next, immunohistochemical staining was performed using a primary antibody against NeuN. This protein is localized in the nuclei and perinuclear cytoplasm of most central nervous system neurons in mammals and is considered a reliable indicator of post-mitotic neurons. The intensity of hematoxylin staining was studied using a tissue cytometer (TissueFAXSplus), showing decreased staining in the CA1, CA3, and hilar regions at both 12.5 and 25 mg/kg AuNPs. With the addition of Trolox, an increase in NeuN expression and inhibition of apoptosis were observed compared with the AuPEG group. After isolating and homogenizing the hippocampus, LDH (lactate dehydrogenase) levels were investigated. After treatment with AuPEG at a dose of 25 mg/kg, a two-fold increase in LDH levels was observed, indicating that AuPEG may cause cell death [72]. Conclusions The history of the use of silver and gold nanoparticles began in antiquity. Ancient Greeks used silver-coated dishes to store wine for longer times [108]; this metal was also used to clean wounds and treat infections [109]. In addition, gold was used in medical treatment, e.g., of smallpox, skin ulcers, and measles [108]. Due to the development of nanotechnology and the use of nanoparticles, it is necessary to study their effects on cells. Many parameters of NPs influence toxicity, including size, shape, concentration, type of coating, and incubation time [33,106]. It is worth noting that the response to NPs also depends on the cell type. Hence, to increase the safety of their use, this aspect needs to be considered, especially in the context of biomedical applications [106]. Differences in physicochemical variables make the assessment of nanoparticle toxicity a relatively complex process. Even minor modifications to the surface coating can affect biodistribution in the body, e.g., through differences in the uptake of nanoparticles by macrophages, the level of accumulation in different organs, or the rate of removal from the organism [33,110]. The effect of an increase in the concentration of silver nanoparticles leading to apoptosis is shown in Figure 2. AgNP toxicity is most likely associated with their surface oxidation and the release of silver ions, which cause biochemical changes, abnormalities in cell functions, and neurotoxic modifications [106,111]. The nanotoxicity mechanism relies on the production of ROS, including singlet oxygen, superoxide radical ions, oxide radicals, superoxide ions, hydrogen peroxide, and hydroxyl radicals. ROS generation processes involve mitochondrial respiration and the subsequent release of ROS into the cytoplasm through pores formed by nanoparticles in mitochondrial membranes. In normal cells, a balance is maintained between intracellular antioxidants and ROS.
However, nanoparticles can directly damage the mitochondria, causing an increase in intracellular ROS, which may stimulate the further release of ROS from mitochondria in a process known as ROS-induced ROS release. This process can significantly increase intracellular ROS levels and exacerbate oxidative imbalances. High levels of ROS can result in oxidative stress and damage to cellular organelles, DNA, cell membranes, ion channels, and cell surface receptors, leading to toxicity [33,112]. AgNPs have a higher potential for toxicity compared to AuNPs. Therefore, an increase in the share of the use of AuNP was observed, especially in functionalized, therapeutic, and diagnostic methods [106]. Given the primary use of AgNP in clothing and skin surface disinfectants, the retention of silver in the stratum corneum due to aggregation can be beneficial. It could be considered a reservoir of silver ions that may promote and prolong the antibacterial effect. Since the aggregates are several µm in size, they will not penetrate deeper skin layers and will eventually be removed from the stratum corneum by exfoliation. Hence, the formation of aggregates can be seen as a mechanism of detoxification due to the fact that only a rudimental amount of silver reaches the systemic circulation [62]. Gold nanoparticles, characterized by increased biocompatibility, stability, and low toxicity, are considered one of the most suitable carrier systems for medicines. AuNPs functionalized by PEG, due to their ability to bind to cell membranes, have an increased ability to penetrate target cells. Furthermore, fluorescent dye coating allows for monitoring of their movement [113]. Increased use of hazardous chemicals used in the production of nanoparticles could become a serious cause of environmental degradation. Hence, the use of nanoparticles of plant origin could aid in the reduction of harmful chemicals in NP synthesis, as research indicates the possibility of their therapeutic use. They were described to manifest anticancer potential in the treatment of lung, liver, and cervical cancer, and were proposed for use in antidiabetic drugs, acting as an inhibitor of α-amylase. The green synthesized nanoparticles were proven to prevent the development of microbes and have therefore been used in disinfectants and as an antimicrobial coating on medical devices such as catheters. However, the mechanisms of action of these nanoparticles are not yet fully understood. While additional research is needed, the results of past experiments provide an optimistic perspective of a potential increase in the use of plant-based nanoparticles in industry and medicine [109,114]. The emerging discrepancies between literature data on the correlation between the physicochemical properties of nanoparticles such as size, shape, type of functionalized surface, and induced toxicity are the inspiration for further scientific research [111]. It is also essential to more broadly investigate the correlation between the physicochemical properties of nanoparticles and their biodistribution in the organism, as it could help assess the risk of their use in humans [115].
Real Time Dynamics of Hole Propagation in Strongly Correlated Conjugated Molecular Chains: A time-dependent DMRG Study In this paper, we address the role of electron-electron interactions on the velocities of spin and charge transport in one-dimensional systems typified by conjugated polymers. We employ the Hubbard model to model electron-electron interactions. The recently developed technique of time dependent Density Matrix Renormalization Group (tdDMRG) is used to follow the spin and charge evolution in an initial wavepacket described by a hole doped in the ground state of the neutral system. We find that the charge and spin velocities are different in the presence of correlations and are in accordance with results from earlier studies; the charge and spin move together in the noninteracting picture while interaction slows down only the spin velocity. We also note that dimerization of the chain only weakly affects these velocities. Introduction Low-dimensional many-body systems have always been the focus of theoretical and experimental interest. The physics of these systems is quite different from those of three (3D) systems. For example, these materials show the phenomena of spin-charge separation, wherein the spin and charge degrees of freedom of the electron get decoupled and evolve independently of each other with different velocities. These materials find wide scale applications in the field of molecular electronics (spintronics). Amongst low-dimensional materials, the π-conjugated polymers have attracted a lot of interest, being potential candidates for various molecular electronics and spintronics applications; examples include the organic light emitting diodes (OLEDS), organic semiconductors, organic thin film transistors, etc. [1,2,3,4]. However, spin and charge transport in these systems is still not well established because of the strong electron-electron correlations that exist in these systems. Transport in these materials is strictly a non-equilibrium phenomena to understand which, one needs to investigate the time evolution of strongly interacting quantum many body systems. Recently, there has been a considerable progress in investigation of non-equilibrium time evolution of many body systems. Analytical approaches like the perturbative Keldysh formalism [5], is restricted to a few integrable models only, but in the case of low-dimensional systems, efficient numerically accurate techniques have been developed and successfully applied to a variety of models. One such efficient method which has gained tremendous impetus in recent years, is the time-dependent Density Matrix Renormalization Group technique (tdDMRG) [6,7,8,9,10,11]. In this paper, we use tdDMRG to address the effects of (1) electron-electron correlations and, (2) dimerization, on the charge (spin) transport in quasi 1-dimensional strongly correlated polyene chains. Moreover, we also look into the dynamics of spin-charge separation in these systems. To address the above questions, we focus our attention on the real time quantum dynamics of an hole with up spin injected at site-1 of polyene chains. Model Hamiltonians and Parameters We have modeled the π-conjugated chains using three model Hamiltonians: (a) the Tight-Binding (TB) Hamiltonian, known as Hückel model to chemists [12,13], (b) the single-band Hubbard model [14,15,16], and (c) Pariser-Parr-Pople (PPP) model [17,18]. 
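The second-quantized forms of these three Hamiltonians did not survive the text extraction. As a reference for the term-by-term description that follows, their standard forms (a reconstruction consistent with the definitions given in the next paragraph, not necessarily the exact notation of the original) can be written as

\[
H_{\mathrm{TB}} = -\sum_{i,\sigma} t_{i,i+1}\,\bigl(a^{\dagger}_{i\sigma} a_{i+1,\sigma} + \mathrm{h.c.}\bigr),
\]
\[
H_{\mathrm{Hubbard}} = H_{\mathrm{TB}} + U \sum_{i} n_{i\uparrow}\, n_{i\downarrow},
\]
\[
H_{\mathrm{PPP}} = H_{\mathrm{Hubbard}} + \sum_{i>j} V_{ij}\,\bigl(n_{i} - z_{i}\bigr)\bigl(n_{j} - z_{j}\bigr),
\]

where n_i = n_i↑ + n_i↓ and the overall sign of the hopping term is a matter of convention.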
Amongst these, (2) and (3) are interacting Hamiltonians that include explicit electron correlations and (3) is a realistic model Hamiltonian used for describing π-conjugated polymers. In second quantized formalism, these three model Hamiltonians can be expressed as given below [19]: Here, a † iσ (a iσ ) creates (annihilates) an electron at site-i of the polyene chain, t 0 is the nearestneighbour (nn) hopping integral for an undimerized chain, and h.c refers to the hermitian conjugate. In the case of a dimerized polyene chain, the nn hopping integral is given by, where δ is the dimerization parameter. For the present study, we have taken δ = 0.07, so that the nn hoping term for long and short bonds are respectively given by, t long−bond = 1.07 and t short−bond = 0.93, t 0 = 1.0 for the Hubbard model. n i↑ (n i↓ ) are the number density of upspin (downspin) electrons at site-i of the polyene chain. The Hubbard model is characterized by U , the Hubbard parameter, which represents on-site Coulomb repulsion between two electrons of opposite spins occupying the same site of the polyene chain. For homogenous systems, this parameter is same for all sites. U is measured in terms of t 0 , and the parameter, (U /t 0 ) characterizes the π-electronic motion in single band systems. In our study, we have taken U /t 0 = 0.0 (the Hückel model), 2.0, 4.0, 6.0 and 10.0. In the PPP model, the V i j is the inter-site Coulomb repulsion between two different sites, i and j, of the polyene chain. In keeping with the spirit of phenomenology associated with the PPP Hamiltonian, the inter-site electron repulsion integrals, V i j are interpolated smoothly between U for zero separation and e 2 r 12 for the inter-site separation tending to infinity; thus, the explicit evaluation of the repulsion integrals is avoided. There are two widely used interpolation schemes used to evaluate V i j , the Ohno scheme [20], and the Mataga-Nishimoto scheme [21]. In the Ohno interpolation scheme which we use, the inter-site electron repulsion integrals, V i j are given by, The Mataga-Nishimoto formula is given by, The Ohno interpolation formula decays more rapidly than that of Mataga-Nishimoto scheme. In both the above interpolation schemes, it is assumed that r i j is inÅ, while t 0 , U , and V i j are in measured in ev. z i is the chemical potential of site-i; the function of z i is to keep the i th -carbon atom neutral when singly occupied. Time-dependent DMRG -Xiang's Algorithm For carrying out quantum dynamics of the an up spin hole injected at site-1 of the polyene chain, we first create the necessary initial state, by annihilating an up spin electron from site-1 from the ground state of the π-conjugated chain. Mathematically, this amounts to the following: Here, | ψ(0) is the desired initial state and | ψ GS is the ground state of the neutral polyene chain. Using ψ(0), we numerically solve the time-dependent Schrödinger equation (TDSE) which is given by, where H is any of the three time-independent model Hamiltonians discussed in sec. II. The above equation has the formal solution, Numerically, given a small time step ∆t, H and | ψ(0) , the TDSE can be solved by expanding the exponential function in equation (8), to various orders of (H ∆t). The simplest of this is the Euler (EU) scheme as given below: This equation is then repeatedly used to obtain to propagate the initial wave-packet. This scheme is an explicit one without requiring any matrix inversion. 
However, it suffers from two serious drawbacks: (1) it is non-unitarity, and (2) there is an instability due to the lack of time inversion symmetry (t −→ -t). To avoid these problems, the TDSE is solved using the implicit Crank-Nicholson (CN) scheme [22] in which the exponential function is approximated by the Caley transform The CN scheme is unitary, unconditionally stable, and accurate up to (H∆t) 3 . However, this scheme also has a serious limitation, namely: Each time evolution step requires a matrix inversion, which for large systems and with the increase of dimensionality, requires huge memory and CPU time, making this method prohibitive. Hence, there has been a surge towards the development of explicit, stable integration schemes. The first of these is a symmetrized version of the EU scheme, called the second order differencing scheme (MSD2) [23]. This scheme is symmetric in time as seen below, is conditionally stable, and accurate upto (H∆t) 2 . The MSD2 scheme can be extended to higher order accuracy forms, which are collectively called the multistep differencing (MSD) schemes [24], for example, the fourth and sixth order MSD (MSD4 and MSD6) which can given as below (equations (12) and,(13)): These higher order schemes are explicit and conditionally stable, for example, MSD4 is stable if and only ∆t < 0.4, while for MSD6, stability exists for ∆t < 0.1. Predictor-Corrector (PC) techniques are another class of ordinary differential equation solvers. For our present studies, we have developed a PC scheme of our own, which we call the MSD4-AM4 method. In this, we use the explicit MSD4 (equation (12)) scheme as the predictor, and fourth order implicit Adams-Moultan method as the corrector (equation (14)). We found this scheme to be very robust, and as efficient as the CN method; moreover, this PC technique is much faster and less memory consuming compared to the CN scheme [25]. So far we have discussed about model Hamiltonians, preparing the initial state, and time evolution of this state by solving the TDSE numerically. For obtaining the initial and ground states of the polyene chains, we use tdDMRG as given by Xiang and coworkers [26]. However, before discussing this technique, we'll briefly discuss the conventional infinite system DMRG method [6,7] as proposed by White and others. The basic idea of the DMRG method is to divide a given finite system into two parts, namely, system and surrounding, followed by retaining only the m most highly weighted eigenstates of the reduced density matrix of these partial "systems" [27]. Using these reduced density matrices, one or more pure states of this total system is obtained. In case of the infinite system DMRG algorithm, the system size is increased in units of "two sites" (see fig. 1) [6,7,8]. So far, all tdDMRG schemes can be categorized into three classes: (1) Static time-dependent DMRG, (2) Dynamic time-dependent DMRG, and (3) Adaptive time-dependent DMRG method [9]. The static tdDMRG method was first introduced by Cazalilla and Marston [28], who exploited this technique for investigating time-dependent quantum many-body effects. They studied a time-dependent Hamiltonian, H(t) = H(0) + V(t), where V(t) represents the time-dependent part of the Hamiltonian. Initially, infinite system DMRG method was used for constructing a lattice of desired size keeping a substantially large number of reduced density matrix eigenstates (m). 
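Before continuing with the static tdDMRG description, the two integration schemes compared above can be made concrete with a minimal single-particle sketch (ħ = 1; a toy 4-site tight-binding matrix stands in for the many-body superblock Hamiltonian, and all names are illustrative rather than the authors' code):

import numpy as np

def cn_step(H, psi, dt):
    """One Crank-Nicolson step: solve (1 + i*H*dt/2) psi_new = (1 - i*H*dt/2) psi.
    Unitary and unconditionally stable, but needs a linear solve per step."""
    I = np.eye(H.shape[0])
    return np.linalg.solve(I + 0.5j * dt * H, (I - 0.5j * dt * H) @ psi)

def msd2_step(H, psi_now, psi_prev, dt):
    """One second-order-differencing (MSD2) step:
    psi(t+dt) = psi(t-dt) - 2i*dt*H psi(t). Explicit, time-symmetric, conditionally stable."""
    return psi_prev - 2j * dt * (H @ psi_now)

# toy example: 4-site tight-binding chain, wavepacket started on site 1
H = -1.0 * (np.diag(np.ones(3), 1) + np.diag(np.ones(3), -1))
psi0 = np.zeros(4, dtype=complex); psi0[0] = 1.0
dt = 0.01
psi_prev, psi_now = psi0, cn_step(H, psi0, dt)   # bootstrap MSD2 with one CN step
for _ in range(1000):
    psi_prev, psi_now = psi_now, msd2_step(H, psi_now, psi_prev, dt)
print(np.abs(psi_now) ** 2)  # site-resolved density after propagation

The CN update is unitary but costs a linear solve per step, whereas the explicit MSD-type updates need only a matrix-vector product, which is why the explicit and predictor-corrector schemes are attractive inside DMRG sweeps.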
Time evolution of this final lattice system is then carried out using the time-dependent Hamiltonian, H e f f (t), which is given by, where H e f f (0) is the final superblock Hamiltonian approximating H(0), and V e f f (t) is an approximation of V(t), and is built using the representations of operators in the final block bases. The basic idea of this method is to fix the reduced Hilbert space at its optimal value at time t = 0, and then, projecting all wavefunctions and operators on to it. In other words, the effective Hamiltonian which has been obtained by targeting the ground state of the t = 0 Hamiltonian is capable of representing adequately the time-dependent states that will be reached at later times. The major disadvantage of this scheme is that it fails completely for long time evolution as there is a significant loss of information due to the 'final superblock truncation'. Moreover, the number of DMRG states, m, grows with the simulation time as they need to incorporate a constantly increasing number of nonequilibrium states. To overcome this, in 2003, Luo, Xiang and Wang [26] came up with a targeting method, which is called the Dynamic tdDMRG or the LXW method, and will be utilized by us, for the present study. We will however, not discuss the Adaptive tdDMRG scheme. Interested readers can refer the relevant articles [9,10,11,29,30,31]. The algorithm, as implemented by us, is given in details below: (1) The Hamiltonian of a small, exactly diagonalizable superblock(SB) of L (= 4) sites, H SB L=4 , is first constructed. (2) The ground state, ψ 4 gs , of this 4-site SB is obtained by exact diagonalization of H SB L=4 . Using ψ 4 gs , a desired initial state ψ 4 0 of interest is prepared. Exact time evolution of this initial state is then carried out from t = 0, to t = N steps , by solving the TDSE numerically, using a convenient integration scheme. (In our case, we use our own MSD4-AM4 scheme). At the end of the time evolution, a set of time-dependent wavefunctions are obtained, {ψ(t i ) : t i ∀ ∈ (0, N steps )}. (3) Using this set of time-dependent wavefunctions, the reduced density matrices for the left-(ρ l ) and right-half (ρ r ) blocks for the next SB is build using LXW prescription [26]. Mathematically, Here, | ψ(t i ) is called the ith-target state and ω i is its corresponding weight in the half-block reduced density matrix. In the original LXW method, in building of ρ l and ρ r , only ψ(0) and ψ(t i ) ∀ i ∈ (1, N steps ) are included. However, in our case, we have two systems at hand: neutral polyene, and "cationic" polyene, having +1 charge on it. We found that in the case of homogeneous systems, devoid of heteroatoms, it is un-important whether we keep the "ionic" ground state for building of the density matrices. However, for systems with heteroatoms, including the "ionic" ground state is essential for building the density matrices. Furthermore, we have also found that by comparing tDMRG results with exact results, for small chains, ω ion < ω 0 is required. Hence, the above pair of equations get modified as, Here, O L is a (4m × m) matrix whose columns contain m highest eigenvectors of ρ l (ρ r ), and A l+1 is an operator in the system block (left-, or right-half block). (6) A new SB of size (L+2) is formed, usingH l+1 , two newly added sites, andH r+1 . (7) The steps from (2) to (6) are repeated to iteratively increase the SB size by two sites at a time. 
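A minimal sketch of the weighted density-matrix construction used in steps (3)-(5) above, assuming each superblock state is stored as a plain vector and reshaped over a system-environment bipartition (the function and variable names are illustrative only, not the authors' implementation):

import numpy as np

def half_block_density_matrix(target_states, weights, dim_sys, dim_env):
    """rho_l = sum_i w_i * Tr_env |psi(t_i)><psi(t_i)|, built from the set of
    time-evolved target states and their LXW weights."""
    rho = np.zeros((dim_sys, dim_sys), dtype=complex)
    for psi, w in zip(target_states, weights):
        M = np.asarray(psi).reshape(dim_sys, dim_env)  # system x environment split
        rho += w * (M @ M.conj().T)
    return rho / np.sum(weights)

def truncation_operator(rho, m):
    """Columns are the m most heavily weighted eigenvectors of rho
    (the (4m x m) matrix O_L used to renormalize the block operators)."""
    vals, vecs = np.linalg.eigh(rho)
    keep = np.argsort(vals)[::-1][:m]
    return vecs[:, keep]

# renormalization of a block operator A: A_new = O^dagger A O, i.e. O.conj().T @ A @ O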
Apart from the "ionic ground state" correction to the original LXW Algorithm, we have also introduced another modification, which we call the "n-slot" modification. This modification basically means that instead of storing all the time-dependent non-equilibrium wavefunctions, after every "n-th" time step, the wavefunction is stored for building the density matrix. We have studied "n" = 10, 100, 500 and 1000 cases. It is found that for getting correct results using LXW Dynamic tdDMRG technique, we need "n" > 25. The basic idea behind this method is that, the time-dependent wavefunctions for a SB of size L, explores the Hilbert space as much as possible, and transfers the information through the reduced density matrix, towards building Hilbert space of a SB of size, (L+2). However, this technique suffers from a major problem, namely, it needs large CPU times. Parallelizing this algorithm would mitigate this drawback. The dynamical variables that we study are charge (spin) densities at site-1, site-L of the polyene chain, along with charge (spin) velocities. These variables will be discussed in detail in the next section. Results and Discussion In the previous section, dynamical variables that are calculated in this paper, were mentioned. Here, we discuss these quantities in detail, along with our results. The charge (spin) density at the i th -site of a given polyene chain, at time τ is given by: We have calculated these two quantities at all sites of a given polyene chain. However, we focus our attention on n L (τ) ( S z L (τ)), "L" being the last site of the polyene chain. We have considered chains containing 10, 20, 30 and 40 C-atoms in the present study. Two different evolution times have been used namely, 33 fs (femtoseconds) and 10 fs. The dimension of the DMEV Basis is kept at an optimal value of 200 for the Hückel and Hubbard chains. Site-L of the polyene chain at time t = 0 has n L = 1.0, with equal probability for being occupied by either an "up (down) spin", thereby making S z L = 0.0. As time progresses, the injected hole propagates from the first to the last site, and this is represented by appearance of a minima in the time evolution profiles of both charge and spin densities. The time at which the 1st minima appears is therefore the time taken by the injected hole to reach the other of the chain. Hence, we focus our attention on this quantity throughout our studies. fig. 2 it is seen that with increasing chain length, the time taken by the injected hole to reach the end of the chain also increases. The velocity of the hole appears to be reasonably constant for systems of different length. Furthermore, dimerization appears to slightly decrease the "holevelocity" compared to the uniform chain. Careful examination of the time profiles of charge (spin) densities also reveal that they are identical, in features indicating that there is no spincharge separation. Dynamics in Hubbard Chains: Figs. 3, 4, and 5 gives the time evolution profiles of charge and spin densities at the last site of Hubbard chains for different chain lengths, and for several representative values of U, namely, U/t = 2.0, 4.0, and 6.0, respectively. It is observed that the charge and spin dynamics are no longer identical as was seen in case of Hückel chains clearly indicating spin-charge separation. Furthermore, as the magnitude of U increases, the extent of spin-charge decoupling also increases. In the literature, the decoupled spin and charge excitations are referred to as spinons, and holons [32,33]. 
For a given U/t, the "velocity" of the charge excitation (holon), as well as that of the spin excitation (spinon), seems to be weakly dependent on the chain length. It is, however, seen that at all chain lengths and nonzero U/t values, the "velocity" of the holon is higher than that of the spinon. If one examines the plots carefully, another interesting observation can be made: in the correlated models, dimerization plays little or no role in influencing the "velocity" of the injected hole. Figs. 6 and 7 show the variation of n_L(τ) and S^z_L(τ) for regular and dimerized chains of a given length, for different values of U. Solid curves are for U = 2.0, dashed curves represent U = 4.0, while U = 6.0 is depicted by dotted curves. It is observed that for a fixed chain length, increasing U does not perceptibly affect the velocity of the holon, but the spinon's velocity is considerably altered. This is because, in the case of the 1-dimensional Hubbard model, analytical expressions are available for the holon (v_h) and spinon (v_s) velocities [34,35]; in these expressions, t and U are the nn hopping matrix element and the on-site Coulomb repulsion term, respectively, and n is the particle density (n ≤ 1). Clearly, the velocity of the holon does not depend on U, while that of the spinon decreases, as established also from our tdDMRG studies. Furthermore, as U → ∞, the spin velocity goes to zero. The holon moves by virtue of the hopping matrix element, while the effective spin-spin exchange, which is of the order of t^2/U, propagates the spinon. Thus, in the U → ∞ limit, only the holon propagates; the spinon does not "move" at all, v_s being zero. As the magnitude of U increases, the velocity of the spinon decreases.

Summary and Outlook

To summarize, we find from our tdDMRG calculations that when a hole with a desired spin is injected at one end of the π-conjugated backbone of a polyene chain, it propagates from one end of the chain to the other. The motion of this hole can be monitored by focusing our attention on the time evolution of the charge and spin densities at the two end-sites of the chain. In the absence of any external reservoirs (source-drain), the hole gets reflected back and forth across the length of the chain, showing oscillatory motion. In the absence of electron-electron correlations, the charge and spin degrees of freedom of the hole do not get decoupled. The time taken by the hole to travel across the whole polyene backbone increases with chain length, corresponding to an approximately constant velocity. It is seen that for dimerized chains, the velocity decreases further, because the velocity of the hole is determined by the smaller of the two hopping matrix elements, t_i,i+1 = t_0 (1 ± δ), where δ is the dimerization parameter and t_0 is the mean hopping matrix element. For Hubbard chains, where spin-charge separation occurs, the hole "breaks up" into two elementary excitations, one carrying only charge (the holon) and the other only spin (the spinon), both of which move with different velocities. It is found, in accordance with the earlier literature, that the holon moves faster than the spinon, and that with increasing U, although the velocity of the holon remains "almost" unaltered, that of the spinon significantly decreases. We are currently extending these studies to the PPP model and to polymer topologies involving phenyl rings.
Dietary Nitrate Supplementation Improves Exercise Tolerance by Reducing Muscle Fatigue and Perceptual Responses

The present study was designed to provide further insight into the mechanistic basis for the improved exercise tolerance following dietary nitrate supplementation. In a randomized, double-blind, crossover design, twelve recreationally active males completed a dynamic time-to-exhaustion test of the knee extensors after 5 days of consuming both nitrate-rich (NITRATE) and nitrate-depleted beetroot juice (PLACEBO). Participants who improved their time-to-exhaustion following NITRATE performed a time-matched trial corresponding to the PLACEBO exercise duration with another 5 days of dietary nitrate supplementation. This procedure was performed to obtain time-matched exercise trials with (NITRATEtm) and without dietary nitrate supplementation (PLACEBO). Neuromuscular tests were performed before and after each time-matched condition. Muscle fatigue was quantified as percentage change in maximal voluntary torque from pre- to post-exercise (ΔMVT). Changes in voluntary activation (ΔVA) and quadriceps twitch torque (ΔPS100) were used to quantify central and peripheral factors of muscle fatigue, respectively. Muscle oxygen saturation, quadriceps muscle activity as well as perceptual data (i.e., perception of effort and leg muscle pain) were recorded during exercise. Time-to-exhaustion was improved with NITRATE (12:41 ± 07:18 min) compared to PLACEBO (09:03 ± 04:18 min; P = 0.010). NITRATEtm resulted in both lower ΔMVT and ΔPS100 compared to PLACEBO (P = 0.002; P = 0.001, respectively). ΔVA was not different between conditions (P = 0.308). NITRATEtm resulted in reduced perception of effort and leg muscle pain. Our findings extend the mechanistic basis for the improved exercise tolerance by showing that dietary nitrate supplementation (i) attenuated the development of muscle fatigue by reducing the exercise-induced impairments in contractile muscle function; and (ii) lowered the perception of both effort and leg muscle pain during exercise.
INTRODUCTION

The capacity to maintain physical activity (i.e., exercise tolerance) is crucial for endurance athletes, but is at least as important for the general population, since it has been shown that poor aerobic fitness is linked to cardiovascular disease and overall mortality (Myers et al., 2002; Kodama et al., 2009). Exercise intolerance is known as a major symptom of various diseases [e.g., peripheral arterial disease (Leeper et al., 2013), chronic obstructive pulmonary disease (Pepin et al., 2007), or chronic heart failure (Mentzer and Auseon, 2012)], with detrimental consequences for quality of life (Belfer and Reardon, 2009). It is therefore not surprising that its underpinning mechanisms have been extensively investigated for more than a century (McKenna and Hargreaves, 2007), but are still highly debated (Marcora and Staiano, 2010). As a multifactorial phenomenon, exercise tolerance is determined by various physiological [e.g., cardiovascular, respiratory, metabolic, and neuromuscular mechanisms (McKenna and Hargreaves, 2007)] and psychological factors [e.g., external motivation, mental fatigue (McCormick et al., 2015)]. Given its broad significance, many efforts have been made to identify possible interventions to improve exercise tolerance by utilizing, e.g., nutritional (Matson and Tran, 1993), pharmacological, or psychological strategies (McCormick et al., 2015). Dietary nitrate supplementation, commonly administered in the form of beetroot juice, has been demonstrated as a promising approach to improve exercise tolerance at low and high intensities (Bailey et al., 2009, 2010; Lansley et al., 2011; Thompson et al., 2014). The ergogenic effect of dietary nitrate on exercise tolerance is attributed to the actions of the biological messenger nitric oxide (NO), since dietary nitrate supplementation is thought to be an effective method to elevate its bioavailability (Lundberg and Govoni, 2004). NO is known for its regulatory function in various physiological processes including vasodilation (Ignarro, 1989), angiogenesis (Papapetropoulos et al., 1997), mitochondrial respiration (Brown, 1995), and contractile function (Kobzik et al., 1994). Several studies have shown that the ergogenic effect of dietary nitrate on exercise tolerance is associated with a lower oxygen (O2) cost of submaximal exercise (Bailey et al., 2009; Larsen et al., 2010, 2011), which might be related to a more efficient mitochondrial adenosine triphosphate (ATP) synthesis (Larsen et al., 2011) and/or a more efficient ATP utilization during skeletal muscle work (Bailey et al., 2010). Moreover, increased dietary nitrate intake has been shown to improve vascular (Ferguson et al., 2013), metabolic (Bailey et al., 2010), and skeletal muscle function in response to exercise (Hernández et al., 2012).
Any of the physiological alterations associated with dietary nitrate are capable of modulating skeletal muscle function (Affourtit et al., 2015) and thus the development of muscle fatigue. Muscle fatigue [also referred to as 'performance fatigability' (Enoka and Duchateau, 2016)] is characterized by impairments in motor performance resulting from an exercise-induced decline in the force-generating capacity of the involved muscles and stems from a decrease in neural activation of muscles (traditionally termed 'central fatigue') and/or alterations at or distal to the neuromuscular junction that result in contractile dysfunction (traditionally termed 'peripheral fatigue') (Gandevia, 2001). The capacity of the neuromuscular system to generate the required power for the task is considered as critical factor of endurance performance (Burnley and Jones, 2018). However, the traditional assumption that exercise tolerance is exclusively limited by the inability to generate the power output required for the task despite maximal voluntary effort (also referred to as 'task failure') is highly debated (Enoka and Duchateau, 2016). Several authors suggest that endurance performance is rather regulated by a complex interplay of physiological and psychological factors (Marcora, 2008;Noakes, 2008;Venhorst et al., 2018). Particularly effort perception and muscle pain are considered as important factors that determine exercise tolerance (Noakes, 2008;Marcora and Staiano, 2010;Mauger, 2014). To the authors' knowledge, no study to date has investigated the impact of dietary nitrate supplementation on central and peripheral mechanisms of muscle fatigue or its impact on effort and muscle pain perception during submaximal endurance exercise. The present study was designed to provide further insight into the mechanistic basis for the improved exercise tolerance after dietary nitrate supplementation by investigating key-determinants of endurance performance. Therefore, we quantified exercise tolerance via the use of single-joint endurance exercise, which provides a suitable model to investigate the underlying mechanisms of endurance performance without cardiorespiratory limitations typically associated with wholebody endurance exercise (Andersen et al., 1985). First, by using a randomized, counterbalanced, double-blind, crossover design, participants completed a high-intensity time-to-exhaustion test of the knee extensors after 5 days of dietary nitrate and placebo supplementation. Second, those who improved their time-toexhaustion with dietary nitrate, performed a time-matched trial corresponding to the exercise duration of the placebo condition. The time-matched conditions were further examined to analyze the impact of dietary nitrate on (i) central and peripheral aspects of muscle fatigue; (ii) muscle O 2 saturation (SmO 2 ), (iii) electromyographic (EMG) activity, and (iv) perception of effort and leg muscle pain. To improve the validity of the present data, we also aimed to control for distinct factors (e.g., task motivation, trait and state fatigue), which are thought to affect both performance and perceptual measures during fatiguing exercise (Pageaux, 2014;Enoka and Duchateau, 2016). We hypothesized that muscle fatigue development is attenuated with dietary nitrate supplementation. 
Subjects An a priori sample size calculation was conducted based on the effect size of a previously published study investigating, amongst others, the impact of dietary nitrate on time-to-exhaustion during high-intensity knee extension exercise (Bailey et al., 2010). A two-sided significance level of 0.05 and a power of 0.95 indicated that 8 participants would be required. Based on the observation that not all participants improved their exercise tolerance following dietary nitrate supplementation (Wilkerson et al., 2012;Coggan et al., 2018), 14 recreationally active males were initially recruited to participate in the present study. Given the fact that two participants did not reach exhaustion within the predefined time window (25 min) of the fatiguing task, a total of 12 subjects (age: 27 ± 5 years; height: 183 ± 7 cm; body mass: 85 ± 9 kg; physical activity: 6 ± 3 h · wk −1 ) was considered for analysis. Taking into account that muscle fatigue is thought to depend on sex [for a review see (Hunter, 2014)], we chose a sample comprising exclusively male subjects. All participants were familiar with high-intensity exercise. Subjects were asked to abstain from (i) vigorous exercise, analgesics, caffeine, and alcohol consumption for 24 h prior to the laboratory visits as well as (ii) nitrate-rich foods [i.e., leafy green vegetables, beetroot, and processed meats (Hord et al., 2009)] and (iii) antibacterial mouthwash during the entire study period (Bondonno et al., 2015). Furthermore, participants were instructed to record their diet 24 h prior to the first laboratory visit and to repeat this for all subsequent visits. The study was approved by the university ethics committee and was conducted according to the Declaration of Helsinki. All participants gave written informed consent in accordance with the Declaration of Helsinki. Experimental Procedure Subjects visited the laboratory on at least three different occasions. During the first visit, participants were thoroughly familiarized with the following procedures: (i) one-leg dynamic exercise (OLDE) of the knee extensors (for more details, see One-leg dynamic exercise); (ii) neuromuscular tests comprising maximal voluntary contractions (MVC), and peripheral nerve stimulation as well as (iii) ratings of perceived effort and leg muscle pain. Furthermore, an OLDE incremental test was performed to determine peak power output. Using a randomized, counterbalanced, double-blind, crossover design, participants performed an OLDE timeto-exhaustion test of the knee extensors at 85% peak power output after 5 days of supplementation with dietary nitrate via beetroot juice (NITRATE; ∼6.5 mmol nitrate per 70 mL; Beet it, Heartbeet Ltd., Ipswich, United Kingdom) and nitratedepleted beetroot juice (PLACEBO; ∼0.04 mmol nitrate per 70 mL; Beet it, Heartbeet Ltd., Ipswich, United Kingdom), respectively. The duration of the supplementation period was chosen based on data from Bailey et al. (2010), who have shown that time-to-exhaustion during high-intensity knee extension exercise was significantly improved following 4-6 days of dietary nitrate supplementation. For each experimental condition, subjects were instructed to consume 70 mL beetroot juice every morning and 2 h prior to the laboratory visits. The second and third occasion was separated by 7 ± 1 days and took place at the same time of the day (±2 h). 
Participants who improved their time-to-exhaustion following NITRATE by at least 8% performed a time-matched trial corresponding to the PLACEBO exercise duration after a 10-day wash-out period (Larsen et al., 2007) and another 5 days of dietary nitrate supplementation. This procedure was performed in order to allow comparison of neuromuscular, oxygenation, and perceptual data between time-matched exercise trials with (NITRATEtm) and without dietary nitrate supplementation (PLACEBO). Time-to-exhaustion tests that did not differ by more than 8% between conditions were considered as time-matched. The worthwhile change for the time-to-exhaustion test was defined according to Pageaux et al. (2015), who reported a coefficient of variation of ∼8% for the intersession reliability of the OLDE protocol. Neuromuscular tests were performed before and immediately after exercise termination (<10 s) (Figure 1A). SmO2 and EMG data as well as ratings of perceived effort and leg muscle pain were continuously recorded during each experimental condition.

FIGURE 1 | (A) Illustration of the experimental design. A time-to-exhaustion test of the knee extensors was performed after 5 days of dietary nitrate (NITRATE) and PLACEBO supplementation. Neuromuscular function of the quadriceps muscle was assessed before and immediately after both the PLACEBO and time-matched dietary nitrate condition (NITRATEtm). Effort perception and leg muscle pain were recorded every min during exercise. Electromyography (EMG) and near-infrared spectroscopy (NIRS) data were continuously recorded during exercise. (B) The neuromuscular testing procedure comprised isometric MVC of the knee extensors combined with electrical stimulation to assess maximal voluntary torque (MVT), voluntary activation (via the interpolated twitch technique), and quadriceps twitch torques in response to paired electrical stimuli at 100 Hz (PS100) and at 10 Hz (PS10) as well as single stimuli (SS). A representative torque-time curve of the neuromuscular assessment procedure can be found in a previous publication from our group (Husmann et al., 2018).

Prior to pre-exercise measurements, participants performed an initial warm-up on a stationary bicycle (5 min; 100 W; 90 rpm) followed by a specific warm-up on a dynamometer comprising two isometric contractions for 5 s at 50, 70, and 90% of maximal voluntary torque (MVT; determined during the familiarization session), interspaced by 60 s of rest, respectively. Neuromuscular tests comprised supramaximal electrical stimulation of the femoral nerve during and after an isometric MVC (Figure 1B). All measurements were conducted on the quadriceps muscle of the dominant leg (i.e., kicking preference). During OLDE and neuromuscular testing, subjects were comfortably seated and secured on a CYBEX NORM dynamometer (Computer Sports Medicine, Inc., Stoughton, MA, United States). The seating position was adjusted for each participant and settings were documented for the subsequent sessions.

One-Leg Dynamic Exercise

One-leg dynamic exercise is an exercise protocol characterized by rhythmic isotonic contractions of the knee extensor muscles alternated with passive knee flexions. In contrast to whole-body exercise, OLDE is not limited by cardiorespiratory function (Rossman et al., 2014). The OLDE protocol used in the present study was recently developed and has proved reliable for measuring muscle endurance and investigating central and peripheral mechanisms of muscle fatigue (Pageaux et al., 2015).
It has been shown that OLDE induces severe levels of muscle fatigue (−40% MVT) with significant impairments in both peripheral (−40% contractile twitch torque) and central factors (−13% voluntary activation) (Pageaux et al., 2015). Furthermore, it allows bypassing the time lag between exercise termination and neuromuscular testing, which is typically associated with whole body endurance exercise. A more detailed description of the OLDE protocol used in the present study can be found elsewhere (Pageaux et al., 2015). Briefly, OLDE was performed on a CYBEX NORM dynamometer (Computer Sports Medicine R , Inc., Stoughton, MA, United States) with the range of motion set from 10 to 90 • (0 • = full knee extension). A metronome was used to ensure a cadence of 50 contractions per min (cpm), enabling an active knee extension with ∼106 • · s −1 and a passive knee flexion velocity of ∼180 • · s −1 . During the first visit, subjects were thoroughly familiarized with the OLDE protocol using torque and EMG feedback. An OLDE incremental test was performed afterwards to determine peak power output (89.5 ± 13.1 W). Testing started at an isotonic load of 4 N · m (∼7.4 W) for 1 min and was increased by 3 N · m every min (∼4.5 W) until exhaustion. Exhaustion was defined as a decline in cadence below 40 cpm for a period of ≥10 s despite strong verbal encouragement. On the second and third visit, subjects performed a 2 min warmup at 10% peak power output followed by a high-intensity OLDE time-to-exhaustion test at 85% of peak power output (76.0 ± 11.1 W). Monetary rewards were announced for the three best performances (50 €, 30 €, and 20 €) in order to motivate the participants to exercise for as long as possible during the timeto-exhaustion test. Exhaustion was again defined as a drop in cadence below 40 cpm for a period of ≥10 s despite strong verbal encouragement. An upper time limit for the time-to-exhaustion test was set at 25 min. Torque Recordings A CYBEX NORM dynamometer (Computer Sports Medicine R , Inc., Stoughton, MA, United States) was used to capture instantaneous torques. Participants were seated on an adjustable chair with the hip fixed at 80 • (0 • = full extension). Straps were fixed tightly across the subjects' waist and chest to avoid excessive movements during data recording. The dynamometer rotation axis was aligned with the knee joint rotation axis and the lever arm was attached to the lower leg just above the lateral malleolus. Isometric MVC were performed at 90 • knee flexion (0 • = full extension). Isometric MVT was defined as the highest torque value prior to the superimposed twitch evoked by electrical stimulation. For each trial, subjects were instructed to cross their arms in front of their chest and to push as hard and as fast as possible against the lever arm of the dynamometer. Strong verbal encouragement was given by the investigator during MVC testing. Visual feedback of the torque-time curve was provided on a digital oscilloscope (HM1508, HAMEG Instruments, Mainhausen, Germany). EMG Recordings A detailed description of the EMG recordings can be found in a previously published study from our laboratory (Behrens et al., 2015). Briefly, myoelectrical signals of the vastus medialis (VM), rectus femoris (RF), and vastus lateralis (VL) muscles were recorded using surface electrodes in a bipolar configuration (EMG Ambu Blue Sensor N). 
EMG signals were amplified (2500×), band-pass filtered (10-450 Hz), and digitized with a sampling frequency of 3 kHz using an analog-to-digital converter (NI PCI-6229, National Instruments, Austin, TX, United States). Maximum compound muscle action potential amplitudes (M max ) elicited by electrical stimulation were measured peak-to-peak. Muscle activity during exercise was assessed by calculating the root mean square of the EMG signal (RMS-EMG) averaged for five contractions at the beginning, as well as at 25, 50, 75, and 100% of each trial, respectively. Only EMG data during the concentric phase of each repetition were considered for analysis. RMS-EMG of VM, RF, and VL was normalized to the corresponding M max value (RMS · M −1 ). To estimate the total muscle activity of the quadriceps during knee-extension exercise, RMS · M −1 was averaged across VM, RF, and VL . Electrical Nerve Stimulation Neuromuscular function of the quadriceps muscle was assessed by using electrical stimulation of the femoral nerve. A constant-current stimulator (Digitimer DS7A, Hertfordshire, United Kingdom) was used to deliver square-wave pulses of 1 ms duration with maximal voltage of 400 V. A ball probe cathode (10 mm diameter) was pressed in the femoral triangle always by the same experienced investigator. The anode, a self-adhesive electrode (35 mm × 45 mm, Spes Medica, Genova, Italy), was affixed over the greater trochanter. After determining the optimal site for stimulation, the position was marked onto the subjects' skin to ensure repeatable measurements within each session. Individual stimulation intensity was progressively increased until M max of VM, RF, and VL as well as a plateau in knee extensor twitch torque was achieved. During the subsequent testing procedures, the stimulation intensity was increased by additional 40% to guarantee supramaximal stimulation. Potentiated quadriceps twitch torques evoked by paired electrical stimuli at 100 Hz (PS100), 10 Hz (PS10), and single stimuli (SS) were elicited 2, 4, and 6 s after isometric MVC, respectively. Peak twitch torques (i.e., highest values of the torque-time curve) were determined for PS100, PS10, and SS, respectively. The PS10 · PS100 −1 torque ratio was calculated as an index of low-frequency fatigue. A reduction of this ratio is thought to indicate impairments in excitation-contraction coupling (Verges et al., 2009). To determine the level of voluntary activation during isometric MVC, the interpolated twitch technique was applied. Electrical paired stimuli were delivered to the femoral nerve at 90 • knee flexion 2 s after torque onset (during the plateau phase) and 2 s after MVC. The level of voluntary activation was calculated using a corrected formula: (Strojnik and Komi, 1998). MVT is the maximal torque level and T b the torque value immediately before the superimposed twitch. The corrected formula is used to avoid the potential problem that the superimposed stimuli are not always applied during the maximum torque level. As shown recently by our group, voluntary activation of the knee extensors can be reliably assessed during isometric contractions using the corrected formula (Behrens et al., 2017). Muscle Oxygenation Muscle oxygenation of VL was continuously monitored using a portable near-infrared spectroscopy (NIRS) device (Moxy, Fortiori Design LLC, Hutchinson, MN, United States). The Moxy monitor has been recently shown to allow reliable measurements of SmO 2 (Crum et al., 2017). 
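Stepping back briefly to the interpolated-twitch calculation described above: the corrected formula itself did not survive the text extraction. Following Strojnik and Komi (1998), it is usually written as

\[
\mathrm{VA}\,(\%) = \left[1 - \frac{\text{superimposed twitch} \times (T_b / \mathrm{MVT})}{\text{potentiated (control) twitch}}\right] \times 100,
\]

where the superimposed twitch is the torque increment evoked during the MVC and the control twitch is the potentiated twitch evoked after the MVC. This is a reconstruction consistent with the definitions of MVT and T_b given in the text, not necessarily the exact notation used by the authors.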
SmO 2 reflects the balance between O 2 delivery and O 2 demand in the analyzed muscle (Ferrari et al., 2011). Prior to optode placement on the VL, subjects' skin was shaved and cleaned. The NIRS probe was attached at mid-thigh level, closely to the VL EMG electrodes. The optode was secured with tape and covered with a protective shell to avoid artifacts caused by motion and light. Reliable optode placement between sessions was assured by recording the distance to the patella, measured from the subject's patella to the greater trochanter. Furthermore, skinfold thickness above the VL was measured using a skinfold caliper (5 ± 1 mm). All signals were recorded with a sampling frequency of 2 Hz. A 4th order low-pass zero-phase Butterworth filter (cutoff frequency 0.2 Hz) was applied. NIRS-derived indices of muscle oxygenation were averaged across 5 s at 25, 50, 75, and 100% of each trial, respectively. Shortly after the OLDE warm-up, resting baseline values were averaged for 30 s prior to the start of the OLDE protocol. Baseline values were captured at rest in a seated position. SmO 2 and total hemoglobin (tHb) were reported as percentage changes from baseline ( SmO 2 and tHb). Ratings of Perceived Effort and Leg Muscle Pain During the first visit at the laboratory, subjects were familiarized with ratings of perceived effort and ratings of leg muscle pain. Subjects' perception of effort was recorded by using the 15-point Borg scale (Borg, 1982). Prior to each testing session, participants received written instructions based on guidelines recently proposed by Pageaux (2016). Briefly, instructions comprised the definition of effort ("the conscious sensation of how hard, heavy, and strenuous a physical task is"), exercisespecific descriptions ("How hard is it for you to drive your leg?"), exercise-anchoring (e.g., "maximal exertion corresponds to the effort you experienced while you were performing a MVC"), and the distinction of effort, pain, and other exerciserelated sensations (Pageaux, 2016). Leg muscle pain was defined as the intensity of pain perceived by the subject exclusively in the exercising quadriceps. A modified category-ratio 10 (CR-10) scale was used to quantify leg muscle pain during exercise (Cook et al., 1997). At the beginning of each minute, the participants were asked to rate their perceived effort and leg muscle pain. The average of all ratings across the entire exercise duration is reported as mean levels of effort and leg muscle pain. Endexercise levels of effort and leg muscle pain refer to the last rating of effort and leg muscle pain before exercise termination. Task Motivation Participants' motivation to successfully complete the time-toexhaustion test was assessed by using the success motivation and task interest motivation subscales designed and validated by Matthews et al. (2001). On a 5-point Likert scale (0 = not at all, 1 = a little bit, 2 = somewhat, 3 = very much, 4 = extremely) subjects rated 8 items (e.g., "I wanted to succeed on the task" and "I was eager to do well"). Therefore, total scores range between 0 and 32. The questionnaire was presented to the participant prior to the start of each task. In order to control for potential differences in task motivation, total scores were compared across all experimental conditions. If there is a significant difference in task motivation between conditions, it is considered as a covariate in the statistical analysis. 
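Returning to the NIRS processing described above, one common reading of the "4th order low-pass zero-phase Butterworth filter" (0.2 Hz cutoff at a 2 Hz sampling rate) can be implemented along the following lines; the variable names and the exact definition of the baseline-normalized change are illustrative assumptions rather than the authors' code:

import numpy as np
from scipy.signal import butter, filtfilt

FS = 2.0        # NIRS sampling frequency (Hz)
CUTOFF = 0.2    # low-pass cutoff frequency (Hz)

def smooth_smo2(smo2_raw):
    """4th-order Butterworth design applied forward-backward (filtfilt) for zero phase shift."""
    b, a = butter(4, CUTOFF, btype="low", fs=FS)
    return filtfilt(b, a, np.asarray(smo2_raw, dtype=float))

def percent_change_from_baseline(signal, baseline):
    """Express SmO2 or tHb as a percentage change from the 30-s resting baseline value."""
    return (np.asarray(signal, float) - baseline) / baseline * 100.0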
Trait and State Properties of Fatigue

According to Enoka and Duchateau (2016), fatigue is defined as a disabling symptom which can be assessed by self-report and quantified as a state variable or as a trait characteristic. Both properties of fatigue are considered as modulating factors of human performance. The Modified Fatigue Impact Scale (MFIS), a self-reported measure of the impact of fatigue on cognitive, physical, and psychosocial aspects of daily activity, was utilized to assess the level of trait fatigue over the course of the last 7 days before each laboratory visit. State fatigue was examined by using the fatigue scale of the Profile of Mood States (POMS-F), which has been shown to provide a reliable and valid instrument to assess the level of state fatigue across a wide range of cohorts (O'Connor, 2004). Before each exercise trial, subjects were asked to complete both questionnaires. If there are significant differences in self-reported measures of fatigue between conditions, they are considered as covariates in the statistical analysis.

Quantification of Muscle Fatigue

In the present study, muscle fatigue was quantified via the percentage change in MVT values from pre- to post-exercise (ΔMVT). Percentage changes in voluntary activation (ΔVA) and PS100 (ΔPS100) from pre- to post-exercise were used to quantify central and peripheral factors of muscle fatigue, respectively.

Statistical Analysis

All data were screened for normal distribution using the Shapiro-Wilk test. Differences in percentage changes from pre- to post-exercise of all neuromuscular parameters were tested using Student's paired t-tests. Cohen's d effect size was calculated for each paired comparison. Effect sizes of 0.20, 0.50, and 0.80 were considered small, medium, and large, respectively (Cohen, 1988). Two-way ANOVAs with repeated measures on time and condition were conducted for all parameters derived from EMG and NIRS recordings during exercise. Post hoc tests were performed with Bonferroni adjustments. The effect size was determined by calculating partial eta squared (ηp²). Data were analyzed using the SPSS statistical package 22.0 (SPSS Inc., Chicago, IL, United States) and statistical significance was accepted at P ≤ 0.05. Sample size was calculated with the statistical software package G*Power (version 3.1.4).

RESULTS

Exercise Tolerance

Time-to-exhaustion was significantly improved with NITRATE (12:41 ± 07:18 min) compared to PLACEBO (09:03 ± 04:18 min, P = 0.010, d = 0.61). Individual data are presented in Figure 2. Eight participants improved their exercise performance following dietary nitrate supplementation by at least 8% and completed another trial corresponding to the PLACEBO exercise duration. Exercise trials that did not differ by more than 8% from each other were considered as time-matched. Together, time-matched conditions of 11 subjects were taken into account for further analysis and are referred to as NITRATEtm and PLACEBO. Please note that two participants reached the upper time limit of 25 min during the NITRATE condition.

FIGURE 2 | Mean values and individual data for time-to-exhaustion (s) between experimental conditions. Please note that eight out of twelve participants improved their performance over a range from ∼9 to ∼51%. Significantly different between conditions: *P ≤ 0.05.

Maximal Voluntary Torque

A significantly lower ΔMVT was found for NITRATEtm compared to PLACEBO (P = 0.002, d = 0.66; Figure 3). Absolute values for MVT are presented in Table 1.
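Returning to the fatigue quantification and effect-size computations described in the Methods, a minimal sketch of these calculations follows (illustrative names only; Cohen's d is computed here from the paired differences, one common convention, since the exact formula is not stated in the text):

import numpy as np
from scipy import stats

def percent_change(pre, post):
    """Delta-values such as dMVT, dVA, dPS100: percentage change from pre- to post-exercise."""
    pre, post = np.asarray(pre, float), np.asarray(post, float)
    return (post - pre) / pre * 100.0

def paired_comparison(x, y):
    """Student's paired t-test plus a Cohen's d based on the paired differences."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    t, p = stats.ttest_rel(x, y)
    diff = x - y
    d = diff.mean() / diff.std(ddof=1)
    return t, p, d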
Electrically Evoked Twitch Torque

A significantly lower ΔPS100 was found for NITRATEtm compared to PLACEBO (P = 0.001, d = 0.91; Figure 3). Furthermore, ΔSS was shown to be lower for NITRATEtm compared to PLACEBO (P = 0.007, d = 0.64). No significant differences for the PS10 · PS100−1 ratio were found between NITRATEtm and PLACEBO (P = 0.183, d = 0.31; Figure 3). Absolute values for PS100, SS, and the PS10 · PS100−1 ratio are presented in Table 1.

Voluntary Activation

No significant differences between NITRATEtm and PLACEBO were found for ΔVA (P = 0.308, d = 0.14; Figure 3). Absolute values for voluntary activation are presented in Table 1.

End-exercise levels of leg muscle pain were not significantly different between time-matched conditions (P = 0.066, d = 0.34). Absolute values of perceived effort and leg muscle pain are presented in Table 3. Values are expressed as mean ± SD. The average of all ratings across the entire exercise duration is reported as mean levels of effort and leg muscle pain. End-exercise levels of effort and leg muscle pain refer to the last rating of effort and leg muscle pain before exercise termination. NITRATEtm, time-matched dietary nitrate condition. Significantly different between conditions: *P < 0.05.

DISCUSSION

The present study was designed to provide further insight into the mechanistic basis for the improved exercise tolerance after dietary nitrate supplementation. We showed that dietary nitrate supplementation significantly improved exercise tolerance of the knee extensors during a high-intensity endurance task in two thirds of all subjects. Furthermore, by comparing time-matched exercise conditions with and without dietary nitrate supplementation, we found that dietary nitrate attenuated the development of muscle fatigue by reducing the exercise-induced impairments in contractile quadriceps function. Another important finding was that perception of effort and leg muscle pain was significantly lower following dietary nitrate supplementation.

Exercise Tolerance and Muscle Fatigue

In the present study, we found that exercise tolerance during high-intensity OLDE was improved after 5 days of dietary nitrate supplementation, as indicated by a significantly increased time-to-exhaustion. Improved tolerance to low- and high-intensity whole-body endurance exercise was also found in healthy adults after acute and short-term (4-6 days) dietary nitrate supplementation (Bailey et al., 2010; Lansley et al., 2011; Thompson et al., 2014). In the present study, however, only 8 out of 12 participants improved their exercise tolerance following 5 days of dietary nitrate supplementation. Interindividual differences in the responsiveness to dietary nitrate supplementation were also found by others (e.g., Wilkerson et al., 2012; Coggan et al., 2018). It has been suggested that the participant's training status (e.g., aerobic capacity), capillary density, endothelial NO synthase activity, and fiber type distribution can affect the impact of dietary nitrate supplementation on exercise performance (Jones, 2014b). Although we have not analyzed it, the interindividual variability in exercise performance could be further explained by differences in plasma [nitrate] and [nitrite] (Wilkerson et al., 2012; Coggan et al., 2018). As recently shown by Vanhatalo et al. (2018), this might be due to differences in the nitrate-reducing capacity of oral microbiota.
Those who have not improved their exercise tolerance following 5 days of supplementation with ∼6.5 mmol nitrate per day, might have benefited from a higher daily dosage or a longer supplementation period. Further research is warranted to elucidate if there are 'non-responders' to dietary nitrate supplementation or whether individualized dosing strategies are necessary to achieve an ergogenic effect. By comparing time-matched conditions, we showed that dietary nitrate supplementation attenuated the development of quadriceps muscle fatigue as indicated by a significantly lower MVT during NITRATE tm (−42% ± 12%) compared to PLACEBO (−50% ± 11%). We found that VA was not different between PLACEBO and NITRATE tm , suggesting that dietary nitrate does not significantly affect central factors of muscle fatigue during high-intensity OLDE. However, we have shown, for the first time, that dietary nitrate attenuates the development of quadriceps muscle fatigue mainly by reducing the impairments in contractile quadriceps function during highintensity OLDE. This was indicated by a significantly lower PS100 during NITRATE tm (−36% ± 12%) compared to PLACEBO (−46% ± 8%). Increased dietary nitrate intake has been shown to improve vascular (Ferguson et al., 2013), metabolic (Bailey et al., 2010), and skeletal muscle function in response to exercise (Hernández et al., 2012;Haider and Folland, 2014). Any of the physiological alterations associated with dietary nitrate are capable of modulating skeletal muscle function during fatiguing exercise (Affourtit et al., 2015). Bailey et al. (2010) have found that 4-6 days of dietary nitrate supplementation reduces the degradation of phosphocreatine (PCr) as well as the concomitant accumulation of adenosine diphosphate (ADP) and inorganic phosphate (P i ). The latter is thought to be a main contributor to exercise-induced impairments in Ca 2+ handling and myofibrillar Ca 2+ sensitivity (Allen et al., 2008). Consequently, a lower intracellular [P i ] associated with dietary nitrate supplementation seem to a plausible explanation for the reduced impairments in contractile function. On the one hand, data from Bailey et al. (2010) suggest that a reduction in the ATP cost of muscle force production might be responsible for the lower PCr degradation and accumulation of P i following dietary nitrate supplementation. This assumption is supported by experiments in both humans (Haider and Folland, 2014;Whitfield et al., 2017) and mice (Hernández et al., 2012), showing that dietary nitrate enhances the contractile force production in response to low-frequency electrical stimulation. Hernández et al. (2012) have shown that the increased contractile force production results from an improved intracellular Ca 2+ handling. On the other hand, there is data from experiments in rodents (Ferguson et al., 2013) and humans (Richards et al., 2018) suggesting that dietary nitrate increases muscle blood flow via local vasodilation, which in turn may improve oxygen delivery to the contracting muscles. It is generally well accepted that changes in O 2 delivery to muscles alter intracellular metabolism, metabolite accumulation and ultimately contractile muscle function during exercise (Amann and Calbet, 2007). 
Since the rate of PCr hydrolysis and concomitant intracellular accumulation of P i have been shown to be slower under conditions of increased O 2 availability (Hogan et al., 1999), a higher O 2 availability during exercise might have contributed to the reduced impairments in contractile quadriceps function following dietary nitrate supplementation. Improvements in NIRS-derived indices of muscle oxygenation following dietary nitrate supplementation were found in healthy adults (Bailey et al., 2009) and patients with peripheral arterial disease (Kenjale et al., 2011) during whole body endurance exercise. Bailey et al. (2009) have shown that deoxyhemoglobin peak amplitude, an estimate of muscle fractional O 2 extraction, was significantly lower after beetroot juice consumption for 4-6 days when measured during moderate-intensity exercise. In the present study, however, there was no significant condition effect for SmO 2 of the VL during exercise. Although conclusions should be drawn with caution, a subsample analysis of those who have improved their timeto-exhaustion with dietary nitrate has revealed a significant condition effect. Consequently, it cannot be fully excluded that an improved SmO 2 of the quadriceps muscle during exercise has contributed to the attenuated impairment in contractile function following dietary nitrate supplementation. Further studies are therefore needed to understand the exact causes for the reduced exercise-induced impairments in contractile function following dietary nitrate supplementation. Furthermore, we found a significant condition effect for Q RMS · M −1 during exercise. Although only significant for the first half of the exercise protocol (see Figure 4), the exerciseinduced increase in quadriceps muscle activity was lower during NITRATE tm compared to PLACEBO. A rise in muscle activity in the course of a submaximal motor task at a constant power output is commonly interpreted as an increased recruitment of additional motor units to compensate for the progressive loss in contractile muscle function (Devries et al., 1982;Moritani et al., 1993). Since we found that the exercise-induced loss in contractile torque production is lower following dietary nitrate intake, less muscle activation might be required to ensure the same power output. Perceptual Responses During Exercise In the present study, we found that NITRATE tm resulted in lower mean and end-exercise levels of effort perception compared to PLACEBO, suggesting that dietary nitrate reduces the perception of effort during a high-intensity endurance task of the knee extensors. This finding is of particular importance since effort perception is thought to be a key-determinant of endurance performance (Marcora and Staiano, 2010). Based on the psychobiological model of exercise tolerance (Marcora, 2008), it has been stated that participants disengage from a task as a result of an effort-based decision. There is evidence suggesting that tolerance to high-intensity aerobic exercise in highly motivated athletes is predominantly limited by effort perception but not by the inability of muscles to generate the required power for the task (Marcora and Staiano, 2010). Our participants can be characterized as highly motivated since we (i) documented high levels of task motivation during each experimental condition, (ii) announced a monetary reward for the best three performers, and (iii) provided strong verbal encouragement during the task. 
Therefore, a reduced effort perception could be a significant contributor to the nitrateinduced improvements in exercise tolerance during highintensity OLDE. Although the exact physiological mechanisms underlying the perception of effort are still debated (Marcora, 2009), it is well accepted that neural processing of sensory signals in the brain is involved in effort perception (Noble and Robertson, 1996). Based on the corollary discharge model, it has been stated that effort perception reflects a centrally mediated feedforward mechanism in which an efference copy of the central motor command is sent from motor to sensory brain areas to enable a conscious awareness of processes associated with motor output (Poulet and Hedwig, 2007). During fatiguing contractions, the progressive rise in effort perception is thought to reflect the increase in central motor command which is necessary to compensate for the exercise-induced impairments in contractile muscle function in order to ensure adequate power output to maintain the task (de Morree et al., 2012). Consequently, the lower perception of effort following dietary nitrate supplementation could result from a reduced central motor command as a result of the preserved contractile function during exercise. By contrast, it has also been suggested that afferent feedback from working muscles contribute to the perception of effort (Amann et al., 2010). Although this assumption is highly debated (Marcora, 2009), it cannot be ruled out that an attenuated afferent feedback from the working muscles due to less metabolic disturbances in the periphery has contributed to the lower effort perception after dietary nitrate supplementation. Although it is well accepted that perception of effort plays a crucial role in endurance performance (Marcora and Staiano, 2010), there is hardly any evidence for the impact of dietary nitrate on effort perception during endurance exercise. To the authors' knowledge, there is only one study that recorded ratings of perceived exertion during work-matched submaximal exercise (Cermak et al., 2012). Cermak et al. (2012) have found that ratings of perceived exertion during submaximal constant-load cycling were not significantly affected by 6 days of dietary nitrate supplementation. However, since the authors have not reported the underlying definition of exertion, it is possible that participants' rating included other exercise-related sensations than effort (e.g., discomfort and muscle pain) which are based on different neurophysiological mechanisms (Marcora, 2009). Therefore, it remains to be elucidated, if dietary nitrate supplementation also affects effort perception during submaximal, whole-body endurance tasks. Future studies investigating the mechanistic bases for the improved endurance performance following dietary nitrate supplementation should pay special attention on effort perception and its appropriate assessment [as recently proposed by Pageaux (2016)]. Furthermore, we found that 5 days of dietary nitrate supplementation resulted in lower mean levels of leg muscle pain during high-intensity OLDE, indicating that the participants had to tolerate less muscle pain in the course of exercise compared to PLACEBO. To our knowledge, there is no study to date assessing the impact of dietary nitrate on muscle pain during submaximal knee extension exercise. 
A previous study investigating patients with peripheral arterial disease has shown that dietary nitrate supplementation delays the onset of claudication pain during walking, which in turn resulted in an improved time-to-exhaustion (Kenjale et al., 2011). In healthy participants, exercise-induced muscle pain has also been proposed as an important factor in endurance performance (Mauger, 2013). Experimental evidence supporting this assumption is, however, scarce and often ambiguous (Stevens et al., 2018). Consequently, it cannot be ruled out that reductions in muscle pain perception contributed to the improved exercise tolerance. While effort perception is likely centrally generated and seems to be independent of afferent feedback (Marcora, 2009), exercise-induced pain is thought to be related to feedback from nociceptive group III/IV muscle afferents about alterations associated with muscular contraction [e.g., increased intramuscular pressure, heat, high levels of metabolites, or deformation of tissue (Mense, 1993)]. As stated earlier, dietary nitrate supplementation has been shown to be associated with changes in local muscle perfusion, intramuscular metabolism, and reduced metabolite accumulation (Bailey et al., 2010). Given the fact that group III/IV afferents are sensitive to changes in metabolite concentration, it can be speculated that a reduced metabolite accumulation might attenuate the afferent feedback and thus lowers the perception of exercise-induced muscle pain. However, it should be noted that pain is not always directly related to the magnitude of the nociceptive signal, since pain is considered as subjective experience with a strong emotional component (Mauger, 2013). Further studies are necessary to better understand the impact of dietary nitrate on perception of exercise-induced muscle pain. Limitations Although single-joint endurance exercise provides an appropriate model to investigate underlying determinants of endurance performance, whole-body endurance exercise (e.g., cycling, running, and walking) has the advantage to better mimic real world activities and sport events . Given the fact that whole-body exercise requires a greater amount of muscle work with concomitant higher cardiorespiratory demands, there is a greater potential for systemic responses (e.g., hyperthermia, respiratory muscle fatigue, and arterial hypoxemia), which might affect the fatigability of the neuromuscular system (Sidhu et al., 2013). Therefore, the impact of dietary nitrate ingestion on muscle fatigue, perceptual responses, and its implications for exercise tolerance is currently limited to single-joint exercise. In line with other studies using high-intensity OLDE (Pageaux et al., 2015, we found a large interindividual variability in exercise duration of the time-to-exhaustion test (Figure 2). Considering that both muscle fatigue and the effectiveness of dietary nitrate with regard to endurance performance are thought to be task-dependent (Enoka et al., 2011;Jones, 2014a), the observed variability in duration and intensity might have affected the impact of dietary nitrate on muscle fatigue development. CONCLUSION The present findings extend the mechanistic basis for the improved exercise tolerance following dietary nitrate supplementation by investigating key-determinants of endurance performance. We showed, for the first time, that 5 days of dietary nitrate supplementation were associated with reduced levels of muscle fatigue compared to the time-matched placebo condition. 
Data indicate that the attenuated development of muscle fatigue following dietary nitrate ingestion was mainly due to lower exercise-induced impairments in contractile function (i.e., less peripheral fatigue). Therefore, dietary nitrate supplementation might be a promising approach to reduce muscle fatigue in situations (e.g., altitude) or populations (e.g., patients with peripheral arterial disease, type 2 diabetes, chronic heart failure) in which muscle fatigue is exacerbated and exercise tolerance is compromised. Another important finding of the present study was that dietary nitrate supplementation was accompanied by a lower effort perception during exercise. Based on the well-accepted corollary discharge model (Christensen et al., 2007;Poulet and Hedwig, 2007), we assume that the lower effort perception following dietary nitrate resulted from a lower rise in central motor command as an adjustment to the preserved contractile function compared to PLACEBO. We conclude that an attenuated development of muscle fatigue as well as lower levels of perceived effort and muscle pain have contributed to the improved exercise tolerance following dietary nitrate supplementation. ETHICS STATEMENT The study was approved by the university ethics committee and was conducted according to the Declaration of Helsinki. All subjects were informed about possible risks and discomfort associated with the investigations prior to giving their written consent to participate.
2019-04-24T13:16:35.314Z
2019-04-24T00:00:00.000
{ "year": 2019, "sha1": "d0c522abd55f244e66eb00c86c540708593cbb27", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fphys.2019.00404/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "d0c522abd55f244e66eb00c86c540708593cbb27", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
226646114
pes2o/s2orc
v3-fos-license
Satisfaction of Tunisian Healthcare Staff Towards Preparedness of Covid-19
Introduction: Since December 2019, China, where the first cases appeared, has declared that it is fighting a new generation of Betacoronavirus from the corona family, and was taken aback by the speed of transmission of this virus (Zhang, 2020; Li, 2020; Zhu et al, 2020; Misra, 2020). The start of this pandemic was sudden for everyone, yet all health systems must be well prepared to fight this health emergency (Zhang, 2020). Tunisia recorded its first case on 02 March 2020, three months after the onset of the infection in China. According to INEAS (2020), it is important to prevent the transmission of this virus by protecting hospitals and healthcare workers. To find out what happened on the practice side, this survey was conducted. Aim: To describe the satisfaction of health professionals with the strategy adopted by the Ministry of Health and the new measures taken by hospitals, as well as the availability of protective equipment and the dissemination of information. Methodology: This survey is a quantitative, descriptive, cross-sectional study of satisfaction among healthcare staff. It was carried out in multiple hospitals in Tunisia over 21 days, from March 2020 until April 2020. In total, 454 healthcare professionals agreed to participate in our study (n = 454). A self-administered questionnaire was distributed to the participants. Results: Most of the respondents are young, which explains why 83.3% of them have professional experience of <10 years. More than 50% were not involved in decision-making, either in the hospital or in the department in which they work. More than 70% did not take part in training on hygiene measures, on the triage of patients, or on the white plan and the definition of each person's role. More than 60% of the participants were not satisfied with the availability of protective material, except for the disposable gutters and the liquid soap. Conclusion: Despite the fact that Tunisia experienced the SARS-CoV-2 pandemic late in comparison to other countries, the health professionals were not satisfied with the procedures and precautions put in place.
Introduction
Humanity has known several pathologies that have been declared universal emergencies by the World Health Organization (WHO), such as H1N1 in 2009, which caused 18,000 deaths, Ebola in 2013, Zika virus in 2013, SARS (Severe Acute Respiratory Syndrome) in 2002-2003, and the SARS-CoV-2 that we are currently experiencing (Zhu et al, 2020; Yan et al, 2020). Since December 2019, China, where the first cases appeared, has declared that it is fighting a new generation of Betacoronavirus from the corona family, and was taken aback by the speed of transmission of this virus (Zhang, 2020; Li, 2020; Zhu et al, 2020; Misra, 2020). Very quickly, this virus was found in the majority of countries of the world, with tragic scenarios in some countries that recorded large numbers of cases and deaths, such as Italy and the USA (Wang, 2020). This new virus, called SARS-CoV-2, is transmitted by the respiratory tract and by contact with a person or an object infected with the virus (Zhang, 2020). This seventh generation of coronavirus causes flu-like signs and attacks epithelial cells in the lungs (Zhu, 2020).
This pandemic has created worldwide disruption by affecting several areas of life: economic, since citizens were forced to stay at home; and psychological, whether for the general population or especially for health professionals, who were in direct contact with the virus, and unfortunately the world has recorded deaths of healthcare professionals who were infected by patients (Zhang, 2020; Zhu et al, 2020). The start of this pandemic was sudden for everyone, yet all health systems must be well prepared to fight this health emergency (Zhang, 2020). Tunisia recorded its first case on 02 March 2020, three months after the onset of the infection in China. Theoretically, the Tunisian health system must be well equipped in its hospitals, and normally there must be a clear strategy to win this war. In this context, INEAS was the first institution to publish a very important document on protection against COVID-19. This publication defined the new virus and its symptoms. It also clarified the modalities of giving care to hemodialysis patients, cardiac patients, pediatric patients and the health professionals who caught SARS-CoV-2. It devoted a section to the protective equipment to be used in the different areas of the hospital, such as the operating room. Moreover, it listed the different steps that must be respected by health workers when they treat or transport a SARS-CoV-2 patient or corpse. Finally, it explained how health workers can use the COVID-19 circuit in the emergency room to separate patient flows and protect both health workers and the other patients (INEAS). To find out what happened on the practice side, this survey was conducted. In fact, this work aims to describe the satisfaction of health professionals with the strategy adopted by the Ministry of Health and the new measures taken by hospitals, as well as the availability of protective equipment and the dissemination of information.
Methodology
This survey is a quantitative, descriptive, cross-sectional study of satisfaction among healthcare staff. It was carried out in multiple hospitals in Tunisia over 21 days, from March 20, 2020 until April 10, 2020. This work targets health personnel who work in the hospitals of greater Tunis. The inclusion criterion was being a health professional working in one of the hospitals of greater Tunis during the SARS-CoV-2 pandemic period. Doctors and administrative staff were excluded from this study. In total, 454 healthcare professionals agreed to participate in our study (n = 454).
Questionnaire
The questionnaire consists of two parts. The first part collects socio-demographic and professional data. The second part, comprising 35 items, uses a Likert scale; each item is coded from 1 to 5: 1 = Very satisfied, 2 = Satisfied, 3 = Neutral, 4 = Dissatisfied, 5 = Very dissatisfied.
Data analyses
The questionnaire data were entered into the Statistical Package for the Social Sciences (SPSS) version 22. In the descriptive part, we determined the mean and the extremes for the quantitative variables. For qualitative variables, we determined the relative percentages of each category (a worked sketch of this type of analysis is given after the sample description below).
Ethics considerations
Free and informed consent of the participants was obtained after a description of the study's main objective and an explanation of the participants' responsibilities. The questionnaires are anonymous and each nurse completed only one questionnaire.
Results
(Table fragment on professional experience: –20 years, 52 participants, 11.5%; >20 years, 24 participants, 5.3%.) The sample shows almost the same distribution of participants by sex.
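The descriptive treatment outlined in the Data analyses subsection above (means and extremes for quantitative variables, category percentages for qualitative variables, and the per-item share of dissatisfied respondents on the 1-5 Likert scale) was performed in SPSS; the following is a minimal sketch of the same computations in Python with pandas, assuming a hypothetical CSV export and illustrative column names (survey_responses.csv, years_of_experience, sex, item_*), which are not from the original study.

import pandas as pd

# Hypothetical export of the questionnaire; column names are assumptions for illustration.
df = pd.read_csv("survey_responses.csv")

# Quantitative variables: mean and extremes
print(df["years_of_experience"].agg(["mean", "min", "max"]))

# Qualitative variables: relative percentage of each category
print(df["sex"].value_counts(normalize=True).mul(100).round(1))

# Likert items coded 1-5 (4 = dissatisfied, 5 = very dissatisfied):
# share of dissatisfied respondents per item, sorted from worst to best
likert_items = [c for c in df.columns if c.startswith("item_")]
dissatisfied_pct = df[likert_items].apply(lambda s: (s >= 4).mean() * 100)
print(dissatisfied_pct.sort_values(ascending=False))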
Most of the respondents are young, which explains why 83.3% of them have professional experience of <10 years. For the first dimension, which concerns communication and information, the participants showed great disappointment. In fact, more than 50% were not involved in decision-making, either in the hospital or in the department in which they work. Furthermore, they did not participate in meetings providing training and information about this new virus. For the healthcare professionals who participated in this study, coordination with the medical profession remains poor. For the second dimension, concerning continuing education, health professionals were extremely dissatisfied. In fact, more than 70% did not take part in training on hygiene measures, on the triage of patients, or on the white plan and the definition of each person's role. For the third dimension, which concerns the availability of protective equipment and materials, the percentage of dissatisfaction was even higher. In fact, more than 60% of the participants were not satisfied with the availability of protective material, except for the disposable gutters and the liquid soap. The percentage exceeded 80% for disposable coveralls. Regarding the disinfection of the units, the surveyed staff were not satisfied with the surface disinfection products, and sterilization of the units was not achieved. If the dimensions are ranked according to their percentages of dissatisfaction, the participants are, on the whole, not satisfied with the staffing and the measures taken by the hospitals: the highest percentage was assigned to the dimension of personnel training to fight this pandemic (84.6%), followed by the dimension of the availability of personal protective equipment (82.4%).
Discussion
The goal of this descriptive quantitative survey was to describe the satisfaction of health professionals with the strategy adopted by the Ministry of Health and the new measures taken by hospitals, as well as the availability of protective equipment and the dissemination of information. The results of this work were alarming. In fact, more than 70% of the participants were either dissatisfied or very dissatisfied with the measures taken by their hospitals and with the way this critical health situation was managed. In addition, the health professionals who participated in this study expressed their disappointment with the flow of information from hierarchical superiors to clinicians. They were neither well trained about this new virus nor involved in decision-making, even for measures that concern them directly. In this section, we discuss the survey dimension by dimension. 1) Communication, information and continuing education: These were the first two dimensions rated as very unsatisfactory by health professionals. According to the participants, hospitals did not have a clear strategy to train their healthcare workers about SARS-CoV-2 as a new virus, and few training sessions were organized. In fact, 30.4% were very dissatisfied with their participation in meetings about SARS-CoV-2. Moreover, more than 50% were also very dissatisfied with the absence of support from their leaders, and 78.4% were dissatisfied because their hospitals did not provide continuing education sessions, either on the modalities of protection against this new virus or on the white plan. These findings are similar to the results of the survey by Sagar in 2015, for whom nurses are "leading the fight against Ebola virus disease".
That is why they recommended that nurses must be involved when decisions are taken by decision-makers. According to this article, the importance of the nurses' role in caring for patients with Ebola virus without stigmatization, their availability in any health crisis despite the dangers, and the time of their lives given up to this demanding profession oblige politicians to involve nurses in decision-making during this type of health crisis. This is not easy to achieve in developing countries, which is why the authors invited nurses to "seek a place at the table", so that decision-makers stop neglecting nurses and considering them merely as task executors. 2) Availability of Personal Protective Equipment (PPE): In this study, participants expressed their dissatisfaction especially regarding FFP2 masks, goggles (61.2%) and surgical boots (51.1%). Misra (2020) confirmed that it is very important to make personal protective equipment available, especially in hospitals, because doctors and healthcare professionals are exposed to a high risk of contamination. The author added that it is not easy to achieve this goal because the whole world needs this type of equipment, and that "In China, despite high priority and dedicated funding, many healthcare workers bought protective gear with their own money or borrowed cash or donations from friends in china or other countries". Furthermore, Delgado et al (2020), in their survey, found that healthcare professionals had limited access to PPE during the SARS-CoV-2 pandemic. This similarity can be explained by the universal and ubiquitous nature of this pandemic, which has paralyzed the whole world. The literature indicates that, to win this war against SARS-CoV-2, it is important to protect healthcare workers because they are in the front line against the virus; it is crucial to reduce professional-patient transmission and professional-professional transmission (Crossmark; 2020). 3) Units' sterilization: More than 40% of participants were dissatisfied with the sterilization of their units. They were not satisfied with the sterilization of machines and rooms, nor with the availability of hygiene products. Hero (2020) declared that among the urgent precautions that must be taken is the protection of health professionals, through personal protective equipment (PPE) and the sterilization of rooms, units and all hospital departments; the author added that it is essential to do this if we want to prevent a tragic scenario. Tunisia is not the only country to encounter this problem. Kollie, in 2016, studied nurses and midwives confronted with Ebola in Liberia. Through a qualitative study, the author explored how participants lived this experience. She found that the Ebola experience was a scarring one that changed the rhythm of their lives, because they were "living in fear and terror". Even their relationships with patients and with themselves were modified, and these changes also influenced their family lives. That is why the author declared that supervisors and policy-makers must consider these crucial elements when they announce their decisions.
Conclusion
Despite the fact that Tunisia experienced the SARS-CoV-2 pandemic late in comparison to other countries, the health professionals were not satisfied with the procedures and precautions put in place. The response of the hospitals in this study was neither effective nor suitable for clinicians. The results of this study invite the authorities to take measures to manage this type of situation in the future.
It is important to react effectively and quickly to minimize the damage. Study limitations: This survey covered only the hospitals of the capital, so we have no information about the other hospitals. A national survey could give a clearer picture and show whether there are differences between the different health structures. The present critical situation (SARS-CoV-2) also limited the number of participants in this study.
2020-07-02T10:34:37.369Z
2020-06-01T00:00:00.000
{ "year": 2020, "sha1": "0d519f00bda318643c9afa97ae16a31c19357fda", "oa_license": "CCBY", "oa_url": "https://iiste.org/Journals/index.php/JHMN/article/download/53103/54872", "oa_status": "HYBRID", "pdf_src": "Adhoc", "pdf_hash": "f2002fd44a0e0b3442e56cf43173c983cc3d4e06", "s2fieldsofstudy": [ "Medicine", "Political Science" ], "extfieldsofstudy": [ "Psychology" ] }
206994231
pes2o/s2orc
v3-fos-license
Model-based scale-up methodology for aerobic fed-batch bioprocesses: application to polyhydroxybutyrate (PHB) production
This work presents a general model-based methodology to scale up fed-batch bioprocesses. The idea behind this approach is to establish a dynamics hierarchy, based on a model of the process, that allows the designer to determine the proper scale factors as well as the point of the fed-batch at which the process should be scaled up. Here, concepts and tools of linear control theory, such as the singular value decomposition of the Hankel matrix, are exploited in the context of process design. The proposed scale-up methodology is first described in a general bioprocess framework, highlighting its main features, key variables and parameters. Then, it is applied to a polyhydroxybutyrate (PHB) fed-batch bioreactor and compared with three empirical criteria that are traditionally employed to determine the scale factors of these processes, showing the usefulness and distinctive features of this proposal. Moreover, this methodology provides theoretical support to a frequently used empirical rule: scale up aerobic bioreactors at constant volumetric oxygen transfer coefficient. Finally, similar process dynamic behavior and PHB production to those set at the laboratory scale are predicted at the new operating scale, while it is also determined that it is rarely possible to reproduce a similar dynamic behavior of the bioreactor using empirical scale-up criteria.
Latin symbols (nomenclature excerpt): A_x, cross-sectional flow area (m2); a1, a2, a3, empirical coefficients for the P_g calculation (dimensionless); c1, c2, c3, empirical coefficients; X_m, maximum residual cell concentration (g/L); X_t, total biomass concentration (P + X).
Introduction
The scale-up of fermentation reactors as a key biotechnological problem was first introduced in the 1940s, when industrial penicillin production began [41,43]. Developments within this field continue to this day, looking for more efficient routes to reproduce at an industrial scale the same performance targets set at the laboratory. However, efforts have failed to establish a straightforward and general protocol to tackle this issue [4,17,26]. Thus, a particular strategy is commonly implemented for each individual product, process and facility [41], requiring detailed understanding of the process, experience, intuition and good luck to successfully scale it up [17,39].
Although a plethora of works have emerged concerning the scale-up of biotechnological processes [10,18,20,26,35,41,43,47], they can be classified into three basic approaches: (1) experimental, (2) physical, and (3) fundamental [23,36,38,46].The first one is based on the designer extensive practice in scaling up processes [5,23,26,38,55], the second one uses dimensionless numbers, variables or relations to bond the same process at different scales [43,50] and the third one involves proper modeling of the process under consideration [17,29,30,38].Here, both experimental and physical approaches are known as traditional scale-up methods, which have been applied to a wide range of biotechnological processes since the 1960s [10,18,21,24,37,46,54].Nevertheless, latent-based multivariable scale-up methods (part of the experimental approach) have recently emerged as parallel solution to bioprocesses scale-up problem [14].This approach involves the use of available process data and fundamental knowledge from certain plant (e.g.pilot-scale unit) to obtain black-box models for a second plant (e.g.industrialscale unit) [49].This method pillar is the definition of the process latent variables that consider the relation of common variables between the plants as well as the relation between all the variables within each plant [13].Finally, the fundamental approach has only been explored in the last few years due to past limitations in solving complex models [4,17,26,38]. Bioprocesses scale-up has been commonly faced within the experimental approach, using empirical rules that intend to keep constant some key process parameter as the scale is increased [17,21,37].Here, the most popular parameters susceptible to be fixed are volumetric power consumption [26,54], volumetric mass transfer coefficient [10,47,52], impeller tip speed [26,54], Reynolds number [17,26], circulation time [55], mixing time [26,46] and dissolved oxygen concentration [17].Even though these criteria represent a simple scale-up procedure, each one is suitable for a narrow range of processes and operating conditions.Moreover, there is no agreement of which of them should be used for each individual problem [17,47], evidencing that the scale-up remains as an open question in the development of the biotechnology industry. Therefore, seeking for a general methodology to scaleup fed-batch bioprocesses, this work takes advantage of the singular value decomposition (SVD) of the process Hankel matrix to determine an Output Impactability Index.This index enables both the quantification of each dynamics significance within the process and the establishment of the critical point of the fed-batch operating trajectory where the process should be scaled up.The methodology herein proposed is inspired on the procedure developed in [30] for chemical batch reactors, focusing on the particular characteristics of aerobic fed-batch bioprocesses and providing a theoretical justification, framed under the fundamental scale-up approach, for maintaining the same volumetric mass transfer coefficient throughout aerobic bioprocesses scale-up. 
In this work, the scale-up procedure is first developed for a general bioreactor, revealing the Hankel matrix singular value decomposition as a key tool in the scale-up problem of bioprocesses. Then, the procedure is applied to a polyhydroxybutyrate (PHB) fed-batch bioreactor, allowing the process to be scaled up to a 90 L unit and achieving similar process dynamic behavior and PHB production at both laboratory and industrial scales. Here, a simulation comparison between three empirical scale-up criteria (constant volumetric power consumption, impeller tip speed and Reynolds number) and the proposed procedure is evaluated.
Methods
Consider the matrix equations (1) and (2), which describe the mass distribution within a general bioreactor [11], where x ∈ R^n represents the state vector, y ∈ R^m the output vector consisting of a set of key variables from a design viewpoint, and x_in ∈ R^n the input concentration vector. K ∈ R^(n×l) is the matrix containing stoichiometric coefficients (yields) and r(·) ∈ R^l the reaction rate vector. q ∈ R^n represents the mass transfer from the liquid to the gaseous phase and f ∈ R^n the mass transfer from the gaseous to the liquid phase. Here, D ∈ R+ is the dilution rate, equal to the ratio of the input flow rate (F_in) to the liquid volume (V_L). From (1) and (2), process variables and parameters are classified into state variables (x), design variables (z), synthesis parameters (p) and design-variable-dependent functions (w). In this sense, the model can be reformulated as in (3) and (4), where z ∈ R^p is the design vector comprising all variables that can be freely varied by the designer, meaning that they can be changed according to the new scale requirements. z is composed of each species input concentration and the bioreactor initial liquid volume. In (3), p ∈ R^h represents the synthesis parameters, which are inherent parameters of the process such as each species yield, inhibition and saturation constants, maintenance coefficients, among others. Once the synthesis parameters are established at the current scale, they are considered constant for the scale-up. Also, the product Mz ∈ R^n represents the input concentration vector, where M ∈ R^(n×p) selects the corresponding input concentrations from z. Here, K is purely composed of process synthesis parameters and r depends on the state variables and synthesis parameters. Finally, q, f and D depend on x, z, p and may depend on each other (as shown in the next section), thus they can be considered design-variable-dependent functions (w). According to this, w depends on z, i.e. each of the entries of w can be written as an explicit function of z. Typically, parameters such as each species flow rate and the volumetric mass transfer coefficients belong to this vector. It is worth clarifying that z is a vector of fixed parameters that the designer may change according to scale increments; on the contrary, w is a vector of time-varying algebraic expressions that may change throughout scale changes.
Assuming that under usual working conditions this process operates along the operating trajectory x_N while driven by the system input z_N, and that the nonlinear system operates inside a small neighborhood of the nominal trajectory, the process dynamics described by (3) and (4) can be approximated by the first term of its Taylor series expansion, as in (5) and (6), where A_c, B_c and C_c are the Jacobian matrices of the system given by (7)-(9). Here, the subscript N represents the nominal operating trajectory. Notice that, since the bioprocess model is linearized along the operating trajectory rather than at a fixed point, the linear A_c, B_c and C_c matrices are, in general, time varying. B_c and C_c should be modified, using (10) and (11), to make both design and output variables dimensionless and normalized, guaranteeing that both are homogeneous in magnitude. A_c is not altered because the Hankel matrix is a tool that only considers inputs and outputs of the system [3], so any mathematical operation done over x will be annulled during the Hankel matrix calculation [2]. In (10) and (11), the subscripts max and min denote the maximum and minimum values of z_j and y_i in each case. This type of transformation is called "scaling" in [1,44]; however, the use of this word is avoided within this work to prevent any confusion with the words "scale-up" associated to scale increments. The transformation of B_c and C_c avoids inaccuracies in the model arising from disparate values and units in the process inputs and outputs, which always result in highly large and small entries in both matrices and, hence, in an ill-conditioned problem [2,25].
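As a rough illustration of (7)-(11), the sketch below obtains the Jacobians A_c, B_c and C_c numerically at one point of the nominal trajectory and then normalizes B_c and C_c with the input and output spans. The finite-difference step, the function signatures f(x, z) and g(x), and the exact form of the normalization are assumptions consistent with the description above, not the authors' own implementation.

import numpy as np

def jacobian(fun, v0, eps=1e-6):
    # forward-difference Jacobian of fun(.) with respect to v, evaluated at v0
    f0 = np.atleast_1d(fun(v0))
    J = np.zeros((f0.size, v0.size))
    for j in range(v0.size):
        v = v0.astype(float).copy()
        v[j] += eps
        J[:, j] = (np.atleast_1d(fun(v)) - f0) / eps
    return J

def linearize_and_normalize(f, g, xN, zN, y_span, z_span):
    # f(x, z): right-hand side of (3); g(x): output map (4); xN, zN: nominal point
    Ac = jacobian(lambda x: f(x, zN), xN)      # state Jacobian, in the spirit of (7)
    Bc = jacobian(lambda z: f(xN, z), zN)      # design-variable Jacobian, as in (8)
    Cc = jacobian(g, xN)                       # output Jacobian, as in (9)
    # dimensionless, magnitude-homogeneous inputs/outputs, one common way to apply (10)-(11):
    Bc_n = Bc * z_span                         # scale columns by (z_max - z_min)
    Cc_n = Cc / y_span[:, None]                # scale rows by (y_max - y_min)
    return Ac, Bc_n, Cc_n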
Here, subscript N represents the nominal operating trajectory.Notice that since the bioprocess model is linearized along the operating trajectory, rather than a fixed point, linear A c ; B c and C c matrices are, in general, time varying.B c and C c should be modified, using (10) and (11), to make both design and output variables dimensionless and normalized, guaranteeing for both of them to be homogeneous in magnitude.A c is not altered because the Hankel matrix is a tool that only considers inputs and outputs of the system [3], so any mathematical operation done over x will be annulled during the Hankel matrix calculation [2]. where subscripts max and min are the maximum and minimum values of z j and y i in each case.This type of transformation is called ''scaling'' in [1,44].However, the use of this word is avoided within this work to prevent any confusion when mentioning the words ''scale-up'' associated to scale increments.B c and C c transformation enables avoiding inaccuracy in the model from disparate values and units in process inputs and outputs, which always results in highly large and small entries into both matrices and, hence in an ill-conditioned problem [2,25]. The linear model consisting of ( 5) and ( 6) is conveniently discretized as shown: Here, are the process discrete matrices.In addition, the discrete model composed of ( 12) and ( 13) can be rewritten as shown in ( 14) [2,28,38]. Singular values of the Hankel matrix are closely related to the controllability and observability of the system [51,53].Therefore, SVD of H as shown in (18) provides additional information regarding the controllability and observability of the process, which can be used to quantify the importance of each state in the corresponding input-output system described by H [2,15]. where the matrices U 2 R nmÂnm and V 2 R npÂnp are the column (output) and row (input) spaces of H: Also, the diagonal elements of R 2 R nmÂnp are the singular values (r ii ) of H: From a physical perspective, the matrix of singular values (R) provides the information intensity of the system represented by H; where the highest singular value contains most of the system information [34].This means that if the Hankel singular values decrease rapidly, most of the input-output behavior is provided by the first few states [15,57].Therefore, by means of singular values analysis, the information contained in H can be reconstructed just by representing the system with the input-output variables related to the highest singular values [2,53]. Given that SVD of the Hankel matrix determines in a qualitative way whether a process is controllable/observable or not, as a way to quantify these process information extracted from (18), the output impactability index of each output variable (OII y k ) was introduced in [2].Considering that U represents the output space of H as can be noticed from (18), and hence each column of U is related to one output (left singular vector) [34,53], OII y k is defined as (19).The rationale behind this index is based on the concept of Euclidean norm, therefore, the sum of each squared output singular vector entries related to each output at each time instant U 2 kþms;i ðjÞ weighted by the corresponding squared singular value r 2 ii ðjÞ determines the importance of each output variable in the process [2]. ii ðjÞ where k ¼ 1; 2; . ..; m and r is the rank of H, i.e. 
the number of non-zero σ_ii. It is the system order and defines the dimension of the controllable and observable subspace [56]. OII_y_k represents the impactability of the process design variables (z), as a whole, over a given kth output variable (y_k). Here, the most impacted output variable (main dynamics) is the y_k with the highest OII, and it corresponds to the governing dynamics of the process. Therefore, based on the OII calculation, a quantitative hierarchical relation among the output variables of the process can be established. This relation is here called the dynamics hierarchy, and it determines the degree of importance of each dynamics along the process. Besides, based on the determination of the dynamics hierarchy, the point of the fed-batch trajectory where the process should be scaled up is established, called here the critical point of the operating trajectory. This point is determined from the OII curve for the main dynamics (governing dynamics) as the maximum value of the index for such dynamics. Given that the main dynamics is the output variable with the highest OII, slight changes in such dynamics may affect the process evolution considerably; therefore, the point where the OII reaches its maximum value represents the time when the process output variables are most affected by the design variables, meaning that this is the best point of the operating trajectory for scaling up the process. On the other hand, the critical point also represents the time instant of the operating trajectory where specific process requirements are maximum, e.g. where the mass transfer is governing the overall process progress. This is because, given that z is the vector comprising all variables that can be freely varied by the designer during the scale-up and that the main dynamics corresponds to the governing process dynamics, the critical point corresponds to the time when the process requirements are maximum according to the current values of the design variables. So, for any point with an OII less than the OII computed at the critical point, the process requirements are fulfilled, considering that those requirements were fulfilled at the critical point. The idea underlying this proposal is to establish a new unit design in agreement with the process dynamics. To do so, the OII at the critical point must remain constant as the scale is increased and each design-variable-dependent function (w) should be given by a valid equation at both scales, i.e. an expression that maintains the OII of the critical point invariant throughout the scale-up. Bearing this in mind, based on the implementation of this methodology, it can be evaluated whether a certain scale-up criterion will deliver a new scale design with dynamics similar to the current scale. Therefore, if the OII value at the critical point remains constant throughout the scale-up, the current scale dynamic behavior is transferred to the new scale. As a result, the potential of this index can be exploited to improve the scale-up problem of biotechnological processes, as shown in the next section.
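Before moving to the application, the construction of H and of the index in (15)-(19) can be sketched numerically as follows. A minimal NumPy sketch is given below; the number of block rows, the use of H = O·C and the tolerance for the numerical rank are assumptions consistent with the description above rather than the authors' exact implementation.

import numpy as np

def hankel_oii(A, B, C, n_blocks=None):
    # A, B, C: discrete-time matrices at one point j of the trajectory
    # (B and C already normalized as in (10)-(11))
    n_x = A.shape[0]
    m = C.shape[0]
    if n_blocks is None:
        n_blocks = n_x
    # extended observability and controllability matrices, in the spirit of (15)-(16)
    O = np.vstack([C @ np.linalg.matrix_power(A, s) for s in range(n_blocks)])
    Cc = np.hstack([np.linalg.matrix_power(A, s) @ B for s in range(n_blocks)])
    H = O @ Cc                                   # Hankel matrix, (17)
    U, sv, Vt = np.linalg.svd(H)                 # (18): H = U Sigma V^T
    r = int(np.sum(sv > 1e-10 * sv[0]))          # numerical rank = order of the
                                                 # controllable/observable subspace
    # Output Impactability Index, (19): for output k, sum the squared entries of the
    # left singular vectors tied to that output, weighted by the squared singular values
    oii = np.zeros(m)
    for k in range(m):
        rows = [k + m * blk for blk in range(n_blocks)]
        oii[k] = float(np.sum((U[rows, :r] ** 2) * (sv[:r] ** 2)))
    return oii, r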
Results and discussion
The process of polyhydroxybutyrate (PHB) production comprises two stages [6,19,32]: (i) biomass growth and (ii) PHB production, as can be seen in Fig. 1 (Fig. 1: fed-batch bioreactor process). During the initial stage, the required nutrients that enable biomass growth (carbon, nitrogen and oxygen sources) are supplied, allowing the biomass concentration to rise up to the desired level [16]. In the second stage, the nitrogen source input is stopped, disrupting biomass growth and allowing the excess carbon to be used in the PHB production [32]. Here, it is assumed that the kinetic mechanism described by (20)-(22) governs the PHB synthesis [33], where (20) describes the biomass growth on glucose, (21) the biomass growth on PHB and (22) the PHB production. In (20)-(22), substrate, nitrogen, oxygen, biomass, PHB and carbon dioxide are represented by S, N, O, X, P and CO2, respectively. The yield coefficients involved in (20)-(22), together with all variables and parameters within the model, are defined in the Nomenclature section. The PHB production model, (23)-(45), is borrowed from [33], but includes the oxygen dynamics in both the liquid and gaseous phases [8,40]. From (23)-(29), it can also be noticed that the mathematical structure of the proposed model belongs to the general matrix model of (1) and (2). Substrate, nitrogen and air flow rates are calculated using (43)-(45), respectively, where (43) and (44) are open-loop control laws that maintain both substrate and nitrogen concentrations constant. A practical application of this control strategy can be seen in [32]. On the other hand, (45) is a closed-loop law that regulates the dissolved oxygen concentration at O_L,sp, where e(t) = O_L,sp − O_L, considering that O_L,sp = 0.55 O_L^H for the first stage and O_L,sp = 0.30 O_L^H for the second stage. Figure 2 shows the dynamic evolution of the most important process variables compared with the experimental data reported in [33], where X_t = X + P and the subscript exp represents the experimental data (Fig. 2: state variable profiles at the current scale). Here, it can be noticed that the proposed model represents the experimental data, as was reported in [33]. Values of the model parameters are reported in [33]; the additional ones related to the oxygen dynamics are listed in Table 1 (Table 1, additional model parameters: K_O = 0.000118 g/L [42]; m_O = 0.008987; N_p = 5, dimensionless [12]; O_G,F = 0.269 g/L [45]). Notice that, since O_G,F is the oxygen concentration in the air feeding stream to the bioreactor, such concentration could be increased either by mixing the air stream with pure oxygen or by feeding pure oxygen to the bioreactor.
The process model is linearized along the operating trajectory as shown in (5) and (6), where A_c, B_c and C_c are given by (7)-(9). Output and input variables are normalized using (10) and (11), considering that y_i,min and y_i,max are the minimum and maximum values of each y_i along the process and that z_j,min and z_j,max are ±10% of their nominal values, since minor changes are expected for these limits while the scale is increased because the process dynamic behavior is transferred from the current to the new scale using the proposed methodology [30]. Then, the model is discretized as shown in (12) and (13) and O, C and H are computed using (15)-(17). Subsequently, H is factorized via SVD using (18) to, finally, compute the OII_y_k using (19), where rank(H) = r = 5. Here, the output variables are considered to be the process state variables in order to determine which of them is the most important from a design viewpoint. It is worth highlighting that the SVD of H allows the designer to determine the effect of the design (input) variables over each state (output) variable.
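A sketch of how these steps could be chained along the fed-batch trajectory is given below, reusing the linearize_and_normalize and hankel_oii functions sketched earlier. The sampling of the nominal trajectory, the simple Euler discretization standing in for (12)-(13), and the ±10% input spans follow the description above, but the details are assumptions and not the authors' code.

import numpy as np

def oii_profiles(f, g, x_traj, z_nominal, t_grid, y_min, y_max, dt):
    # x_traj: nominal state trajectory sampled at the times in t_grid
    # z_nominal: nominal design vector; input spans taken as +/-10% of the nominal values
    z_span = 0.2 * np.abs(z_nominal)
    y_span = y_max - y_min
    m = y_span.size
    profiles = np.zeros((len(t_grid), m))
    for j, xN in enumerate(x_traj):
        Ac, Bc, Cc = linearize_and_normalize(f, g, xN, z_nominal, y_span, z_span)
        Ad = np.eye(Ac.shape[0]) + Ac * dt     # Euler stand-in for the discretization (12)
        Bd, Cd = Bc * dt, Cc                   # and (13); an exact ZOH could be used instead
        profiles[j], _ = hankel_oii(Ad, Bd, Cd)
    governing = int(np.argmax(profiles.max(axis=0)))   # dynamics hierarchy: highest OII
    j_crit = int(np.argmax(profiles[:, governing]))    # critical point of the trajectory
    return profiles, governing, t_grid[j_crit]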
Fig. 3 shows the OII_y_k profiles, where it can be seen that O_L is the dynamics most impacted by the design variables of the process (the variable with the highest index) and that the critical point of the batch is located at the end of the first process stage, where the oxygen requirement of the process is maximum. This point corresponds to OII_O_L = 0.019 at t = 31 h. Therefore, the process should be scaled up at this point using a valid expression for each design-variable-dependent function of w. Here, F_S, F_N and F_O are computed with (43)-(45). On the other hand, K_La is computed using four criteria for the sake of comparison: (1) process requirements (herein proposed), (2) volumetric power consumption, (3) impeller tip speed and (4) Reynolds number.
According to the first criterion, K_La should be established by means of the amount of oxygen required for biomass growth. Therefore, (50) fulfills this condition because this expression is based on the mass balance for the oxygen in the gaseous phase, considering the oxygen demand in the liquid phase. Notice that K_La depends on F_O. The second criterion determines the new scale unit design holding the volumetric power consumption constant from the current to the new scale [17,47]. In this sense, (51) is used to calculate the power requirement and (39) to establish the volumetric mass transfer coefficient at the new scale [26,54]. In the third criterion, it is stated that the impeller tip speed (V_t) must remain constant as the scale is increased. Considering that V_t = πN_iD_i [41,55], (52) is used to determine the impeller speed and (39) to compute K_La at the new scale [17]. The last criterion establishes that the process should be scaled up maintaining the same Reynolds number (Re) [17,26]. Taking into account that Re is given by (53), this empirical rule can be simplified into (54). Here, K_La is also computed with (39). Equations (50)-(54) are evaluated at the critical point. It must be noticed that (50) does not fix the reactor dimensions, in opposition to (39), which implies a specific geometry of the process unit. Therefore, the reactor dimensions are similar to the current scale design when using the empirical rules, meaning that K_La is fixed by the unit geometry, while when using (50) the process unit can be completely redesigned at the new scale in agreement with the oxygen requirements (process requirements). Bearing this in mind, Table 2 summarizes the volumetric oxygen transfer coefficient, the reactor dimensions and the OII_O_L for all the previously described cases at the process critical point. Here, it is worth highlighting that K_La and OII_O_L keep the same values at both scales when the process is scaled up maintaining the process requirements, supporting the widely used criterion of keeping K_La constant [7,10,17,22,47,52]. It can also be noticed from Table 2 that a larger K_La value was estimated when the volumetric power consumption is kept constant. This means that a larger process unit than the required one was designed using this criterion. For the other two cases (constant V_t and Re), on the other hand, a smaller K_La value was established at the new scale, meaning that a smaller process unit than the required one was designed. This deduction can also be established from Figs. 4 and 5, where a comparison of the dissolved oxygen dynamics and the air flow rate is made for all cases.
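To make the comparison concrete, the sketch below evaluates candidate K_La values in a way consistent with the description: the process-requirement value follows from the oxygen demand that must be transferred at the critical point (standing in for (50)), while the empirical criteria first rescale the impeller speed for a geometrically similar tank and then evaluate a van't Riet-type correlation standing in for (39). The correlation coefficients are generic literature values for coalescing broths, not the paper's fitted a1-a3, and the whole block is an illustrative assumption rather than the authors' equations.

import numpy as np

def kla_required(our, o2_sat, o2_setpoint):
    # oxygen transfer needed to match the demand at the critical point:
    # K_La * (O_L* - O_L,sp) = OUR  ->  K_La = OUR / (O_L* - O_L,sp)
    return our / (o2_sat - o2_setpoint)

def impeller_speed_new_scale(N1, D1, D2, rule):
    # classical similarity rules for geometrically similar stirred tanks (turbulent regime)
    if rule == "power_per_volume":    # P ~ N^3 D^5 and V ~ D^3  ->  N^3 D^2 = const
        return N1 * (D1 / D2) ** (2.0 / 3.0)
    if rule == "tip_speed":           # V_t = pi N D = const
        return N1 * (D1 / D2)
    if rule == "reynolds":            # Re = rho N D^2 / mu = const
        return N1 * (D1 / D2) ** 2
    raise ValueError(f"unknown rule: {rule}")

def kla_vant_riet(power_per_volume, superficial_gas_velocity, a=0.026, b=0.4, c=0.5):
    # van't Riet-type correlation, K_La = a (P_g/V)^b v_s^c, used here only as a
    # stand-in for Eq. (39); a, b, c are textbook values, not the paper's coefficients
    return a * power_per_volume ** b * superficial_gas_velocity ** c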
From Fig. 4, it can be seen that the curves for the process-requirements and constant-power cases overlap the current scale curve. For the former, it is demonstrated that the process requirements are identical at both scales (see K_La in Table 2), revealing that the oxygen dynamics governs the overall rate of the process at both scales (Fig. 3: output impactability index at the current scale). However, although the oxygen dynamics is similar for the latter case (constant power), Fig. 5 shows that the required air flow rate is less than that established at the current scale, confirming that the unit was oversized (see K_La in Table 2). It can also be noticed in Fig. 5 that, when scaling up the process maintaining the impeller tip speed, the air flow rate reaches its maximum value during the last period of the first process stage, deteriorating the dissolved oxygen behavior (see Fig. 4) and corroborating that this unit was undersized (see K_La in Table 2). During this time, any disturbance introduced to the process cannot be countered by the controller. Here, the same controller parameters were used for all cases. On the other hand, regarding the case of constant Reynolds number, the air flow rate reaches its maximum value from the beginning of the process, meaning that this unit was also undersized (see K_La in Table 2). This process behavior demonstrates that maintaining the same Reynolds number as a scale-up criterion hardly ever results in an adequate unit design, as was stated in [26], since by means of this rule the estimated gassed input power is always lower than that effectively required. In addition, a comparison of the ratio of oxygen concentration in the gas phase (O_g) to PHB concentration (P) for all cases is made in Fig. 6. It can be noticed that the new scale profile for the constant power case is highly different from the current scale curve. This behavior ratifies that the unit was oversized. Here, given that this ratio is smaller at the new scale than at the current one, it is possible to conclude that K_La was overestimated when using this criterion. In opposition, O_g/P is greater for the other two cases (constant tip speed and Reynolds number) than the current scale profile, confirming that both units were undersized and that K_La was underestimated when using these criteria. Finally, from Fig. 6, it can be seen that the profile for the process requirements case overlaps the current scale curve, revealing that the process has the same dynamic behavior at the new scale when it is scaled up using the proposed scale-up approach (see also Figs. 4, 5). Here, the process reaches at the new scale the same ratio of oxygen concentration in the gas phase to PHB concentration as at the current scale, which demonstrates that maintaining the same value of the volumetric oxygen transfer coefficient when increasing the process scale is the best method for scaling up this type of process.
Conclusions
This work presents a general approach to the scale-up of aerobic fed-batch bioprocesses, based on a model of the process dynamics. The methodology herein proposed is based on the singular value decomposition of the process Hankel matrix, taking advantage of the relation of the Hankel singular values with the process controllability and observability. By means of this proposal, the designer is able to determine the most relevant process variables through the calculation of the Output Impactability Index. This index allows the designer to determine the point where the process should be scaled up to fulfill its dynamic requirements. Also, by means of the Output Impactability Index calculation, it is possible to establish whether two or more designed units can carry out the same process with the same performance targets. A fed-batch fermenter was scaled up from 3 to 90 L in a realistic simulation environment. As a result, the scale factors for keeping the same ratio of oxygen concentration in the gas phase (O_g) to polyhydroxybutyrate (PHB) concentration, with the same dissolved oxygen dynamics, at both scales were found. Here, the dynamics most impacted by the design variables is the dissolved oxygen concentration (O_L), confirming that the oxygen dynamics governs the overall rate of the bioprocess. In addition, it was established that the volumetric oxygen transfer coefficient remains constant through the scale-up, which also ratifies that the oxygen transport from the gas/liquid interface to the bulk of the liquid is a key phenomenon when scaling up this kind of fermentation.
(Figure and table captions: Fig. 4, comparison of the dissolved oxygen concentrations at the current and new scale; Table 1, additional model parameters; Table 2, comparison of the new and current scale unit design for all criteria.)
2018-04-03T05:47:54.167Z
2015-01-30T00:00:00.000
{ "year": 2015, "sha1": "3c841f3cacea0d83f67b1ea95b7fdc414b68d931", "oa_license": "CCBY", "oa_url": "http://sedici.unlp.edu.ar/bitstream/handle/10915/134699/Documento_completo.pdf?sequence=1", "oa_status": "GREEN", "pdf_src": "Anansi", "pdf_hash": "a3212882597e282571cea605c4de0caa0ca75bd4", "s2fieldsofstudy": [ "Engineering", "Environmental Science", "Chemistry" ], "extfieldsofstudy": [ "Medicine", "Engineering" ] }
212843330
pes2o/s2orc
v3-fos-license
Maturation and physiological quality of IAC-863 Rangpur lime seeds

Abstract There is a growing demand for high quality seeds to obtain citrus rootstocks. Normative Instruction 48 (MAPA) of September 24, 2013, requires a minimum of 50% germination for the marketing of citrus seeds. Harvest season is one of the stages of seed production with great importance to ensure quality, which makes knowing the maturation process an important step. Thus, the objective of this study was to monitor physicochemical changes in IAC-863 Rangpur lime fruits in order to characterize the physiological maturity of seeds, and to define the ideal harvest point in order to obtain seeds with high physiological quality for rootstock production. Physicochemical analysis of fruits (mass loss, color, soluble solids and acidity) and analysis of seeds (water content, germination and emergence) were performed. Higher germination results were observed in seeds obtained from fruits with higher color index and soluble solids content. The storage of IAC-863 Rangpur lime fruits after harvest increases the germination rate, especially in mid-season fruits.

Introduction The propagation of citrus seedlings is made by the grafting technique, using seeds for the production of rootstocks. In this context, the demand for high quality seeds to obtain these rootstocks is increasing (Zucoloto et al., 2011). Normative Instruction 48 of the Ministry of Agriculture, Livestock and Supply (MAPA) of 24 September 2013 requires that citrus seeds be analyzed in accordance with the Seed Analysis Rules and be marketed only with at least 50% germination. The harvesting time of fruits for seed extraction is among the factors that influence seed quality; thus, knowledge of the ripening process, especially with regard to the definition of the ideal harvesting time, seeks to minimize seed deterioration caused by prolonged permanence in the field and to increase the germination rate, since early harvest will lead to the presence of immature seeds (Vidigal et al., 2009; Silip et al., 2010). In some fleshy fruit species, studies have shown that seeds kept in the fruit for a certain period after harvest continue the ripening process and may increase their physiological quality (Pereira et al., 2014). Therefore, post-harvest storage of fruits before seed extraction can be advantageous, since it allows early harvesting of fruits, avoiding the risks associated with possible unfavorable field conditions (Castro et al., 2008). Post-harvest storage of fruits prior to seed extraction also reduces the number of harvests: fruits at various ripening stages are harvested simultaneously, seeds are extracted immediately from ripe fruits, and the rest are submitted to storage. During this period, seeds not yet fully ripe would complete their maturation, while those that were already mature would have their quality preserved by remaining in osmotic equilibrium, i.e., with a high degree of humidity (Dias et al., 2006). IAC-863 Rangpur lime is used as rootstock in most Brazilian citrus orchards. According to the Agricultural Defense Coordination, 4,546,104 Cravo IAC-863 lime seedlings were sold in 2018 in the state of São Paulo, representing 34.3% of the total rootstock seedlings used. However, there are no studies that have evaluated and correlated fruit characteristics with the physiological quality of seeds, and there are no indications of the ideal point for harvesting and extraction. In addition, the physicochemical changes that citrus fruits undergo during storage, which may influence seed quality, are not known. Thus, the aim of this work was to monitor the physicochemical alterations in IAC-863 Rangpur lime fruits, seeking to characterize the physiological maturity of seeds and to define the ideal point for harvesting fruits for seed extraction.

Data analysis Evaluation data were submitted to analysis of variance. Averages between harvest times were compared by the Tukey test at 5% probability. Storage times were analyzed by regression.

Mass loss, color and chemical quality of juice Mass loss was observed during cold storage of IAC-863 Rangpur lime fruits (Figure 1). However, ripe fruits suffered greater loss compared to the other harvest seasons, and mass loss is closely linked to deterioration, rapid respiration rate and the development of physiological disorders (Cao et al., 2010). According to Alves et al. (2010), the transpiration process continues after harvest; thus, fruits lose water, decreasing their mass, volume and visual quality. In the case of citrus, this loss comes mainly from the epidermis.
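The Data analysis paragraph above specifies one-way analysis of variance with Tukey comparisons at 5% probability across harvest times, and regression over storage times. As a hedged illustration only (the study's raw data are not reproduced in this record, so every germination value and variable name below is hypothetical), a minimal Python sketch of that workflow could look like this:

```python
# Hypothetical illustration of the analysis described under "Data analysis":
# one-way ANOVA + Tukey test (5%) across harvest times, and a simple
# regression of germination on fruit storage time. All values are made up.
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Germination (%) for four replicates per harvest time (days after anthesis)
germination = {
    180: [38, 42, 40, 36],
    210: [55, 60, 58, 62],
    240: [70, 74, 68, 72],
}

# One-way ANOVA across harvest times
f_stat, p_value = f_oneway(*germination.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Tukey test at 5% probability for pairwise comparisons of harvest times
values = np.concatenate(list(germination.values()))
groups = np.repeat(list(germination.keys()), [len(v) for v in germination.values()])
print(pairwise_tukeyhsd(values, groups, alpha=0.05))

# Linear regression of germination on storage time (days) for one harvest season
storage_days = np.array([0, 15, 30, 45])
germ_storage = np.array([55, 61, 66, 64])   # hypothetical means
slope, intercept = np.polyfit(storage_days, germ_storage, 1)
print(f"Linear fit: germination = {intercept:.1f} + {slope:.2f} * days")
```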
The acidity of IAC-863 Rangpur lime fruits is formed in the early stages of growth, when acids accumulate and reach their maximum content, which remains constant from that point. The change in acid concentration that occurs during ripening is mainly due to dilution caused by fruit growth (Guardiola, 1999). With cold chamber storage, the acidity of fruits harvested at 210 and 240 days after anthesis remained constant (Figure 4), unlike fruits harvested at 180 days after anthesis, in which titratable acidity increased, possibly because these fruits did not reach the maximum acid accumulation point before being stored. According to Medina et al. (2005), in the process of fruit ripening, respiration declines slowly in the later development stages and ethylene evolution is extremely slow during ripening; citrus fruits are therefore classified as fruits with non-climacteric behavior. Seed water content At time 0, the water content of seeds extracted from fruits harvested 240 days after anthesis was lower than that of seeds extracted from fruits harvested 180 and 210 days after anthesis (Figure 6). In the three harvest seasons, the water content of seeds was between 30 and 45%. According to Carvalho and Nakagawa (2012), recalcitrant seeds reach physiological maturity with water content between 50 and 70%, while in orthodox seeds maturity occurs at lower water contents, between 30 and 50%, similar to the water contents observed for IAC-863 Rangpur lime seeds, although these are not classified as orthodox seeds. Seed germination and emergence There was an increase in germination percentage (Table 1). The process of fruit ripening, observed by changes in juice acidity and peel color at different harvest seasons and during fruit storage, increased the seed germination rate. Thus, these tests may be important as indicators of the seed germination response. Table 1. Simple correlation coefficients (r) between acidity and color index results of IAC-863 Rangpur lime fruits and greenhouse germination, determined by the t-test at 5% probability (Pr). Conclusions Higher germination results of IAC-863 Rangpur lime seeds are observed in seeds obtained from fruits with higher color index and soluble solids content (240 days after anthesis: end-of-season fruits, yellow peel). The storage of IAC-863 Rangpur lime fruits after harvest increases the germination rate, especially in fruits harvested 210 days after anthesis (mid-season fruits: green/yellow peel).
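Table 1 above reports simple Pearson correlation coefficients (r) between fruit acidity or color index and greenhouse germination, tested by a t-test at 5% probability. As an editorial illustration only, with made-up values standing in for the paper's data, the same statistic can be computed as follows:

```python
# Hypothetical illustration of the simple correlations summarized in Table 1:
# Pearson r between fruit measurements and greenhouse germination, with the
# p-value of the t-test on r (5% probability). The numbers are invented.
from scipy.stats import pearsonr

acidity     = [6.1, 5.8, 5.5, 5.2, 4.9, 4.7]   # % citric acid (hypothetical)
color_index = [-8, -5, -2, 1, 4, 7]            # peel color index (hypothetical)
germination = [35, 42, 50, 58, 65, 71]         # % germination (hypothetical)

for name, x in [("acidity", acidity), ("color index", color_index)]:
    r, p = pearsonr(x, germination)
    flag = "significant" if p < 0.05 else "not significant"
    print(f"{name}: r = {r:+.2f}, Pr = {p:.3f} ({flag} at 5%)")
```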
2020-02-06T09:12:02.903Z
2019-12-31T00:00:00.000
{ "year": 2019, "sha1": "4ce5dd84d737b708c5e12f24df2f8593f74c53b8", "oa_license": "CCBYNC", "oa_url": "https://www.comunicatascientiae.com.br/comunicata/article/download/3161/867", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "86a4accbc38e8fb737f613ef1d1484faa5f543c9", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Biology" ] }
195667926
pes2o/s2orc
v3-fos-license
The MetaCyc database of metabolic pathways and enzymes Abstract MetaCyc (https://MetaCyc.org) is a comprehensive reference database of metabolic pathways and enzymes from all domains of life. It contains more than 2570 pathways derived from >54 000 publications, making it the largest curated collection of metabolic pathways. The data in MetaCyc is strictly evidence-based and richly curated, resulting in an encyclopedic reference tool for metabolism. MetaCyc is also used as a knowledge base for generating thousands of organism-specific Pathway/Genome Databases (PGDBs), which are available in the BioCyc (https://BioCyc.org) and other PGDB collections. This article provides an update on the developments in MetaCyc during the past two years, including the expansion of data and addition of new features. INTRODUCTION MetaCyc (https://MetaCyc.org) is a highly curated reference database of metabolism from all domains of life. It contains data about chemical compounds, reactions, enzymes and metabolic pathways that have been experimentally validated and reported in the scientific literature (1). Most data in MetaCyc concerns small molecule metabolism, although an increasing amount of macromolecular metabolism (e.g. protein modification) is also present. MetaCyc is a uniquely valuable resource due to its exclusively experimentally determined data, intensive curation, extensive referencing, and user-friendly and highly integrated interface. It is commonly used in various fields, including biochemistry, enzymology, metabolomics, genome and metagenome analysis, and metabolic engineering. In addition to its role as a general reference on metabolism, MetaCyc can be used by the PathoLogic component of the Pathway Tools software (2,3) as a reference database to computationally predict the metabolic network of any organism that has a sequenced and annotated genome (4). During this partially automated process, the predicted metabolic network is captured in the form of a Pathway/Genome Database (PGDB). Pathway Tools also provides editing tools that enable improving and updating these computationally generated PGDBs by manual curation. SRI has used MetaCyc to create almost 11 000 PGDBs (as of August 2017), which are available through the BioCyc (https://BioCyc.org) website (5). In addition, many groups outside SRI have generated thousands of additional PGDBs (6)(7)(8)(9)(10). Some of these groups have further improved those databases by performing their own curation. Interested scientists may adopt any of the SRI PGDBs through the BioCyc website for further curation (https://biocyc.org/BioCycUserGuide.shtml#node sec 6). EXPANSION OF METACYC DATA Since the last Nucleic Acids Research publication (two years ago) (1), we added 219 new base pathways (pathways comprised of reactions only, where no portion of the pathway is designated as a subpathway) and four superpathways (pathways composed of at least one base pathway plus additional reactions or pathways), and updated 112 existing pathways, for a total of 335 new and revised pathways. The total number of base pathways grew by 9%, from 2363 (version 19.1) to 2572 (version 21.1) (the total increase is <219 pathways, because some existing pathways were deleted from the database during this period). The number of enzymes in the database grew by 10%; reactions by 13%; chemical compounds by 13%; citations by 18%; and the number of referenced organisms increased by 7% (currently at 2883). 
See Table 1 for a list of species with >20 experimentally elucidated pathways in MetaCyc, and Table 2 for the taxonomic distribution of all MetaCyc pathways. While describing in this limited space the various additions to MetaCyc data during the past two years is impossible, the following partial list of new or completely revised pathways may illustrate the breadth of topics that have been covered during this time. • Bacteriochlorophyll biosynthesis. We added pathways for the biosynthesis of all major forms of bacteriochlorophyll: bacteriochlorophyll a; bacteriochlorophyll b; bacteriochlorophyll c; bacteriochlorophyll d; and bacteriochlorophyll e. • Bioluminescence. We added five new pathways that describe bioluminescence in bacteria, jellyfish, corals, fireflies and dinoflagellates. • Heme degradation. We expanded our coverage of heme degradation from one to seven pathways. • Protein modification. We added pathways describing protein S-nitrosylation and denitrosylation; SAMPylation; NEDDylation; pupylation and depupylation; and lipoylation. We also added pathways that describe the N-end, Ac/N-end, and Arg/N-end rules, which determine protein degradation. • Short-chain alkane and alkene degradation. New pathways were added for the degradation of butane; methyl tert-butyl ether; propane; ethane; isoprene and 2-methylpropene. • Mycobacterium tuberculosis pathways. We added pathways for the biosynthesis of dimycocerosyl phthiocerol; dimycocerosyl triglycosyl phenolphthiocerol; mycobacterial sulfolipid; P-HBAD; -sulfo-II-dihydromenaquinone-9; phenolphthiocerol; glycogen (from α-maltose 1-phosphate) and phosphatidylinositol mannoside. We also added pathways describing isoniazid activation, ethionamide activation, and protein pupylation and depupylation. • Archaeal pathways. We added new pathways that describe different mechanisms for the regeneration of the coenzyme B/coenzyme M mixed disulfide in methanogens, for the biosynthesis of factor 420 and factor 430, and for archaeal nucleoside and nucleotide degradation. • Human metabolism. New pathways describe alternative routes for the biosynthesis of the fatty acids (4Z,7Z,10Z,13Z,16Z)-docosa-4,7,10,13,16-pentaenoate, docosahexaenoate, arachidonate, and icosapentaenoate; the metabolism of bile acids and iso-bile acids; the biosynthesis and degradation of plasmalogen; the biosynthesis of the A, B, H and Lewis epitopes from both type 1 and type 2 precursor disaccharides; the modification of terminal O-glycans; the biosynthesis of i and I antigens; and the hydroxylation and glycosylation of procollagen. We have also added several pathways describing the biosynthesis of glycosphingolipids (different pathways describe the gala, ganglio, globo, lacto and neolacto series). • Plant metabolism. We performed major revisions in the areas of glucosinolate metabolism (13 new and revised pathways); jasmonic acid metabolism (four pathways); and cyanogenic glycoside biosynthesis (four pathways describing dhurrin, linamarin, lotaustralin and taxiphyllin, respectively). We also significantly revised our coverage of bitter acids biosynthesis (three pathways); pterocarpan phytoalexins biosynthesis (two pathways); camalexin biosynthesis; Amaryllidaceae alkaloids biosynthesis; prunasin and amygdalin biosynthesis; anthocyanin biosynthesis; and proanthocyanidin biosynthesis. Compounds The total number of compounds grew by 13%, from 12 362 (version 19.1) to 14 003 (version 21.1). Of these, 9442 compounds participate in reactions, and 13 725 have structures.
Most MetaCyc compounds also contain standard Gibbs free energy of formation (ΔfG°) values, most of which are computed by Pathway Tools using an algorithm developed internally that is based on techniques by Jankowski et al. (11) and Alberty (12). As of August 2017, a total of 13 760 compounds include these Gibbs free energy values. Reactions The total number of enzymatic reactions grew by 13%, from 12 701 (version 19.1) to 14 347 (version 21.1). The number of total reactions (including non-enzymatic) is 15 691. MetaCyc uses a reaction-balance-checking algorithm that checks not only for elemental composition but also for electric charge. Unlike many reaction resources available online, the vast majority of MetaCyc reactions are completely balanced, taking into account the protonation state of the compounds (which is the state most prevalent at pH 7.3). As of August 2017, MetaCyc contains 14 302 balanced reactions. The remaining 1389 reactions cannot be balanced due to assorted reasons (for example, a reaction may describe a polymeric process, such as the hydrolysis of a polymer of an undefined length, may involve an 'n' coefficient, or may involve a substrate that lacks a defined structure, such as 'an aldose'). MetaCyc reactions also contain standard change in Gibbs free energy (ΔrG°) values that are computed based on the ΔfG° values computed for compounds. As of August 2017, a total of 13 877 reactions include these Gibbs free energy values. Enzyme Commission numbers Curation of MetaCyc is conducted in close collaboration with the Enzyme Commission (EC) (13). During the curation process, MetaCyc curators come across thousands of enzymes that have not yet been classified by the EC. In addition, curation exposes errors in older existing EC entries. While curating MetaCyc content, curators prepare and submit new and revised entries to the EC, leading to the creation of hundreds of new and modified EC entries over the past two years. Many enzymes that have not been classified in the EC system are assigned 'M-numbers' in MetaCyc (see Figure 1), which are temporary numbers that indicate a well-characterized enzymatic activity that has not yet been classified by the EC (14). Our intention is to have as many M-numbers as possible eventually replaced by official EC numbers. SOFTWARE AND WEBSITE ENHANCEMENTS The following sections describe significant enhancements to Pathway Tools (the software that powers the BioCyc website) during the past two years that affect the MetaCyc user experience. Redesigned metabolite pages We have redesigned the Web metabolite (compound) pages to use a tabbed structure. The information shown on these pages is now divided into several tabs, including summary, ontology, reactions, and structure tabs. A 'Show All' tab displays all the information in one page (see Figure 2). Update notifications MetaCyc has a new capability to inform users of newly curated information in specified areas of interest. The update notifications are sent to users in a single email in conjunction with each of the three yearly MetaCyc releases. Users can define areas of interest in several ways: 1. By entering one or more specific pathways of interest 2. By defining a SmartTable listing pathways of interest 3. By entering a pathway class of interest. For example, after specifying the MetaCyc pathway class 'Sulfur Compounds Metabolism', users will receive updates about new or revised pathways that are classified under that class.
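The Reactions paragraph above describes two computations: checking that a reaction is balanced for both elemental composition and electric charge, and deriving ΔrG° for a reaction from the ΔfG° values of its compounds. The following Python sketch is an editorial illustration of those two ideas, not MetaCyc or Pathway Tools code; the compound records and their ΔfG° values are placeholders.

```python
# Illustrative sketch of (1) reaction balancing by elemental composition and
# charge, and (2) computing ΔrG° from the ΔfG° values of the participating
# compounds. Compound data below are placeholders, not MetaCyc values.
from collections import Counter

COMPOUNDS = {
    "D-glucose":  {"formula": {"C": 6, "H": 12, "O": 6}, "charge": 0, "dfG": -426.7},
    "D-fructose": {"formula": {"C": 6, "H": 12, "O": 6}, "charge": 0, "dfG": -427.1},
}

def side_totals(side):
    """Sum elemental composition and net charge over one side of a reaction.
    `side` maps compound name -> stoichiometric coefficient."""
    elements, charge = Counter(), 0
    for name, coeff in side.items():
        compound = COMPOUNDS[name]
        for element, count in compound["formula"].items():
            elements[element] += coeff * count
        charge += coeff * compound["charge"]
    return elements, charge

def is_balanced(reactants, products):
    """A reaction is balanced if elements and charge match on both sides."""
    return side_totals(reactants) == side_totals(products)

def delta_r_g(reactants, products):
    """ΔrG° = Σ coeff·ΔfG°(products) − Σ coeff·ΔfG°(reactants), in kJ/mol."""
    total = lambda side: sum(c * COMPOUNDS[n]["dfG"] for n, c in side.items())
    return total(products) - total(reactants)

reactants, products = {"D-glucose": 1}, {"D-fructose": 1}
print(is_balanced(reactants, products))          # True
print(round(delta_r_g(reactants, products), 1))  # -0.4 with the placeholder values
```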
To enter new update-notification requests, users log into their BioCyc account, navigate to the desired pathway, pathway class, or SmartTable page, and click the 'Get Email Notifications of Updates' command in the right-sidebar Operations menu. SmartTables SmartTables provide a powerful way for users to arrange and manipulate data in MetaCyc and other PGDBs. Although SmartTables are not a new feature in MetaCyc, we would like to mention them to ensure that all users are familiar with this powerful tool. SmartTables are spreadsheetlike structures that can contain both PGDB objects and other data such as numbers or text. Like a spreadsheet, a SmartTable is organized by rows and columns that users can add to or delete. A typical SmartTable contains a set of PGDB objects in the first column (e.g. a set of compounds generated by a search). The other columns contain properties of the object (e.g., the chemical composition of the compounds) or the result of a transformation (e.g. the reactions in which these compounds participate). While users can create their own SmartTables, several SmartTables are already available for users, including such tables as all compounds in MetaCyc, all pathways in Meta-Cyc, all polypeptides of MetaCyc, etc. (see Figure 3). You will find these special tables under the SmartTable menu (SmartTables → Special SmartTables). Protein sequence data Previously, one difference between the proteins curated in MetaCyc and those in organism-specific PGDBs was that MetaCyc proteins did not contain sequence information, preventing users from performing BLAST searches within MetaCyc. As of 2017, sequence data is available for all MetaCyc proteins that have links to the UniProt database (15), which in version 21.1 of MetaCyc, comprised 10,560 proteins (∼79% of all MetaCyc polypeptides). When browsing such a protein, users can now select the command 'Show Sequence at UniProt' from the Operations menu to display the sequence in FASTA format. In addition, BLAST searches have been enabled in MetaCyc, which is done by selecting the BLAST search command from the Search menu. The results of the BLAST search are provided in an html document that provides links to the MetaCyc pages of the candidate proteins, enabling users to quickly navigate their way from a protein sequence to pages describing reactions and pathways associated with related proteins. Search for reactions by substrates This command, which enables users to search for reactions by specifying one or more substrates, has been expanded to enable specifying on which side of the reaction different substrates appear (relative to each other). This type of search, which to the best of our knowledge is unavailable elsewhere, enables users to specify more complex search parameters. For example, searching separately for dechlorination reactions that utilize water (water on one side, chlorine on the other side) or dechlorination reactions that produce water (water and chlorine on the same side) is now possible. Set MetaCyc as the default database Users who employ MetaCyc most of the time (as opposed to other PGDBs) can now have MetaCyc automatically selected whenever they log into the BioCyc website. To do so, select My Account from the top right corner, click the 'Database Selection' tab, and then choose MetaCyc. SUBSCRIPTION MODEL FOR BIOCYC ACCESS In our previous papers in the database issue of Nucleic Acids Research, we described the MetaCyc database together with the BioCyc PGDB collection (5). 
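The 'Search for reactions by substrates' capability described above lets a query state whether two substrates must appear on the same side of a reaction or on opposite sides. The toy function below is not the MetaCyc interface or API; it only sketches those query semantics on hypothetical reaction records.

```python
# Toy sketch (not the MetaCyc API) of substrate-sidedness queries: substrates
# can be required on the same side of a reaction or on opposite sides.
def matches(reaction, same_side=None, opposite_sides=None):
    """reaction: {"left": set_of_compound_names, "right": set_of_compound_names}."""
    sides = (reaction["left"], reaction["right"])
    if same_side:
        a, b = same_side
        if not any(a in s and b in s for s in sides):
            return False
    if opposite_sides:
        a, b = opposite_sides
        if not ((a in sides[0] and b in sides[1]) or (a in sides[1] and b in sides[0])):
            return False
    return True

reactions = [
    # dechlorination that consumes water (water and chloride end up on opposite sides)
    {"left": {"an organochloride", "water"}, "right": {"an alcohol", "chloride", "H+"}},
    # dechlorination that produces water (water and chloride on the same side)
    {"left": {"an organochloride", "NADH", "O2"}, "right": {"a product", "chloride", "water"}},
]

print([matches(r, opposite_sides=("water", "chloride")) for r in reactions])  # [True, False]
print([matches(r, same_side=("water", "chloride")) for r in reactions])       # [False, True]
```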
As of 2017, SRI International has adopted a subscription-based model for BioCyc access. MetaCyc, as well as the EcoCyc PGDB [the PGDB for Escherichia coli K-12, (16)], remain freely available to all and do not require a subscription. Because of ongoing difficulties in securing government funds for database curation, we moved to a subscription-based model in the hope of generating funds that would permit us to curate high-quality databases for more organisms, such as important pathogens, biotechnology workhorses, model organisms and promising hosts for biofuels development. More information about the subscription model is available at http://www.phoenixbioinformatics.org/biocyc/index.html. HOW TO LEARN MORE ABOUT METACYC AND BIOCYC The MetaCyc.org website provides several informational resources, including an online guide for MetaCyc (http://www.metacyc.org/MetaCycUserGuide.shtml); a guide to the concepts and science behind the Pathway/Genome Databases (http://biocyc.org/PGDBConceptsGuide.shtml); and instructional webinar videos that describe the usage of MetaCyc, BioCyc and Pathway Tools (http://biocyc.org/webinar.shtml). We routinely host workshops and tutorials (on site and at conferences) that provide training and in-depth discussion of our software for both beginning and advanced users. To stay informed about the most recent changes and enhancements to our software, please join the BioCyc mailing list at https://biocyc.org/subscribe.shtml. A list of our publications is available online at https://biocyc.org/publications.shtml. DATABASE AVAILABILITY The MetaCyc database is freely and openly available to all. See https://biocyc.org/download.shtml for download information. New versions of the downloadable data files and the MetaCyc website are released three times per year. Access to the website is free; users are required to register for a free account after viewing more than 30 pages in a given month.
2018-04-03T00:00:38.067Z
2017-10-20T00:00:00.000
{ "year": 2017, "sha1": "7071b85c83035ff86c8ed3c1f3319a304bfb7fb5", "oa_license": "CCBYNC", "oa_url": "https://academic.oup.com/nar/article-pdf/46/D1/D633/23162794/gkx935.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "7071b85c83035ff86c8ed3c1f3319a304bfb7fb5", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Computer Science", "Biology", "Medicine" ] }
221014405
pes2o/s2orc
v3-fos-license
Effect of Graphene Family Materials on Multiple Myeloma and Non-Hodgkin’s Lymphoma Cell Lines The interest around the graphene family of materials is constantly growing due to their potential application in biomedical fields. The effect of graphene and its derivatives on cells varies amongst studies depending on the cell and tissue type. Since the toxicity against non-adherent cell lines has barely been studied, we investigated the effect of graphene and two different graphene oxides against four multiple myeloma cell lines, namely KMS-12-BM, H929, U226, and MM.1S, as well as two non-Hodgkin lymphoma cells lines, namely KARPAS299 and DOHH-2. We performed two types of viability assays, MTT (3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide conversion) and ATP (adenosine triphosphate detection), flow cytometry analysis of apoptosis induction and cell cycle, cell morphology, and direct interaction analysis using two approaches—visualization of living cells by two different systems, and visualization of fixed and dyed cells. Our results revealed that graphene and graphene oxides exhibit low to moderate cytotoxicity against cells, despite visible interaction between the cells and graphene oxide. This creates possibilities for the application of the selected graphene materials for drug delivery systems or theragnostics in hematological malignancies; however, further detailed studies are necessary to explain the nature of interactions between the cells and the materials. Introduction Since the discovery of graphene, the interest around it has been growing, leading to the production and synthesis of similar carbon materials, including those derived from natural graphene, also known as pristine [1]. Among them, the most broadly studied are different forms of oxidized graphene, namely graphene oxides, obtained by varying chemical synthesis methods, as well as their reduced forms, namely reduced graphene oxides. Other examples include graphite, which is the blank material for graphene and graphite nanoparticles, as well as carbon nanotubes, which have 2.2. Physicochemical Analysis of Natural Graphene, Graphene Oxide, and Nano-Sized Graphene Oxide The size and shape of the GN, GO, and nGO (at a concentration 50 mg/L in ultra-pure water) were inspected using a JEM-1220 transmission electron microscope (JEOL, Tokyo, Japan) at 80 KeV, with a Morada 11 megapixels camera (Olympus Soft Imaging Solutions, Münster, Germany). Zeta potential measurements were performed by the microelectrophoretic method with Smoluchowski approximation, and size distribution with hydrodynamic diameter was measured by the dynamic light scattering technique using a Zetasizer Nano-ZS90 analyzer (Malvern, Worcestershire, UK). Each measurement was performed after 120 s of stabilization at 25 • C and in triplicates. Fourier transform infrared spectroscopy (FT-IR) spectra were registered in the middle infrared range of 4000-400 cm −1 with use of Perkin Elmer System 2000 spectrometer (PerkinElmer Inc., Waltham, MA, USA). Initially, the spectrum of background containing bands generated by vibrations and rotations of CO 2 and H 2 O (g) always present in air was registered. Next, a drop of liquid sample (nGO, GO) was placed on round KRS plate (5 cm diameter) and pressed with another the same size plate to form a film. Then transition spectrum (ratio of working sample to background signals) of liquid sample was registered with 20 scans and 4 cm −1 resolution. 
In the case of solid-state samples (GN) powder was mixed with KBr crystals for 5 min in the laboratory mill in the ratio of 1:300 (m/m) to obtain fine powder. Powder obtained was placed in the pellet maker and pressed with a 5-ton hydraulic laboratory press for 1 min to obtain 13 mm diameter, as thin as possible pellet. Pellet was placed into a dedicated holder which was placed in the measuring chamber of spectrometer. Then sample was radiated with infrared radiation from 4000-400 cm −1 spectral range to collect data for spectrum. Microscopic images from scanning electron microscope (SEM) were taken using scanning electron microscopy with cold field emission (FE-SEM) model Hitachi SU8010 (Hitachi Ltd., Tokyo, Japan). Powdered natural graphene (GN) was placed directly onto conductive carbon tape. Excess powder was removed using a photographic pear. nGO and GO in the aqueous suspension were transferred with a pipette to a conductive carbon tape. Microscopic observations were carried out after evaporation of the water. None of the samples were coated with a conductive layer. The images were taken in the SE (secondary electrons) mode at an accelerating voltage of 10 kV. To determine the purity of GN, nGO, and GO, elemental point analysis (EDS) was performed using a NORAN system seven X-ray energy dispersion spectrometer and SSD detector (Thermo Scientific, Waltham, MA, USA). Cell Lines Six cell lines were used in the study: four multiple myeloma (KMS-12-BM, H929, U266, and MM.1S) and two NHL T and B lymphocyte-derived (KARPAS299 and DOHH-2, respectively) cell lines. KMS-12-BM, U266, KARPAS299, and DOHH-2 were obtained from Leibniz Institute DSMZ (German Collection of Microorganisms and Cell Cultures GmbH, Leibniz, Germany), and H929 and MM.1S were obtained from ATCC (American Type Culture Collection, Manassas, VA, USA). Cells were cultured in RPMI 1640 medium (Gibco; Thermo Fisher Scientific, Waltham, MA, USA), supplemented with 20% fetal bovine serum (FBS; Gibco) and 1% antibiotic mix (Gibco) of penicillin (100 U/mL) and streptomycin (100 mg/mL), and they were maintained at 37 • C in a humidified atmosphere containing Materials 2020, 13, 3420 4 of 21 5% CO 2 . All cells are characterized by the type of suspension culture, except for MM.1S, which is a mixed semi-adherent cell line (suspension with lightly attached cells). For all assays, cells were seeded at a density of 5 × 10 5 cells/mL. For the viability tests, they were seeded on a 96-well microplate (Corning) in 90 µL of medium per well. For flow cytometry and morphology analyses, cells were seeded on a 12-well plate in 900 µL of medium per well. Two hours after seeding 10× concentrated solutions of GN, GO and nGO were introduced into the wells (10 µL/well on a 96-well plate or 100 µL/well on a 12-well plate), obtaining final nanomaterial concentrations of 5, 10, 20, 50, and 100 mg/L directly in the culture media with cells. Cells were incubated for 24 h before the tests. Viability Assays For evaluation of cell viability after treatment with GN, GO, and nGO, two kinds of tests were employed: MTT and ATP assays. All the tests were performed in three independent experiments. The MTT colorimetric assay is based on a conversion of yellow, soluble tetrazolium salt to purple formazan crystals by mitochondrial succinate dehydrogenase of active cells. MTT (Sigma Aldrich, St Louis, MO, USA) was dissolved in PBS at a concentration of 5 mg/mL and 15 µL was added per each well. 
After 3 h of incubation at 37 °C, the crystals were dissolved in the solubilization detergent (isopropanol, 0.01 N HCl). Before readings, plates were centrifuged (5 min, 400× g) in order to remove clusters of nanostructures, and the supernatant was transferred to a new plate. Spectrophotometer readings were performed at a wavelength of 570 nm in a microplate reader (Tecan Group Ltd., Männedorf, Switzerland). The ATP luminometric CellTiter-Glo assay (Promega, Madison, WI, USA) provides a determination of the number of viable cells by quantitation of ATP in active cells, which converts a substrate into a luminescent product. For this assay, cells were seeded on opaque-walled white multiwell plates (Promega). After equilibration to room temperature, CellTiter-Glo Buffer was mixed with the substrate and then 100 µL of the solution was added to each well. After 2 min of shaking, plates were incubated for 10 min in the dark to stabilize the signal. Then, the luminescence was measured with the microplate reader (BioTek, Winooski, VT, USA). For both assays, the results are presented as the % of control and were calculated from the formula viability (% of control) = (A/C) × 100, where A is the absorbance or luminescence in the treated group and C is the mean absorbance or luminescence in the control group. Apoptosis Assay Apoptosis induction was evaluated using the Annexin V FITC Apoptosis Detection Kit (Immunostep, Salamanca, Spain) and analyzed by flow cytometry. The test relies on double fluorescent staining, where annexin V conjugated with FITC (AnnV) binds to the phosphatidylserine that is externally exposed during apoptosis, and propidium iodide (PI) binds to the nucleic acids in cells with damaged membranes, allowing for bivariate analysis. The tests were performed in duplicate and repeated in three independent experiments. After incubation with GN, GO, and nGO (20 mg/L), cells were harvested and transferred to cytometric tubes, washed twice with Annexin V Binding Buffer (10 mM HEPES/NaOH, pH 7.4; 140 mM NaCl; 2.5 mM CaCl2), and then resuspended in 100 µL of the same buffer. Five microliters of AnnV and 2.5 µL of PI were added to each tube, and the cells were incubated at room temperature for 15 min in the dark. Then, 400 µL of the buffer was added to each tube, and the cells were analyzed with a BD FACSCalibur™ cytometer (Becton Dickinson, Franklin Lakes, NJ, USA), recording 20,000 events per sample. Dot plots were generated and analyzed using Flowing Software 2.5.1 (University of Turku, Turku, Finland). Cell Cycle Assay The cell cycle was analyzed using PI/RNase Solution (Immunostep) dedicated for flow cytometry. This assay is based on the quantification of the DNA content in cells and the distribution of a cell population among the three phases of the cell cycle (G0/G1, S, and G2/M). PI stains all nucleic acids in fixed cells; thus, RNase is used to eliminate the RNA for the analysis. The tests were performed in duplicate and repeated in two independent experiments. After incubation with GN, GO, and nGO (20 mg/L), cells were harvested, transferred to cytometric tubes and centrifuged to remove the medium. Then, cells were fixed with 70% ethanol at 4 °C for 30 min (200 µL per tube) and washed in PBS. The cell pellet was resuspended in the residual liquid with an additional 0.5 mL of the PI/RNase solution and incubated for 15 min at room temperature in the dark before the analysis on a BD FACSCalibur™ cytometer (Becton Dickinson).
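The "% of control" normalization reconstructed above (viability = (A/C) × 100, with C the mean control reading) is simple enough to show in a few lines. The readings below are hypothetical and serve only to illustrate the arithmetic:

```python
# Minimal sketch of the "% of control" calculation: each treated reading A is
# divided by the mean control reading C and scaled to 100. Readings are invented.
import statistics

def percent_of_control(treated_readings, control_readings):
    control_mean = statistics.mean(control_readings)
    return [100.0 * a / control_mean for a in treated_readings]

control = [0.82, 0.86, 0.79]   # absorbance (or luminescence) of control wells
treated = [0.61, 0.66, 0.58]   # wells treated with a nanomaterial
print([round(v, 1) for v in percent_of_control(treated, control)])
# -> [74.1, 80.2, 70.4]
```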
Two hundred thousand events were counted, and plots were generated and analyzed using Flowing Software 2.5.1. Morphology Evaluation For cell morphology evaluation and to visualize the interaction between cells and carbon nanostructures, two approaches were used: imaging of viable cells with no staining and imaging of fixed, stained cells. For live imaging, two systems were used, namely the EVOS Cell Imaging Station (Life Technologies, Carlsbad, CA, USA) and the Nikon Eclipse TE2000-E microscope in phase contrast (Nikon, Tokyo, Japan). For imaging after staining, cells were transferred to tubes attached to microscopic slides and centrifuged in order to deposit the cells on the slides using a Cytospin 4 cytocentrifuge (Thermo Scientific, Waltham, MA, USA). Cells were covered with May-Grünwald stain, and after 3 min the stain was diluted with an equal amount of PBS. After 3 min, the stain was replaced with Giemsa stain (diluted 1:20 in distilled water). After 20 min, the stain was removed and the slides were washed thoroughly with distilled water. Images were recorded using an Olympus BX51 microscope equipped with a DP70 camera (Olympus, Tokyo, Japan). Statistical Analysis Cell viability results were analyzed by mono-factorial analysis of variance with the post-hoc Dunnett test. Results from apoptosis assays were analyzed by the unpaired t-test. p-values <0.05 were considered significant. The analyses were performed using Statgraphics Centurion version XVI software (Warrenton, VA, USA). Physicochemical Analysis of GN, GO, and nGO Transmission electron microscopy (TEM) analysis was performed to evaluate the morphology of the nanostructures. The analysis confirmed the data provided by the manufacturers, revealing the existence of flakes between 1 and 5 µm in GN; however, smaller, irregular flakes were also present (Figure 1A), while the size of the particles for nGO was below 25 nm, with a rather regular shape (Figure 1B). For GO, wide platelets were observed, forming a homogenous layer with visible folds (Figure 1C). Different scales were applied to the pictures in order to present the morphology in the most optimal way in terms of size, general shape, folding, and edge exposition. The zeta potential for both graphene oxides was negative: −13.7 mV for nGO, indicating moderate to low stability of the hydrocolloid, and −64.3 mV for GO (Table 1), indicating high stability of the hydrocolloid, which was in accordance with the observation of the samples during storage in the laboratory. For GN, the zeta potential was 11.0 mV, confirming moderate to low stability. The average hydrodynamic diameter was high (>1 µm for nGO and GO; >5 µm for GN), confirming the presence of larger flakes or their aggregates, especially for nGO, because it was clear from the TEM analysis that the flakes were below 25 nm. However, it should be noted that measurements of the average hydrodynamic diameter are most suitable for globular nanostructures. Therefore, they are not accurate for graphenoid flakes, giving high deviations but a general insight into their nature in a hydrophilic medium (Table 1, Figure 2).
In the FT-IR spectrum of GN (Figure 3A) there were four characteristic bands, two located in the spectral region around 1600 cm−1 and two in the region around 1360 cm−1. The bands at 1628 and 1592 cm−1 are assigned to stretching of C=C bonds (double carbon-carbon bonds with sp2-hybridized carbon atoms). The energy of such bonds is around 600 kJ × mol−1. The two bands in the lower spectral region, i.e., 1375 and 1352 cm−1, are assigned to stretching of C-C bonds (single carbon-carbon bonds with sp3-hybridized carbon atoms). The energy of this type of bond is around 350 kJ × mol−1. Bands around 770-400 cm−1 are not assigned to any specific vibrations. The wide band of low intensity located around 3500 cm−1 is most probably generated by O-H stretches. The FT-IR spectra for GO and nGO were highly similar (Figure 3B,C). Two characteristic bands were located at 2131 and 1651 cm−1, respectively. The band at the higher position is assigned to C≡C stretches (triple bonds between carbon atoms that are both sp-hybridized). This band is very characteristic and is located around 2200-2100 cm−1. The energy of a triple bond is around 850 kJ × mol−1. The band located at 1651 cm−1, clearly presented in Figure 3, is generated by stretches of C=C bonds. The wide band at around 3500 cm−1 is most probably generated by stretches of O-H bonds. A shoulder around 3000 cm−1 is again most probably due to C-H stretches. These stretches are characteristic of C-H when the carbon is sp2-hybridized (connected to another carbon by a double bond). If the C-H bond involves an sp-hybridized carbon (involved in a triple bond), the bands are located further to the left, around 3300 cm−1, which is not clearly seen in the registered spectrum because it is covered by the intense and wide band generated by O-H stretches.
Images taken under SEM additionally present the general morphology of the studied materials (Figure 4). The images show the characteristic sharp edges of GN flakes, the formation of large GO surfaces with thin layers, and the formation of clusters formed by nanoflakes of nGO after water evaporation. Elemental analysis showed the dominant presence of carbon and oxygen in GN, nGO, and GO. In GN and GO sulphur was present; additionally, in GO sodium and chlorine were detected (Figure 5). Hydrogen was not possible to detect using this method. Viability Test The performed viability tests revealed variable effects of graphene family materials on multiple myeloma and lymphoma cell lines. Moreover, the results differed between the MTT and ATP tests. Interestingly, the MTT tests showed an almost noncytotoxic effect (Figure 6) and even an increase in the MM.1S cell line. On the contrary, the ATP tests revealed a significant decrease in the viability of the cells after incubation with the nanostructures (Figure 7). In general, regardless of the cell line, GO was the most potent in the viability decrease among the used graphene family materials. In Figures 6 and 7, results are presented as the % of control (mean with standard deviation); a statistically significant difference between a group and the control is marked as * p < 0.05, ** p < 0.01, *** p < 0.001 (blue bars, GN; orange bars, nGO; grey bars, GO). Apoptosis Assay No significant effect of GN, GO, or nGO on the induction of apoptosis in any of the cell lines was observed. The number of apoptotic cells in the treated and control groups was comparable in all cell lines, varying between 5% and 12%, except for GO in the H929 cell line, which reached 20% (Figure 8). Interestingly, during the analysis of apoptosis induction, shifts in the granularity of the cells were noticed on the side scatter/forward scatter (SSC/FSC) graphs for the groups treated with nGO. Selected examples are presented in Figure 9. Cell Cycle Analysis The cell cycle analysis did not reveal any effects on the cell cycle after 24 h of incubation with graphenes.
The number of cells in the GN-, GO-, and nGO-treated groups was comparable to the control groups within each cell line (Figure 10). A slight increase in the cell number in the G2/M phase was noticeable in the GO groups in the three myeloma cell lines (KMS-12-BM, H929, and MM.1S). Morphology Evaluation The morphology of the cells after incubation with GN, GO, and nGO was evaluated on living cultures without fixation using two different visualization systems, and after fixation and staining by the May-Grünwald-Giemsa (MGG) method. In all cell lines, cells were attached to the GO flakes, which was especially visible in the living cultures while the cells were suspended (Figures 11-16). After analyzing the live images in phase contrast, it also seems that the cells preferentially attached to the thinnest flakes rather than to the thickest, clearly visible ones (Figures 11 and 12). The effect was not visible for larger GN flakes; only relatively smaller flakes were attaching, which was visible after MGG staining (Figure 13). The effect was also less visible in the nGO group than in the GO group. Moreover, characteristic shades were present in all GO-treated groups after MGG staining, confirming attachment of the cells to the GO flakes (Figures 11 and 14). Very thick GO flakes were rarely visible in the cultures (Figure 15). In the DOHH-2 cell line, where cells grew in clumps, there were visible aggregates of all the graphene materials (Figure 16). Overall, attachment of the natural graphene was not as specific as that of the oxidized forms of graphene. Selected characteristic features are presented in the additional panels. Discussion In the presented studies, we examined the effect of three nanomaterials belonging to the graphene family, namely GN, GO, and nGO, on six different cell lines derived from two divergent hematological malignancies. KMS-12-BM, H929, U266, and MM.1S cell lines are derived from MM patients with different genetic backgrounds, while KARPAS299 and DOHH-2 are derived from NHL lymphoma patients. The cell lines arise from either clonal plasma cells in the case of myelomas or from T/B lymphocytes in the case of lymphomas, resulting in their non-adherent characteristics during in vitro culturing. While the effect of carbon nanomaterials on cells cultured in monolayers or 3D tumor models has been broadly studied, the effects on non-adherent cells, derived from hematological cancers, are barely known.
Based on the performance of basic metabolic tests, flow cytometric analysis of apoptosis induction and cell cycle, and broad morphological examination, we have demonstrated that GN, GO, and nGO have low to moderate cytotoxic effects, despite visible direct interactions between the nanostructures and the cells. The most interesting results were obtained for the oxidized forms of graphene, GO and nGO. To evaluate the cytotoxic effect of GN, GO, and nGO against the cells, we performed two types of viability tests, obtaining different results. The first was the colorimetric MTT test, where yellow, soluble tetrazolium salt is converted into purple formazan crystals by mitochondrial dehydrogenases, predominantly succinate dehydrogenase, and the formation of crystals reflects the number of viable cells. The second was the luminometric ATP assay, where luciferin is oxidized by luciferase to oxyluciferin in the presence of ATP and light is generated. The luminescent signal is directly proportional to the amount of ATP released from viable cells during the lysis step. According to the results obtained from the MTT test, GN, GO, and nGO showed mild cytotoxic effects in the highest concentrations (50 and 100 mg/L) in the lymphoma cell lines and no toxic effect in myeloma cells, except for GO in MM.1S ( Figure 6). MM.1S was the only cell line representing semi-adherent growth, meaning that the cells grew in a suspension, but part of them was lightly attached to the culture dish. The obtained results might be correlated with doubling time for particular cell lines since for lymphoma cell lines the time is 30 to 40 h (for KARPAS299 [33] and DOHH-2 [34], respectively), while for the myeloma cells it is from 55 to 80 h (55 h for U266 [35], 60 h for KSM-12-BM [36], 70-80 h for H929 [37], and 72 h for MM.1S [38]). Therefore, it could indicate a slight antiproliferative effect of graphene family materials. On the other hand, the results obtained from the ATP assay suggested a high toxic effect in all groups, starting from the lowest used concentration (Figure 7). Interestingly, the highest cytotoxic effect was observed for GO, while oxidized forms of graphene are generally more biocompatible, and GO had a low toxic effect against other cell lines, such as liver-derived cells, in our previous studies [23,39]. Furthermore, natural graphene or reduced forms of graphene oxides exhibited not only a cytotoxic effect against other cancer cells [17,40], but also more hemolytic character when compared to GO [41]. However, it was also found that GO and other carbon nanostructures, such as carbon nanotubes, might non-specifically inhibit enzymatic reactions by binding to the structure of a protein [24,39,42]. Considering the nature of the ATP assay, where the reaction should occur very rapidly [43], it may be possible that the obtained results are due to the mentioned interactions between large graphene oxide platelets and luciferase, which inhibited the oxidation of luciferin, or between the luciferin and GO. Another possibility is signal quenching by the presence of GO, which was highly hydrophilic and stable in a water medium (zeta potential close to −60 mV; Table 1); thus, it was not possible for it to be completely removed by simple centrifugation techniques. It was previously demonstrated by other authors that graphene and graphite oxides can quench fluorescence [44,45]. 
The results were not conclusive, and we further performed an analysis of apoptosis induction ( Figure 8) and cell cycle evaluation ( Figure 10) after treatment with a moderate dose of GN, GO, and nGO. The results confirmed no specific toxicity, which supports the results obtained from the MTT assays rather than from the ATP assays, indicating non-specific physicochemical interactions between graphene materials and ATP test components. Moreover, this confirms that carbon nanostructures have unique physicochemical properties, which results in difficulties and sometimes false results, while applying widely used standard cytotoxicity assays. For example, Jiao et al. indicate that the optical properties of graphene may be interference for many in vitro assays, including absorption and reflection effect, because graphene has prominent light absorption property [46]. Thereby, the results from the MTT assay could be overestimated, while for the ATP assay underestimated. To minimize the possibility of the assay's compound interactions with the studied graphene family materials, we performed cell-free control experiments, obtaining similar results similar, verifying that MTT and WST-8 assays were reliable and free of interference [47]. Nevertheless, the possibility of the mentioned overestimation for MTT and underestimation for ATP cannot be completely excluded. Therefore, the apoptosis assay, cell cycle evaluation, and morphology evaluation helped to interpret the possible results. Interestingly, even though we did not observe significant apoptosis induction, we noticed great change in cellular granularity in nGO groups while analyzing diagrams for cytometric assays (Figure 9), which suggests nGO adhesion to the cell membrane or internalization of nGO within the cells. Low toxicity was also confirmed visually by morphology evaluation, which was presented in this article through pictures taken during live imaging and after staining by the MGG method. We did not observe significant differences between control cells and cells incubated with graphene family materials, when considering the cell density and general morphology (Figures 11-16). However, we observed differences between cell lines and treatment groups for cell distribution within the culture, which might explain differences in toxicity in non-adherent and adherent cell cultures. In the GN group, we observed that most of the thickest flakes settled on the bottom and only for the DOHH-2 cell line, which grows in small clumps. We also observed increased amounts of GN platelets in cells ( Figure 16). In GO groups, cells were attached to the flakes while growing in the suspension. However, it was difficult to demonstrate in living cells that the attachment is still visible, especially on phase-contrast images (Figures 11 and 12). Characteristic patterning around cell clumps was also observed after MGG staining. Interestingly, in DOHH-2 cells we observed a slight dispersion of the cells on GO flakes in comparison to the cell clumps observed in the control (Figure 16). In the nGO group, we observed both nanoplatelet settling on the bottom and attaching to cells. The observation corresponds to the general physicochemical properties of the examined materials. GN, being hydrophilic with low stability (Table 1), settled mostly on the bottom, creating a concentration gradient. Thus, it is probably more toxic to the cells that are adhered to the culture dish than cells in a suspension. 
GO is highly hydrophilic and can create a very stable dispersion, with a high surface area available for cells to interact, while nGO creates a less stable dispersion (Table 1) and the platelets are of far smaller size ( Figure 1). Interestingly, studies conducted by Russier et al. using graphene suggested that the toxic effect of graphene might be related to the immune cell type since the authors demonstrated that graphene caused necrosis only in myelomonocytic cells (arisen from monocytes) but not in other immune cell populations [18]. Since multiple myelomas and lymphomas arise from plasma cells and B/T/lymphocytes or natural killer (NK) cells, it is possible that they are not susceptible to graphene. Our results are also in accordance with previous findings by Wu et al., who demonstrated that GO alone had no toxic effect on multiple myeloma cells, but it significantly increased the toxic effect of doxorubicin [48]. Pelin et al. suggested that graphene and GOs are cytotoxic only in high concentrations (above 20 mg/mL) and after long exposure times, what is similar to our findings [49]. Low cytotoxicity and the interactive surface of graphene oxides create promising possibilities for use in drug delivery platforms. In some of the previous experiments, graphene oxide used as a scaffold stimulated the growth of some cells and inhibited other cells (cell cultures of different origin from chicken embryos) [50,51]. In other studies, it was demonstrated that graphene was up taken by cells [52,53]. Still, there is a remaining concern of systemic toxicity and biodistribution of graphene family materials. Distribution of nanostructures to bone marrow is barely studied so far; however, it has been proven that some nanocarriers can be accumulated in bones [54][55][56]. Recent studies by Tang et al. showed that TiO 2 nanoparticles can be used as an effective, targeted platform for multiple myeloma imaging and treatment [57]. However, it has also been demonstrated that the biokinetics of nanoparticles significantly differ between widely studied rodent models and primates [58], considering bone marrow distributions. Therefore, further advanced studies are needed. Conclusions We studied the effect of graphene, graphene oxide, and nanographene oxide on multiple myeloma and lymphoma cell lines, revealing low to moderate cytotoxicity against the selected cell lines. Low toxicity, an active surface, and the possibility for cell binding make graphene oxide a promising component for drug delivery systems. We also revealed that the choice of metabolic assay has an important impact on the test outcome when investigating the cytotoxicity of graphene and graphene oxides. Further advanced studies are needed to explain the nature of interactions between graphene family materials and cells, consequently, to establish the potential applications for hematological malignancies.
Parents can reliably and accurately detect trunk asymmetry using an inclinometer smartphone app
Purpose An inclinometer smartphone application has been developed to enable the measurement of the angle of trunk inclination (ATI) to detect trunk surface asymmetry. The objective was to determine the reliability and validity of the smartphone app in the hands of non-professionals. Methods Three non-professional observers and one expert surgeon measured maximum ATI twice in a study involving 69 patients seen in the spine clinics to rule out scoliosis or for regular follow-up (10-18 y.o., Cobb [0°-58°]). Observers were parents familiar with neither scoliosis screening nor the use of an inclinometer. They received training from a 4-minute video. Intra- and inter-observer reliability was determined using the generalizability theory, and validity was assessed from intraclass correlation coefficients (ICC), agreement with the expert on ATI measurements using Bland-Altman analysis, and correct identification of the threshold for consultation (set to ≥6° ATI). Results Intra-observer and inter-observer reliability coefficients were excellent (ϕ = 0.92). The standard error of measurement was 1.5° (intra-observer, 2 measurements), meaning that a parent may detect a change of 4° between examinations 95% of the time. Comparison of measurements between non-professionals and the expert resulted in ICC varying from 0.82 [0.71-0.88] to 0.84 [0.74-0.90], and agreement on the decision to consult occurred in 83 to 90% of cases. Conclusion The use of a smartphone app resulted in excellent reliability, sufficiently low standard error of measurement (SEM) and good validity in the hands of non-professionals. The device and the instructional video are adequate means to allow detection and regular examination of trunk asymmetries by non-professionals.
Introduction
Adolescent idiopathic scoliosis (AIS) is a 3D deformity of the spine that affects 2 to 4% of the pediatric population [1]. The 3D rotational deformity of the trunk creates a visible posterior protuberance of the ribs and/or of the flank. Identifying and measuring this trunk asymmetry represents a potential means of early detection of scoliosis in youth. Nevertheless, scoliosis screening, especially systematic school screening of asymptomatic children as a preventive program, has been the subject of discussion. The trunk asymmetry can become visually apparent while performing the Adams Forward Bending Test (AFBT), where the child is asked to bend forward at 90°, with arms hanging down and head relaxed [2]. The AFBT is a simple and non-invasive screening exam that was used in the initial scoliosis screening programs. It was considered to offer a high rate of detection since the examiners were well trained to identify even mild trunk asymmetry [3]. As a consequence, many children were sent for orthopaedic evaluation, but many (even up to 80%) did not present with a clinically significant curve (Cobb angle < 11°) and/or never needed treatment [4][5][6][7]. In addition, even though spinal braces were largely included in orthopaedic practice and different reports had demonstrated potential curve stabilization, the available evidence of an effective treatment for the detected cases was, at that time, considered insufficient [8].
For these reasons, school screening programs using the AFBT as a detection method were not considered to be a cost-effective preventive measure by the Canadian Task Force on the Periodic Health Examination (CTFPHE). This led to a recommendation against scoliosis screening in 1979 [8], and scoliosis screening in schools was officially discontinued in Canada, including in Quebec in the early 1980's. The same decision was also taken in other countries based on task forces recommendations. These policy decisions presumably had an impact on the management of patients with progressive scoliosis. Retrospectively studying the referral patterns of suspected cases of AIS in orthopaedic clinics, our team [5,9,10] demonstrated that after discontinuation of school screening programs, 20% of patients were referred "late" to a scoliosis clinic to benefit from appropriate and timely conservative management with a spinal brace. Thomas et al. reported that in a US county, the number of referrals to orthopaedic clinics for scoliosis in areas without school screening decreased, as well as the number of spinal brace prescriptions [11]. The Scoliosis Research Society Task Force on Screening (SRS Task Force) conducted a review and re-examined the evidence [12], based on the WHO classical criteria for screening [13]. This report identified the scoliometer (Orthopedic Systems Inc., Hayward, CA) in combination with the AFBT [12,14] as the reliable and valid tool recommended to measure the external trunk asymmetry. The SRS Task Force also concluded on the effectiveness of the brace treatment, notably from strong evidence provided by a multicenter international trial, the BrA-IST study [15]. Similar conclusions regarding use of the scoliometer (adequate evidence that screening tests can accurately detect AIS when used in combination) and the effectiveness of the brace treatment (adequate evidence that bracing may decrease curve progression in adolescents with mild or moderate curve severity) were drawn by the US Preventive Services Task Force in 2018 [16]. The scoliometer is used to quantify the trunk asymmetry, the angle of trunk inclination (ATI), which corresponds to the angle between the horizontal and the plane across the back at the greatest elevation of a rib prominence or lumbar prominence (left and right sides). An ATI between 5 and 7 degrees has been determined as a reference threshold for further medical investigation [12,17]. Good intra and inter-observer reliability of the scoliometer have been demonstrated in several studies [18][19][20]. The scoliometer was also shown to improve the specificity of the detection method in comparison to the AFBT alone (for example, 83% [73%-93%] using a scoliometer, in comparison to 60% [47%-74%] with the AFBT alone, for scoliosis curves that were above 20°) [18]. However, the scoliometer is almost exclusively used in orthopaedic clinics and is not accessible to the general public. Thus, alternative solutions are being proposed to facilitate early detection for AIS. The inclinometer applications developed for use on a smartphone appear as a solution to this limited accessibility. Taking advantage of embedded inclinometers enabling angle measurement with a smartphone, these apps reproduce the functions of a traditional scoliometer [2]. They can be used by firmly holding the smartphone between the thumbs and index fingers ( Fig. 
1), or in combination with a scolioscreen, a device made of medical grade thermoplastic rubber sized to hold a smartphone, and designed to mimic the undersurface of a scoliometer [2]. Previous studies have shown similar reliability and validity between the scoliometer and an inclinometer smartphone app [2,[21][22][23] but the ATI measurements were mostly taken by health professionals. A systematic review [24] on evaluation methods to screen for back asymmetry supported the reliability of an inclinometer smartphone app in the hands of experimented observers. There is a need to further investigate its value in non-professional users. Given the general accessibility of smartphones, an inclinometer smartphone app becomes an accessible means for primary care providers or non-professionals such as educators and parents to contribute to AIS screening. Such a shared responsibility for scoliosis detection may reduce the number of late referrals to orthopaedic clinics, favor timely initiation of conservative management and, in turn, reduce the risk of surgery [5]. This could also reduce the number of unnecessary referrals to orthopaedic clinics by improving the specificity of the AFBT with an objective measure. Our hypothesis is that non-professionals may reliably and validly measure the trunk asymmetry using an inclinometer smartphone app. Thus, the purpose of this study is to evaluate the intra-and inter-observer reliability and validity of the inclinometer smartphone app in the hands of non-professionals. Methods A sample of 69 young volunteer participants, aged between 10 and 18 y.o., were recruited at the CHU Sainte-Justine orthopaedic clinic between May 2017 and August 2018. These patients were either referred for suspected AIS or followed at the clinic for a confirmed AIS diagnosis. Three non-professional (but non-familial) observers (adult parents, employees from the CHU Sainte-Justine Research Center without clinical training or education) and one expert orthopaedic surgeon (35 years of experience) were mandated to take ATI measurements from all the participating patients using the inclinometer smartphone app. The three observers were shown a 4-minute training video on the use of the inclinometer smartphone app on the first day of data collection ( Fig. 1). This video created by our research team in collaboration with the educational services of CHU Sainte-Justine describes and demonstrates the procedures to use the inclinometer smartphone app: standard instructions to be delivered to guide patient's execution of the forward bending test, position of the observer, demonstration on how to slide the smartphone on the child's back with both thumbs underneath, how to look for the maximum ATI value along the back. The three non-professional observers and the expert measured the ATI from all patients at two occasions. A delay of 15 to 45 minutes separated the two measurement sessions. Patients were encouraged to move and take a few steps between each measurement, as each observer was, in turn, entering the examination room. Assessments were blinded to other observers and the order of measurement was randomly assigned for each patient. The observers and the expert were instructed to record the maximum ATI value measured along the back at each trial. In addition to performing the series of measurements by holding the smartphone between the thumbs and index fingers, one observer and the expert also tested the use of the scolioscreen. 
In total, for each patient in the sample, twelve measurements of the maximum ATI were taken. Statistical analyses were performed using the theory of generalizability (G theory) [25,26] and the Bland-Altman method [27,28] to assess intra- and inter-observer reliability, as well as the validity of the smartphone app in the hands of non-professionals. The G theory was used to identify sources of variance in the data and to estimate the proportion of variance explained by the patient (P), observer (O) and measurement session (S) facets, as well as by the interactions between these facets (PO, PS, OS) and the residual error (POS,e) [26]. We also studied the optimization of the measurement modalities by conducting a "D-study", keeping either the observer facet fixed (intra-observer) or the session facet fixed (inter-observer). The dependability coefficient (φ) was calculated in these two contexts along with the standard error of measurement (SEM) and the minimally detectable change (MDC = 1.96 × √2 × SEM, for a 95% confidence interval (CI)) [25]. For intra-observer reliability, we also plotted the differences between the values of the two measurements as a function of their means, for each observer. We estimated the systematic bias (the average of the differences across patients) and the proportional bias (the slope of the regression line of a Bland-Altman plot) [29]. For inter-observer reliability, we calculated the intraclass correlation coefficient (ICC), with 95%CI, of the form "two-way random effects, single rater, absolute agreement" across the 3 non-professional observers. Finally, the validity assessment relied on the Bland-Altman method comparing the measurements from the first observation of the expert with those of each of the non-professional observers, as well as agreement in the correct identification of patients with ATI ≥6°. All participants and/or their parents, as well as the non-professionals, signed informed consent/assent forms, and the project was approved by the ethics committee of CHU Sainte-Justine. Statistical analyses were carried out with Genova software [30] and IBM SPSS Statistics for Windows (Version 25.0, Armonk, NY: IBM Corp.).
Results
The study sample was composed of 17 boys and 52 girls. The study of the components of variance from the G theory [26] identified the inter-patient variance as the main source of variance (82% of total variance). The variance component associated with the observers was low, 1%, while 3% of the variance came from the interaction between the patients and the observers (PO). The variance attributed to the sessions was 0%, as was that of the interaction between the observers and the sessions (OS). The variance associated with the interaction between the patients and the sessions (PS) was also low, at 2%. However, the interaction between the patients, the observers and the measurement sessions explained 11% of the variance.
Intra-observer reliability
In the D-study (observer-fixed design), the intra-observer reliability was excellent (φ = 0.92) and the SEM was 2.1° for one ATI measurement, but decreased to 1.5° if two measurements were taken by the same observer. Thus, when taking two measurements of the ATI, a non-professional may detect a change of 4° between examinations 95% of the time.
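To make the link between the SEM and the detectable change explicit, the short calculation below reproduces the approximately 4° figure from the reported intra-observer SEM; it is an illustrative sketch only, and the variable names are ours rather than part of the study protocol.

import math

sem_two_measurements = 1.5  # degrees; reported intra-observer SEM when two ATI measurements are averaged
mdc_95 = 1.96 * math.sqrt(2) * sem_two_measurements  # minimally detectable change at the 95% confidence level
print(round(mdc_95, 1))  # about 4.2 degrees, consistent with the ~4 degree change quoted in the text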
The Bland-Altman method revealed no statistically significant bias for intra-observer measurements, whether the smartphone was firmly held between the thumbs and index fingers or used in combination with the scolioscreen, both for the non-professional observers and for the expert, with biases between 0.1° and 0.4°. The smallest bias (0.1°) was obtained by the expert using the thumbs and index fingers. There was also no proportional bias for any of the observers or for the expert in the two measurement conditions, with regression coefficients between 0.02 and 0.07. A typical plot is presented in Fig. 2.
Inter-observer reliability
In the D-study (session-fixed design), the inter-observer reliability was also excellent (φ = 0.92), with a SEM of 2.1° for one measurement. For two measurements taken by, for example, the two parents of a child, the SEM would reduce to 1.5°. They would be able to detect a 4° difference in measurements 95% of the time. The overall intraclass correlation coefficient for the 3 observers was 0.88 (95%CI [0.82-0.92]).
Validity
A statistically significant systematic bias (slight overestimation of 0.8° and 1.1°) was identified for 2 of the 3 observers when compared with the expert while the smartphone was held firmly between the thumbs and index fingers. A proportional bias was also identified when the scolioscreen was used. Thus, agreement between non-professionals and the expert on the identification of the threshold to seek medical advice (ATI ≥ 6°) was reached in 83% to 90% of cases (Fig. 3).
Discussion
In the literature, the reliability and validity of an inclinometer smartphone app used by non-professionals remained an unanswered question [24]. In this study, we investigated the intra- and inter-observer reliability by two methods: the G theory and the Bland-Altman analysis. The results obtained in this study demonstrated excellent intra-observer reliability. Systematic biases between two measurements taken by the same observer were not statistically significant in any case, and these mean differences were all below the clinically acceptable threshold of a 0.5° average difference for ATI that was consensually established a priori by our team [2]. The results were even slightly improved when the first observer used the smartphone in combination with the scolioscreen. In this study, no difference was observed in the expert reliability assessments with or without the scolioscreen. There was also no proportional bias, meaning that the error was very stable between measurement sessions, regardless of the magnitude of the measured ATI, and especially with the use of the scolioscreen. The results from the G theory also confirm that the intra-observer error is low, with variance O = 1% and variance S = 0%. Plausible explanations for the PO variance of 3% may come from some variability in the instructions for the AFBT that were given to the patient, as well as from the relative height and shape of the patients and observers. The variance component related to the interaction POS, at 11%, may come from the non-standardized elements of the protocol, such as patient movement or fatigue, and distraction. Inter-observer reliability was also excellent according to the results of the D-study. Mean differences between observer measurements were also lower than the 2° average difference for ATI that was consensually established a priori by our team as suitable for detection in primary care settings or family use. The validity was satisfactory enough to recommend the use of the inclinometer app as a detection tool in the hands of non-professionals.
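For readers who wish to reproduce the bias estimates described above, the sketch below illustrates how the systematic bias (mean of the paired differences) and the proportional bias (slope of the differences regressed on the means) of a Bland-Altman analysis can be computed. The arrays are placeholder values, not data from this study.

import numpy as np

# Placeholder paired ATI readings (degrees): first and second measurement by the same observer
m1 = np.array([3.0, 5.5, 7.0, 4.0, 9.5, 6.0])
m2 = np.array([3.5, 5.0, 7.5, 4.0, 9.0, 6.5])

diff = m1 - m2                      # paired differences
mean = (m1 + m2) / 2.0              # paired means
systematic_bias = diff.mean()       # average difference; close to 0 degrees when there is no bias
loa_half_width = 1.96 * diff.std(ddof=1)      # half-width of the 95% limits of agreement
slope, intercept = np.polyfit(mean, diff, 1)  # proportional bias = slope of differences vs. means

print(f"bias = {systematic_bias:.2f} deg, limits of agreement = +/-{loa_half_width:.2f} deg, proportional bias = {slope:.2f}")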
Systematic biases were below the clinically acceptable threshold of 2° average difference between measurements taken by a non-professional observer and those taken by an expert surgeon, for both measuring conditions (with and without the scolioscreen). There was also a significant proportional bias for one of the comparisons (observer 1 vs expert when using the scolioscreen). However, the regression coefficient was very small (0.15), indicating that the proportional bias for this comparison did not have a serious impact on the results. Agreement between non-professional observers and the expert on the identification of an ATI ≥ 6° is satisfactory. Our study results indicate that non-professionals, such as parents, may use the inclinometer smartphone app to reliably detect a significant rib/flank hump and seek medical opinion and clinical assessment for appropriate use of medical resources. Previous studies of reliability and validity comparing the scoliometer and a smartphone app have been carried out in experienced observers [21][22][23]. For example, the study by Qiao et al. with 64 patients compared the scoliogauge smartphone application and the scoliometer. All measurements were performed by surgeons. The study found an overall intra-observer ICC of 0.954 for the scoliometer and 0.965 with the app. The overall interobserver ICC was 0.943 for the scoliometer and of 0.964 with the app [21]. It is interesting to note that Qioa et al. found a lower ICC = 0.819 for small curves; in our study, the variability was stable among ATI values. In another study from Québec, Canada, Balg et al. showed excellent results with a smartphone application in comparison to the scoliometer: no systematic bias and 95%CI of ±4.4°. In their study, carried out with 34 patients with AIS and whose measurements were taken by healthcare professionals (without an adaptor such as the scolioscreen), the intra and inter observer ICC were respectively 0.961 and 0.901 [31]. The study by Driscoll et al., a rare study involving a non-professional observer, showed similar results. In fact, in 39 patients with AIS, the authors showed good intra observer reliability ICC=0.89 and satisfactory interobserver reliability (lower than in the current study) with ICC=0.75 using the smartphone alone and 0.89 using the scolioscreen [2]. A plausible hypothesis to explain better performance of the non-professionals in the current study than what was expected from a previous study [2] may reside in the use of a standardized training in the form of an educational video as corroborated by the low variance associated with observers (1%). This study also differs from the study of Driscoll et al. in its results demonstrating that reliable and satisfactorily valid measurements can be obtained even without the use of the scolioscreen device (although the results are improved by the use of the scolioscreen). This makes it even easier and more convenient for non-professionals to start using the smartphone app. This study demonstrated that non-professionals would be able to learn how to manipulate an inclinometer smartphone app to properly follow-up on a child's trunk asymmetry. This is different from several previous studies where results were available for trained experts only [21,23,31]. In addition, non-professional observers in the current study measured all participating patients (n=69) as opposed to non-professional observers in the study from Driscoll et al. [2] who have only measured their child. 
One may hypothesize that the observers in this study improved their performance over time. It is important to consider, however, that the three non-professional observers did not receive any feedback on their measurement techniques. They could have changed their methods over time or gained confidence in what they were doing, but they were never told whether their technique was adequate. We compared the results obtained on the first versus the second half of the data collected, and no temporal trend was found in the data. The use of an educational video has the advantage of providing standardized directions and a visual demonstration of the technique. It may be paused or repeated for best comprehension. Interviewed observers confirmed that the self-training was appropriate and considered sufficient to perform the ATI measurement after two viewings of the video, as corroborated by the low O and S variance components and the low PO and PS interactions. This training method facilitates the dissemination and wide use of the tool by primary health care providers, physical educators and parents who would be interested in monitoring the trunk asymmetry of a child. This study has some limitations. One potential bias comes from the experimental set-up, where observers were not blind to their own measurements. Even though several minutes separated the two measurements by a given observer for a given patient, and more than one patient was usually included in the study from the same half day of clinic, often interleaved, there is a possibility that the observer remembered the first measurement and adjusted the second observation to match it. However, the observers were not aware of the objectives and hypotheses of the project. Interviewed observers said that this bias was unlikely, since they needed to remain concentrated on the good quality of the measurement on each occasion, and they were more preoccupied with doing the task properly than with trying to "cheat" or to copy their previous result. Our methodological choice to rely on a full design (all patients are measured by all observers) allowed us to generalize our results to the "universe" of similar non-professionals, according to the G theory. As previously mentioned, this may have caused the observers to improve during the study. However, the available data do not support a learning-curve trend, probably mostly because the observers never received feedback on their execution of the technique, and they were blind to the other observers' (including the expert's) measurements. Another possible bias in this study concerns the period of patient recruitment, which lasted more than a year. However, the samples from two different time periods in 2017-2018 were compared, and no significant differences (data not shown) in the reliability and validity results were identified over the study period. We acknowledge the lack of comparison with the non-reference standard, the scoliometer, in this same study. As mentioned, this comparison was shown to be appropriate in several previous studies. Our research question was not to duplicate this comparison but to investigate the reliability and validity of the tool in the hands of non-professionals. Doing all these measurements and comparisons in the same study would have inappropriately increased the burden for the participants, who already had to execute the AFBT twelve times.
The personal skills of the observers may also have influenced the results, as some people are naturally more skilled in handling an electronic device. However, our results indicated that these differences appear negligible and may not have an impact on the use of the smartphone app. Finally, since the patients in this study's sample were not the children of the observers, the observers may have felt less comfortable taking measurements of the ATI than the children's own parents would have. There is added value in confirming the measurements with a second observer (for example, the two parents) and/or in encouraging the observer to repeat the measurement twice. According to the D-study, the recommendation is to take the average of two measurements recorded by the observers at two sessions separated by a 15-minute pause to obtain the best evaluation, and to set a 6-degree ATI threshold. Our study shows that when taking 2 measurements of the ATI, a non-professional may detect a change of 4° between examinations 95% of the time. This means that a variation observed between two examinations could be associated with an actual progression of the curve. It could be used at home to follow up on potential changes in trunk asymmetry between scoliosis management visits, or for periodic evaluation of mild/non-clinically significant curves that were discharged from the clinic. Finally, the question of the value of school scoliosis screening programs is beyond the scope of this study. As described elsewhere [12,32,33], classical criteria were established to assess the effectiveness of a screening program. They have recently been reviewed and discussed in light of public health concerns such as coordination of the program components, impacts on the healthcare system, as well as societal acceptability. In particular, in a recent systematic review followed by a Delphi expert consensus protocol [34], twelve consolidated principles for screening were elaborated. The present study contributes to principle #4, screening test performance characteristics, looking at the key components specific to the test: accuracy and reliability. A limitation of the current study is that it does not contribute much to the evidence about the benefits-to-harms ratio of screening for trunk asymmetry, notably in terms of health services overuse. Research evidence on this aspect was considered insufficient in the most recent report from the US Preventive Services Task Force [16]. In addition, this observation suggests, as per principle #6, post-screening test options [34], that health care pathways for children with positive screening tests should be carefully examined and properly evaluated before elaborating recommendations for test implementation.
Conclusion
This study demonstrated the intra- and inter-observer reliability and the validity of an inclinometer smartphone app when used by non-professionals. Non-professionals could learn to use the inclinometer smartphone app with a training video in order to take reliable and valid measurements of a child's trunk asymmetry. Thus, with a reference threshold of 6°, a non-professional would be able to make an early detection and favor timely and appropriate medical assessment of back deformities.
Implementation of advanced triage in the Emergency Department of high complexity public hospital: Research protocol
Abstract Aim To evaluate the efficacy of advanced nurse triage based on the quality of care outcomes of patients attending the Emergency Department of a high-complexity hospital. To analyse the concept of advanced triage and the essential elements of the construct. Design Mixed longitudinal study, divided into 4 steps, which will include an initial qualitative step, two observational studies and, finally, a quasi-experimental study. Clinical Trial Registration Number: NCT05230108. Methods Step 1 will consist of a concept analysis. Step 2 will include a mapping of advanced practice protocol terminologies. Step 3 will analyse the opinion of health professionals on advanced triage. In step 4, in the retrospective phase (n = 1095), sociodemographic and clinical variables and quality indicators such as waiting time will be analysed. After that, in the prospective phase (n = 547), advanced triage will be implemented and the two cohorts will be compared. The whole study will be carried out from January 2022 to January 2024. Discussion Patients classified as low complexity at triage are more vulnerable to emergency department overcrowding. The implementation of advanced triage would make it possible to respond to patient needs by offering equitable and quality healthcare, facilitating accessibility, safety and humanization of the emergency department.
INTRODUCTION
Advanced Practice Nursing (APN), which first appeared in the United States around the 1960s, took almost 20 years to spread to Europe and was first introduced in Great Britain. Subsequently, it was used in other countries such as Australia and Ireland. This practice emerged due to changes in the healthcare system and the need to adapt to new models of healthcare, driven by a shortage of medical professionals, an increase in the demand for care in the provision of services and an improvement in the professional development of nurses (Woo et al., 2017). The International Council of Nurses defines the APN as one who has acquired, through further education, the expert knowledge base, complex decision-making skills and clinical competencies for expanded nursing practice, the characteristics of which are modelled on the context in which they are accredited to practice (International Council of Nurses, 2020). The United Kingdom National Health Service (NHS) defines advanced practice as a level of practice characterised by a high degree of autonomy and complexity in decision-making. The degree of advanced practice is obtained with a master's degree or equivalent (Crouch & Brown, 2018).
BACKGROUND
Hospital emergency departments (EDs) are one of the gateways to the healthcare system and play a critical role, yet they are often unable to guarantee an efficient and high-quality response to citizens. Of particular concern is the fact that most ED visits are considered non-urgent or inappropriate and are classified as low complexity in different countries around the world (Berchet, 2015). One of the main challenges in recent times has been to avoid ED overcrowding. Overcrowding is caused by multiple factors, the most relevant of which are: poor outflow from the hospital ED, insufficient beds, which prevent patients requiring hospital admission from being transferred, and a shortage of healthcare staff.
The collapse of the ED causes delays in diagnosis and in the initiation of treatment, which is related to an increase in morbidity and mortality because it causes a delay in the initiation of prescriptions and the administration of antibiotics and analgesia. It should be added that ED overcrowding leads to errors and increases hospital stay, and costs. Overcrowding in the ED is the consequence of poor quality of attention in patient care (Austin et al., 2020;Bittencourt et al., 2020;Di Somma et al., 2015). In the hospital setting, triage was introduced in the 1960s due to an increase in the population attending the ED. A five-level categorisation system was created, as this allowed for very precise patient triage (Zachariasse et al., 2017). Triage provides the patient with a level of prioritisation in clinical care with the aim of identifying the most severe patients, who require the most appropriate and quickest diagnostic or therapeutic interventions and tests to resolve the health problem (Hinson et al., 2018). There are different validated scales that allow patients to be classified on arrival at the ED such as: Australasian Triage Scale (ATS), Canadian Triage & Acuity Scale (CTAS), Emergency Severity Index (ESI), Manchester Triage System (ETS) and Andorran Triage Model / Spanish Triage System (MAT/ SET; Hinson et al., 2018;Sarria-Guerrero et al., 2019). Generally, five-level risk stratification scales have been recommended for use because they are more reliable and valid for assessing the clinical status of patients (Silva et al., 2017). Of all the existing five-level prioritisation scales, the Emergency Nurses Association recommends the use of the ESI. The ESI makes the same classification as the MAT/ SET and classifies into: level I, immediate resuscitation, where there is no waiting time for the patient to be seen; level II, an emergency where the patient can wait up to 15 min to be seen; level III, an urgency, where the patient can wait up to one hour; level IV, a minor urgency, where the patient can wait up to two hours to be seen; level V, a non-urgent case where the patient can wait up to 4 h or more to be seen (Innes et al., 2017;Zachariasse et al., 2017). Triage is one of the area's most susceptible to improvement in the ED. One of these improvements is the implementation of advanced triage (AT). AT is a poorly defined concept in the literature, but in principle, AT is understood as the application of protocols or clinical practice guidelines, previously agreed by the entire multidisciplinary team, where the nurse acts autonomously after an initial triage in which patients have been assigned a level of priority. These protocols can be applied in hospital triage, in emergency fast-track (FT) wards and also in Primary Health Care (PHC; Butti et al., 2017;Cabilan & Boyde, 2017;Doetzel et al., 2016;Innes et al., 2017). This means that two actions can be given from these protocols, providing safe and quality care to patients. The first health action is contemplated in the scope of nursing responsibility and without medical action. The nurse performs a comprehensive and focused assessment to orientate the health problem (nursing diagnosis), and performs interventions (pharmacological and non-pharmacological) appropriate to completely solving the health problem. 
The second action is a comprehensive assessment of the patient that allows diagnostic tests to be requested and the health problem to be oriented (diagnostic suspicion), but its management requires not only nursing but also medical intervention. Referrals to the physician will only be made when, according to the nurse's criteria, a medical evaluation is needed. This mainly occurs when the patient presents diagnostic or therapeutic complexity (Barksdale et al., 2016; Bittencourt et al., 2020; Butti et al., 2017; Innes et al., 2017; Van Donk et al., 2017; Vara Ortiz & Fabrellas-Padrés, 2019). Several studies have shown that AT in the ED decreases drug administration time, increases patient comfort and satisfaction, reduces the delay in diagnosis and treatment delivery in patients who are less critical, and reduces patient and family distress related to waiting time (Austin et al., 2020; Bittencourt et al., 2020; Cabilan & Boyde, 2017). Applying AT has also improved the efficiency of care in the ED by reducing waiting lists and patient length of stay, since complementary tests such as blood tests and X-rays are ordered more quickly (Van Donk et al., 2017; Woo et al., 2017). However, this growing need to implement AT has not always been accompanied by increased training for nurses. For this reason, nurses often have the perception that they lack sufficient knowledge and clinical skills to assign priority levels correctly in triage. This leads them to make errors in their practice, causing collapses in triage units and ED waiting rooms. It is therefore necessary to answer the following questions: 1. What are the attributes that define AT? 2. What is the conceptual coverage of the standardised AT languages? 3. Are the nurse demand management protocols useful for AT in the hospital setting? 4. What is the profile of patients who are candidates for AT? 5. Does the implementation of AT decrease waiting time and increase the satisfaction of the client consulting the hospital emergency department?
Aims
The main objective of this study is to evaluate the effectiveness of advanced triage in improving the quality of care outcomes of patients attending the emergency department of a high-complexity hospital. The secondary objectives are:
• Analyse the concept of the advanced triage nurse and identify the essential elements of the construct (step 1).
• Determine the quality of care based on quality indicators, waiting time, level of satisfaction and pain control in patients attending the Emergency Department care unit of a high-complexity hospital, treated with standard care or with advanced triage (step 4).
Research hypothesis
The study aims to demonstrate that implementing advanced triage increases quality of care, decreases waiting time and increases satisfaction of patients attending the Emergency Department care unit in a high-complexity public hospital when compared to usual care.
Design and methodology
Mixed, longitudinal study, divided into 4 stages. Firstly, designing a strategy for action without a prior conceptual model would lead to a doubtful and possibly inaccurate interpretation of data in the following stages of this protocol. Secondly, it is necessary to know which terminology is the most appropriate to establish an assessment and diagnosis before carrying out an intervention. Thirdly, it will be necessary to assess the action protocols and obtain the nurses' opinion. Finally, advanced triage will be implemented (Figure 1).
Study design
A concept analysis will be carried out based on a systematic analysis method that details and specifies the element to be studied, in this case the AT. This study is based on the method outlined by Wilson (Wilson, 1969) and later developed by Walker and Avant (Walker & Avant, 1989). This technique consists of eleven stages: (1) isolating questions concerning the concept by formulating the questions into three categories: concept, fact and values; (2) finding suitable answers through the scientific literature, the common uses of the concept in encyclopaedias and dictionaries, and the grey literature; (3) developing cases to analyse an exemplary case, to describe the AT; (4) developing cases to analyse an opposite case of the AT; (5) developing cases to analyse a related case, such as describing advanced nursing interventions; (6) developing cases to analyse a borderline case, such as nurse demand management; (7) developing a fictitious case, only if necessary, to clarify a concept; (8) determining the social context that EDs have experienced in recent years and describing the most effective interventions to try to decrease the oversaturation of the service; (9) identifying the underlying emotions described by other authors, describing how nurses perceive the implementation of AT; (10) establishing the practical results by resolving the issues raised in the study; (11) defining the results in language that facilitates the clarification of the controversial concept. The use of this methodology allows the construction of different definitions that cover all variants of the AT concept.
Data analysis
Not applicable. Concept analysis is a formal and rigorous process using a systematic method, in which an abstract concept is explored, made transparent, defined and differentiated from similar concepts to be used in the formulation of theories. It serves to clarify a concept that is often subject to controversy. Advanced triage can be interpreted in an ambiguous way: only as the advancement of diagnostic tests or as the final resolution of the reason for consultation by the nurse.
Study design
Observational, descriptive, cross-sectional, retrospective study of the interobserver concordance of a bidirectional cross-mapping. The sampling technique will be simple random probability sampling. Assuming an inter-observer discordance of 0.05, an intra-observer discordance ratio of 0.05 and a Confidence Level (CL) of 95% (α = 0.05), without any loss (as they are concepts), the sample correction formula Na = n/(1 − R) is applied, which results in a total of 416 NDM protocol concepts to be evaluated.
Data collection
The data collector will design an ad hoc form. This will be a structured tool that allows the collection and connection of the three terminologies for each concept (assessment, diagnosis and intervention). The identification of these equivalences will be carried out by three nurses, each working independently of the researchers. Several interobserver consensus sessions will be held according to the results of the concordance. Discrepancies will be resolved, if appropriate, through the agreement processes. The equivalences between the concepts included in the NDM protocols and the different languages (ATIC, NANDA, NIC and ICD-10) will be recorded with trichotomous variables (Yes/No/Partially) and nominal variables; to facilitate the understanding of the mapping, the codes representing the type of equivalence are included (Table 1).
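As a purely illustrative sketch, one common way to quantify interobserver concordance for trichotomous judgements (Yes/No/Partially) is percentage agreement together with Cohen's kappa; the data and the function below are illustrative assumptions, not part of the protocol.

from collections import Counter

def cohen_kappa(rater_a, rater_b):
    # Observed proportion of agreement between the two raters
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Agreement expected by chance, from the marginal label frequencies
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[label] * freq_b[label] for label in set(rater_a) | set(rater_b)) / n ** 2
    return (observed - expected) / (1 - expected)

# Illustrative judgements for six hypothetical NDM concepts
a = ["Yes", "Yes", "Partially", "No", "Yes", "Partially"]
b = ["Yes", "Partially", "Partially", "No", "Yes", "Yes"]
print(cohen_kappa(a, b))  # about 0.45 for these made-up ratings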
Data analysis
The collected data will be processed, checked for processing errors and analysed with Excel. Interobserver reliability will be calculated.
Study design
Observational, descriptive, cross-sectional and prospective study of the different protocols already existing in the PHC of the Catalan Institute of Health, assessing whether they are suitable for the hospital setting.
Study setting
The study will be carried out in the ED of a high-complexity public hospital in the southern metropolitan area of XX. This stage will consist of two phases: Phase 1 (P1): a committee of experts will be formed, which will evaluate the existing protocols. Phase 2 (P2): surveys will be administered to several health professionals working in the ED of a high-complexity hospital, in order to ascertain their opinion of the AT.
Participants
Phase 1 will involve 12 healthcare professionals (8 nurses and 4 doctors) working in the ED of a high-complexity hospital. In Phase 2, the study population will be all healthcare professionals who work in the ED in an assisting or management capacity in a high-complexity hospital. Selection criteria: Inclusion criteria: nurses and physicians, assistants and managers of the emergency departments of public hospitals in Catalonia. Exclusion criteria: auxiliary nursing care technicians, orderlies, nurses and administrative staff. Sample size (phase 2): starting from maximum indeterminacy with an expected proportion (p) = 0.5, a 95% confidence level (α = 0.05) and a precision (y) = 0.05, the number of professionals to be included will be 385. Assuming 15% of possible losses, the corrected sample formula will be applied, which gives a total sample of 490 individuals. The sampling technique in this phase will be non-probabilistic consecutive sampling.
Data collection
The variables for P1 and P2 of stage 3 will be the usefulness and applicability of the NDM protocols for hospital AT, and two lines will be considered: the first, the evaluation of the perception of usefulness, and the second, of applicability, by the healthcare professionals in the two phases. Data collection in phase 1 will be carried out on the basis of grids, where the 33 protocols of the NDM will be analysed. For each protocol, 12 questions will have to be analysed through an ad hoc questionnaire with a qualitative ordinal scale. Each question will be answered using a 10-point Likert-type scale (1 = strongly disagree to 10 = strongly agree). In phase 2, an ad hoc questionnaire consisting of four blocks will be administered, covering: (1) socio-demographic data; (2) knowledge of the AT; (3) the opinions held by the different health professionals; (4) nursing competences. The questionnaire will contain 15 items. Each item will also be answered using a 10-point Likert-type scale (1 = strongly disagree to 10 = strongly agree).
Data analysis
Qualitative variables will be described by proportions, calculating 95% confidence intervals. Quantitative variables will be described with the mean and median as measures of centrality, and the standard deviation and interquartile range as measures of dispersion.
The normality of the distribution of the quantitative variables will be checked with the Kolmogorov-Smirnov test, and the parametric and non-parametric tests indicated in each case in the bivariate analysis will be applied; typically, the most commonly used tests will be the Chi-square, the Mann-Whitney U, the Student's t and Pearson's correlation-regression tests. Finally, multivariate analysis will include logistic regression techniques. p-values of less than 0.05 will be considered statistically significant.
STEP 4
Study design
Quasi-experimental study consisting of 2 phases: the first, a retrospective control phase, and the second, a prospective intervention phase in which advanced triage will be implemented. In phase 1 (control), quality of care will be assessed based on quality indicators in the emergency department, waiting time, satisfaction and the level of pain. In addition, different epidemiological, socio-cultural and clinical variables will be measured. In phase 2 (intervention), advanced triage based on advanced practice nursing will be implemented, and the indicators of quality of care and pain will be assessed.
• Phase 1 (P1): retrospective control (no intervention), including all patients who attended the ED from January 2018 to December 2022.
• Phase 2 (P2): prospective intervention (experimental), in which AT based on the advanced nurse practitioner will be implemented in the ED, including patients from January 2023 to December 2024.
Study setting
The scope of the study will be the ED of a highly complex hospital in a centre in Catalonia (northeastern Spain), which serves as a com-
Participants
Phase 1, retrospective, will include all patients who attended the ED from January 2018 to December 2022. Phase 2, prospective, will include patients from January 2023 to December 2024. In both phases, patients will have to meet the following selection criteria. Selection criteria: Inclusion criteria: over 18 years of age, admitted to EDs classified with ESI severity levels III, IV and V. Exclusion criteria: patients over 70 years of age, pregnant women, patients with a Glasgow score of less than 15, patients with more than 3 chronic pathologies and/or 1 complex chronic disease, or patients reconsulting the ED for the same reason for consultation. Sample size: assuming a 95% confidence level, an alpha risk (α) of 5%, a beta risk (β) of 20% and a proportion (p) of 50%, a sample of 1095 patients in cohort 1 and 547 in the intervention cohort will be required (a total of 1642 patients).
Data collection
The main independent variable to be collected will be the AT. The outcome variables will include: Satisfaction: to assess satisfaction, a survey will be carried out to evaluate the degree of satisfaction with this care; it will be measured at the time the patient is discharged, either home or to admission. The ad hoc survey will use an ordinal scale of 0-10; (c) Pain: the visual analogue scale (VAS) will be used to assess the intensity of the patient's pain on arrival at and departure from the ED; (d) Epidemiological variables: age and sex, obtained from the computer system and by asking the patient verbally or by means of a survey; (e) Clinical variables: pathological history, reason for triage consultation, diagnoses (nursing or medical) and type of discharge. These data will be obtained from the computer system and through a survey of ED patients. Complexity indicators such as language limitations will also be obtained.
Therefore, data collection in the first phase will be carried out from the computer system and through the different patient medical records, retrospectively, in 2019, 2020 and 2021. In phase 2, the collection will be through the computer system and the ad hoc survey. This survey will be administered after the patient is discharged from the ED, either home or admitted to hospital.
Data analysis
Qualitative variables will be described by proportions, calculating 95% confidence intervals. Quantitative variables will be described with the mean and median as measures of centrality, and the standard deviation and interquartile range as measures of dispersion. The normality of the distribution of quantitative variables will be checked with the Kolmogorov-Smirnov test, and the parametric and non-parametric tests indicated in each case in the bivariate analysis will be applied; typically, the most commonly used tests will be the Chi-square, the Mann-Whitney U, the Student's t and Pearson's correlation-regression tests. Finally, multivariate analysis will include logistic regression techniques. p-values of less than 0.05 will be considered statistically significant. All data will be analysed using SPSS v26.0 software.
Ethical considerations
This project has been approved by the Ethics Committee of the University Hospital of Bellvitge (PR085/20). The participants of step 3, phase 2 will be surveyed in order to know the opinion of the professionals with respect to the AT. Therefore, prior to the delivery of the survey, voluntary participation will be requested and, if accepted, the corresponding informed consent form (IC) will be signed. The IC will also be signed by patients in step 4, phase 2 (intervention), on whom the AT will be implemented on the basis of the protocols decided by the committee of experts. Before initiating step 4, approval will have to be obtained from the management of the centre and the services involved. Data collected for step 4, Phase 1 will be pseudo-anonymized by means of an alphanumeric code, and only the principal investigator (PI) of the study will be able to relate these data to the clinical history. Therefore, the identity of the participants will not be disclosed to any person. The data will be kept on the corporate network of the University Hospital of Bellvitge and by the PI for a minimum of 10 years. The Research Ethics Committee has been requested to waive the informed consent procedure, given that the data will be collected retrospectively, respecting the current legal data protection regulations. The information provided both orally and in writing is linked to Law 16/2010 of 3 June, on the rights to information concerning the health and autonomy of the patient, and clinical documentation. As far as personal data are concerned, they will be protected in accordance with current data protection regulations; participants may request the rectification of data that are incorrect, a copy of their data, or their transfer to a third party, for which they will have to contact the study nurse. To exercise these rights, or to learn more about confidentiality, participants will have to contact the principal investigator of the study. The study will be carried out in accordance with current legislation on research projects in our country, Biomedical Research Act 14/2007.
Validity and reliability
Scientific rigour is guaranteed on the basis of reliability, credibility and clinical safety. In Phase 2, Step 4, the AT will be implemented and will always be performed by the same people.
Patients will be cared for by only 3 nurses, and these nurses will have a minimum of 10 years of experience in the ED, the necessary academic training, an official master's degree and adequate training to be able to apply the AT.
DISCUSSION
There is no consensus on the best intervention to reduce crowding in the ED, because a large variety of indicators comes into play. However, there are several interventions that influence the length of stay of patients, thus reducing waiting times. Prolonged waiting times for medical care are known to negatively affect the quality of healthcare and increase the risk of undesirable consequences for patients (Shen & Lee, 2018). Different studies have implemented a wide range of interventions to address this collapse of EDs (Austin et al., 2020; De Freitas et al., 2018; Morley et al., 2018). One such intervention that has shown promise is to allow triage nurses to order diagnostic tests even before a physician sees the patient (Bittencourt et al., 2020). This is a valid option, especially for patients with traumatic pathologies. In another study, FT areas were implemented to help reduce length of stay in services and costs. This intervention decreased the number of repeat visits, reduced the mortality rate and increased patient satisfaction (Woo et al., 2017). Applying new work processes, such as the use of a Lean-based organisational methodology, has also proven useful in reducing length of stay in the ED (Allaudeen et al., 2017; Austin et al., 2020). All these interventions can be effective, but applying APN in both the ED and primary healthcare should not be forgotten as one of the central pillars of AT (Brugués et al., 2017; Vara-Ortiz & Fabrellas Padrés, 2022). In order for this to be done, all nurses involved in triage must be properly prepared. In some studies, it is the nurses themselves who feel that they do not have sufficient knowledge and skills to assign triage levels, thus making mistakes in their practice, which, in turn, leads to patient dissatisfaction and overcrowding in hospital ED triage units (Bijani & Khaleghi, 2019). For this reason, identifying nurses' perception of their professional capability is essential. It should be noted that the triage process is complicated (Bijani et al., 2020). Variability in triage classification has been described as multifactorial. The level of triage assignment has been correlated with the professional profile of the nurse and the number of triages performed by the nurse (Gómez-Angelats et al., 2018). Several studies have shown that the lack of knowledge and professional skills to perform triage influences the level of triage assigned (Bijani & Khaleghi, 2019; Mohammadi et al., 2022). Errors are made in prioritizing patients and placing them in a lower or higher level than their actual condition warrants, resulting in over- or under-triage. However, one study found that the concordance between triage levels assigned to the same patient was higher for more severe patients, although there were still some multifactorial discordances at less severe levels (Sarria-Guerrero et al., 2019). The correct classification of low-urgency patients increases the efficiency of ED flow and reduces waiting times for high-urgency visits (Zachariasse et al., 2019). Patients classified as Priority I and II on triage scales are given higher quality of care, but as the degree of priority decreases, the level of quality also decreases proportionally (Morley et al., 2018; Zachariasse et al., 2019).
It is necessary to give a satisfactory response to health needs, especially in patients classified as low complexity, who are more vulnerable to overcrowding, and to apply specific circuits for the care of these patients, such as AT (Hinson et al., 2018; Lauks et al., 2016). AT has been one of the most important strategies implemented in EDs to decrease waiting times in countries such as Italy, the UK, the USA (Michigan), Canada and Australia (Melbourne) (Barksdale et al., 2016; Butti et al., 2017; Innes et al., 2017; Lauks et al., 2016; Morley et al., 2018; Van Donk et al., 2017). It has been carried out by APNs.
Limitations
The present project has certain limitations. The first is the non-randomization of patients in either of the two phases of step 4. The second limitation could lie in the comparison of the results obtained between the two cohorts (retrospective and prospective), since they are separated by a time interval. The third is that the results will come from a single care centre. The fourth is that the study may have some missing information related to the data collection of the retrospective phase of step 4. Finally, there is no validated scale to assess satisfaction with AT specifically in the ED; therefore, an ad hoc Likert-type scale developed for this project will be used.
CONCLUSION
The execution of this project will try to demonstrate that implementing AT in the ED, mediated by APNs, can improve the quality of care indicators in the ED and increase citizen satisfaction. At the same time, this intervention should eventually improve waiting times and reduce ED collapse in the short term. Our results may modestly help to address the APN skills focused on AT and give greater importance to AT as a model that helps to relieve emergency departments by treating minor injuries on the basis of the autonomous role of the nurse. This could respond to the current demands of the healthcare system and make it more sustainable.
AUTHOR CONTRIBUTIONS
All authors have agreed on the final version and meet at least one of the following criteria [recommended by the ICMJE (www.icmje.org)]: • Substantial contributions to the conception or design of the work; or the acquisition, analysis or interpretation of data for the work; • Drafting the work or revising it critically for important intellectual content;
2023-02-01T06:17:57.297Z
2023-01-31T00:00:00.000
{ "year": 2023, "sha1": "b34c4e3089a3ab9309d9865727a7def755fca0a7", "oa_license": "CCBYNCND", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/nop2.1622", "oa_status": "GOLD", "pdf_src": "Wiley", "pdf_hash": "d493b0dec3a9738bc838a93ef261a608321311a3", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
15931931
pes2o/s2orc
v3-fos-license
Dynamic Similitude Design Method of the Distorted Model on Variable Thickness Cantilever Plates Zhong Luo 1,*, Yunpeng Zhu 2, Haopeng Liu 1 and Deyou Wang 3 1 School of Mechanical Engineering & Automation, Northeastern University, Shenyang 110819, China; hfliuhaopeng@163.com 2 Department of Automatic Control and System Engineering, University of Sheffield, Sheffield S13JD, UK; yzhu53@sheffield.ac.uk 3 AVIC Shenyang Aero-engine Design Institute, Shenyang 110042, China; wangdy606@163.com * Correspondence: zhluo@mail.neu.edu.cn; Tel.: +86-24-8368-0540

Introduction

Variable thickness (VT) plates have been widely applied in engineering practice, for example in advanced gas turbines, high-powered aircraft jet engines and high-speed centrifugal separators [1-3]. The vibration problems of the plate structures in these engineering machines are important considerations in the design process [4-6].

By using the finite element method, the vibration problems of VT plates have been widely studied. Huang et al. [7] investigated the free vibration problem of orthotropic rectangular VT plates by using a discrete method. Based on the Green function, Sakiyama et al. [8] discussed an approximate method for analyzing the free vibration of rectangular VT plates, and Guo et al. [9] introduced a dynamic function in the finite strip method, where numerical analysis was used to demonstrate the application of the approaches by analyzing simply supported stepped thickness (ST) plates. It has been shown that the numerical solutions of the approximate method have good accuracy for various types of rectangular plates with uniform or non-uniform thickness.

Moreover, Eisenberger and Jabareen [10], in 2001, computed exact axisymmetric vibration frequencies of VT circular and annular plates by approximating the variation in thickness with an infinite power series. Jiang and Redekop [11] studied the free vibration characteristics of linear elastic orthotropic toroidal VT shells based on the Sanders-Budiansky shell equations, and Kang and Leissa [12] also discussed the free vibration of VT paraboloidal shells by using a three-dimensional method. Recently, based on the polynomial fitted thickness, VT plates with various combinations of boundary conditions were investigated by Shufrin and Eisenberger [13], where two shear deformation plate theories were applied to provide accurate results in solving natural frequencies. By using the Generalized Differential Quadrature (GDQ) method, most recently, Tornabene et al. [14] and Bacciocchi et al. [15] investigated the free vibration of doubly-curved shells, singly-curved shells and plates with continuous thickness variation, showing that the GDQ method provides an accurate, stable and reliable numerical tool for analyzing variable thickness thin walled structures.

However, although theoretical studies have been widely discussed, experimental tests of the actual structure are still necessary. The issue is that, in practice, experimental investigation of thin walled structures like VT plates is expensive and time-consuming. Consequently, a scaled down model, which is usually designed based on similitude theory, is employed to reflect the prototype behavior. In general, due to the lack of availability of materials, or the unavailability of members' specified dimensions, researchers often use thin plates for the analysis [16,17], but this may limit the application of the test results.
Dynamic similitude design of thin walled plate structures is important in engineering practice and has been discussed by many researchers. For example, De Rosa et al. [18] investigated distorted scaling laws for predicting the dynamic response of rectangular flexural plates based on the analysis of the vibration energy. Ramu et al. [19] considered a scaled model made of different materials to predict the dynamic behavior of the prototype by using a scaling law for free vibration established through dimensional analysis. Qian et al. [20] established the scaling laws of laminated plates based on governing equation analysis to predict the impulse response of the prototype. The results indicate that scaling laws can accurately predict the undamaged response to impact. Moreover, scaling laws of isotropic laminated plates have been studied by Ungbhakorn et al. [21]. In their study, governing equations of buckling and frequency were used to derive the scaling laws, and partial similitude was also considered, recommending the scaling laws with good accuracy. Rezaeepazhand et al. [22] studied the scaling laws of distorted models for predicting the buckling and free vibration of laminated plates, deriving the scaling laws for different material and geometrical properties by using the governing equations of laminated plates and shells.

Basically, the dynamic similitude design of complex structures, and especially the design of distorted models, still focuses on the distortion of materials. However, due to the structural complexity of the prototype, the study of geometrically distorted models is a real need. Most recently, Luo and Zhu et al. [23-25] presented a series of methods for the design of geometrically distorted models of plates and shells. In their work, sensitivity analysis was employed to derive accurate distorted scaling laws to predict the dynamic characteristics of the prototype [26,27]. In this study, in order to address the problem of designing a scaled model for a VT plate, a simplified ST plate, which has the same dynamic properties as the VT plate, is introduced. By using the transfer matrix method, the equivalent thickness of the corresponding thin plate is derived for each vibration mode. Then, a unified thickness of the Model Thin (MT) plate is selected and the corresponding scaling law is proposed such that the dynamic properties of the prototype VT plate can be predicted by using the MT plate.

The manuscript is organized as follows. In Section 2, the distorted scaling law of thin walled plates is derived based on the governing equation. The simplified ST plate is then proposed in Section 3, where the transfer matrices of both ST plates and thin plates with the cantilever boundary condition are discussed, and the equivalent thicknesses of different vibration modes are computed. In Section 4, the scaled down model of the VT plate is designed with a unified thickness, and the corresponding scaling laws are derived to predict the dynamic properties of the prototype VT plate. A case study is also provided to validate the proposed design method, and a general process of designing the MT plate is summarized. Finally, conclusions are presented in Section 5.
Distorted Scaling Law of Thin Cantilever Plates

Consider a cantilever plate in the coordinate system oxyz, where a and b are the length and the width along the x and y directions, as shown in Figure 1. u(x, y, t), v(x, y, t) and w(x, y, t) represent the displacements in the x, y and z directions, respectively. The Young's modulus, Poisson's ratio and the density of the plate's material are denoted by E, µ and ρ. The governing equation of the thin plate is [28]

D∇⁴w(x, y, t) + ρh ∂²w(x, y, t)/∂t² = 0, (1)

where ∇² = ∂²/∂x² + ∂²/∂y² is the Laplace operator and D = Eh³/12(1 − µ²) is the flexural rigidity. The cantilever boundary condition can be found in [23] for more details. Denote the deflection of the plate by

w(x, y, t) = W(x, y) sin(2πωt + ϕ), (2)

where ω is the natural frequency of the plate. Substituting Equation (2) into Equation (1) yields

∇⁴W − α⁴W = 0, (3)

where α⁴ = ω²ρh/D. Considering that Equation (3) is satisfied by both the model and the prototype,

∇⁴W_p − α_p⁴W_p = 0, ∇⁴W_m − α_m⁴W_m = 0, (4)

where subscript p represents the prototype and subscript m represents the model, which can be rewritten in terms of the scale factors

λ_j = λ_{j,p}/λ_{j,m}, (5)

where λ_j is used to represent the scaling laws and j represents each physical quantity, for example, j = a, b, E, W, and so on. According to the similitude theory [22],

λ_α⁴ λ_a⁴ = λ_α⁴ λ_b⁴ = 1 (6)

is obtained from Equation (5), and according to α⁴ = ω²(ρh/D) and D = Eh³/12(1 − µ²), there is

λ_α⁴ = λ_ω² λ_ρ λ_{1−µ²} / (λ_E λ_h²). (7)

Let λ_a = λ_b = λ, and

λ_ω = (λ_h / λ²) √( λ_E / (λ_ρ λ_{1−µ²}) ) (8)

can be derived by substituting Equation (7) into Equation (6). Equation (8) is the distorted scaling law with respect to the material parameters and the thickness h. By using the scaling law (8), the natural frequency of a cantilever thin plate can be predicted by using an MT plate, under the same boundary condition, made of different materials and with an arbitrary thickness. However, if the prototype is a VT plate, its dynamic properties cannot be predicted by using such a simple scaling law, and an effective approach to designing the scaled down model for predicting the prototype VT plate is needed.
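To make the use of the scaling law concrete, the short Python sketch below applies the frequency relation reconstructed in Equation (8) to predict a prototype frequency from a measured model frequency. It is only an illustrative sketch: the numerical values are placeholders and are not taken from the paper's tables.

```python
# Minimal sketch of the distorted thin-plate frequency scaling law,
# lambda_omega = (lambda_h / lambda^2) * sqrt(lambda_E / (lambda_rho * lambda_{1-mu^2})).
# All numbers below are illustrative placeholders, not values from the paper.

import math

def frequency_scaling_factor(lam, lam_h, lam_E, lam_rho, mu_p, mu_m):
    """Return lambda_omega = omega_p / omega_m for a geometrically similar planform
    (lambda_a = lambda_b = lam) with distorted thickness and material."""
    lam_one_minus_mu2 = (1.0 - mu_p**2) / (1.0 - mu_m**2)
    return (lam_h / lam**2) * math.sqrt(lam_E / (lam_rho * lam_one_minus_mu2))

# Example: prototype twice as large, same material, thickness also doubled.
lam, lam_h = 2.0, 2.0          # planform and thickness ratios (prototype/model)
lam_E, lam_rho = 1.0, 1.0      # same material
omega_model = 120.0            # measured model frequency in Hz (placeholder)

lam_omega = frequency_scaling_factor(lam, lam_h, lam_E, lam_rho, mu_p=0.3, mu_m=0.3)
print(f"predicted prototype frequency: {lam_omega * omega_model:.1f} Hz")
```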
Simplification of the VT Plate

The cross section of a VT plate with length a and width b is shown in Figure 2, where h(x) is the thickness. A Clamped-Free (C-F) boundary condition is considered in the present study: the edge A* in Figure 2 is clamped and the edge B* is free. It is obvious that a VT plate can be simplified as a Stepped Thickness (ST) plate with q steps, where the VT plate and the ST plate are equivalent when q → +∞, as shown in Figure 3. Each step has the length a/q, and the thickness of each step, H_si, is determined from the thickness profile h(x) over the corresponding segment.

For example, consider the VT plate made of 42CrMo whose geometric and material parameters are shown in Table 1. The previous six orders' natural frequencies, including the flexural vibration (F), the torsional vibration (T) and the chordwise bending vibration (EB), are compared in Table 3 with those of the simplified ST plate with q = 5, whose geometric and material parameters are shown in Table 2. The relative simplification error η_s between the VT plate and the ST plate is defined as

η_s = |ω_v − ω_s| / ω_v × 100%,

where ω_v and ω_s are the natural frequencies of the VT plate and the ST plate, respectively. Table 3 indicates that the simplified ST plate has the same vibration modes as the corresponding VT plate. The natural frequency results show small relative errors between the two types of plates, η_s ≤ 5%, especially in the torsional vibration and the chordwise bending vibration, indicating that the equivalent method is applicable.
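As a quick illustration of the simplification step above, the following Python sketch discretises a thickness profile h(x) into q equal-length steps. The paper's exact rule for assigning each step thickness is not recoverable from the extracted text, so the sketch simply evaluates h(x) at the mid-point of each segment; both that rule and the numbers are assumptions.

```python
# Sketch of discretising a variable-thickness (VT) profile into a stepped-thickness
# (ST) plate with q steps of equal length a/q. Taking the mid-point thickness of each
# segment is an assumption; the paper's exact rule is not reproduced here.

def stepped_thicknesses(h_profile, a, q):
    """h_profile: callable h(x) giving thickness at position x along the length a.
    Returns a list of (segment_length, step_thickness) pairs for the ST plate."""
    seg = a / q
    return [(seg, h_profile((i + 0.5) * seg)) for i in range(q)]

# Example: thickness tapering linearly from 6 mm at the clamped edge to 2 mm at the free edge.
a = 0.20                                   # plate length in m (placeholder)
h = lambda x: 0.006 - 0.004 * (x / a)      # linear taper (placeholder profile)
for length, thickness in stepped_thicknesses(h, a, q=5):
    print(f"step length {length * 1e3:.0f} mm, thickness {thickness * 1e3:.2f} mm")
```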
Equivalence Design Based on Transfer Matrix

In order to address the issue of predicting the VT cantilever plate by using an MT plate, the transfer matrix method is introduced in this section to establish the equivalent relationship between the ST plate and the thin plate.

Transfer Matrices of the Plate Structures

For a thin walled plate structure, the transfer matrix, which is used to analytically calculate the dynamic properties, can be derived based on the Kirchhoff hypothesis for the thin walled plate element, as below. According to the Hamilton theory [28], the governing equations of the plate element relate the deflection w, the bending moments M_x and M_y, the twisting moments M_xy and M_yx, and the shear forces Q_x and Q_y, where ρh ∂²w/∂t² is the inertia term and subscripts x and y represent the directions along the x and y axes, respectively. The bending and twisting moments are expressed through the flexural rigidity of the plate element, D = Eh³/12(1 − µ²). Furthermore, an equation for the rotation angle θ_x, which is related to the boundary conditions, is introduced, such that seven equations are established with the seven variables [w θ_x M_x M_y M_xy Q_x Q_y]. According to governing Equations (11) to (13), denote the variable vector as Z = [w θ_x M_x M_y M_xy Q_x Q_y]^T, such that Equations (11) to (13) can be written in the matrix form of Equation (15), where U is a 7 × 7 operator matrix whose elements are differential operators in y, for example U_76 = ∂/∂y.

It is worth noting that the transfer matrix of the flexural vibration can be established by using the one-dimensional method, meaning only two boundaries are considered (x = 0 and x = a), which leads to a much simpler form than for the other two vibration types (T and EB). Consequently, the flexural vibration is taken as an example in the following work. The other two types of vibration can be treated through the same process, and their transfer matrices are given in references [25,29]. In a flexural vibration, the displacement functions and the corresponding stresses are assumed in the form of Equation (17), where the wave number along the y direction is m = 1, and w(x), θ_x(x), M_x(x), M_y(x) are undetermined functions of x only.
Substituting Equation (17) into Equation (15) yields, for m = 1, a system of ordinary differential equations in x,

dZ(x)/dx = U Z(x), (18)

in which U is now a constant coefficient matrix. Considering that the thin plate is divided into K sub-sections along the x direction, for the kth sub-section the solution of (18) can be written as [30]

Z(x_k + a_k) = T(a_k) Z(x_k), (21)

where a_k is the length of the kth sub-section and T(a_k) = e^{U a_k} is the transfer matrix of the plate element. By combining Equation (21) along the length of the plate, there is

Z(a) = e^{Ua} Z(0), (22)

where e^{Ua} is the transfer matrix of the thin plate.

Equivalent Thickness of the Thin Plate

Assume that the simplified ST plate contains N steps with lengths a_si and thicknesses h_si (i = 1, 2, ..., N), as shown in Figure 4. Each step of the plate, by applying the transfer matrix method, can be divided into r sections, such that, in total, R = N × r subsections are applied in analyzing the ST plate, where the boundary of each section is defined as ξ_i (i = 0, 1, ..., R). For the ST plate, Equation (22) can be written for the nth step, and, according to Equation (21), the transfer across the nth step is e^{U_n a_sn}. Combining all N steps of the ST plate,

Z(a) = T Z(0), (25)

where T = ∏_{n=1}^{N} e^{U_n a_sn} represents the transfer matrix of the ST plate.

Substituting the boundary conditions of the cantilever plate into Equation (25) yields Equation (27), where T_ij are the elements of the transfer matrix T. According to Equation (27), the natural frequency of the ST plate can be obtained by letting the corresponding boundary determinant vanish, Equation (28); the specific algorithm for calculating Equation (28) is discussed in [19] in detail. Similarly, the natural frequency of a thin plate can be calculated from the analogous determinant condition, Equation (29).

In Equation (28), each order's natural frequency of the ST plate's flexural vibration, defined as ω_st,n, is obtained by ordering the results from small to large, so Equation (28) can be expressed as Equation (30). For an equivalent thin plate, the equivalent thickness corresponding to each order's vibration, h_e,n, is the variable to be obtained, and Equation (29) becomes Equation (31), where the frequency ω_st calculated by Equation (30) is substituted into Equation (31). Consequently, the equivalent thickness of the thin plate corresponding to the ST plate is obtained.

Distorted Models and the Scaling Law

It has been discussed in the previous sections that, for each vibration mode, an equivalent thickness of the VT plate can be calculated as h_e,n via the simplified ST plate by using the transfer matrix method. However, in practice, it is obviously impossible to design an MT plate with a specific thickness h_e,n for each mode of the prototype VT plate. Usually, a unified thickness, h_Uni, is chosen in the design of the scaled model.
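Before turning to the unified-thickness model, the following Python sketch mirrors the transfer-matrix procedure just described, but for a cantilever Euler-Bernoulli beam strip rather than the full 7 × 7 plate formulation (which is not reproduced here): it chains matrix exponentials over the steps, locates the first natural frequency from the boundary determinant, and then solves for the equivalent uniform thickness whose first frequency matches that of the stepped member. The material data and step dimensions are placeholders, not the paper's parameters.

```python
# Sketch of the transfer-matrix procedure with a beam strip as a stand-in for the plate.
# State vector Z = [w, theta, M, V]; Z(x + L) = expm(U * L) @ Z(x).

import numpy as np
from scipy.linalg import expm
from scipy.optimize import brentq

E, rho, b = 206e9, 7850.0, 0.05            # steel-like material and strip width (placeholders)

def state_matrix(omega, h):
    EI, A = E * b * h**3 / 12.0, b * h
    return np.array([[0.0, 1.0, 0.0,      0.0],
                     [0.0, 0.0, 1.0 / EI, 0.0],
                     [0.0, 0.0, 0.0,      1.0],
                     [rho * A * omega**2, 0.0, 0.0, 0.0]])

def cantilever_det(omega, steps):
    """steps: list of (length, thickness) ordered from the clamped end.
    Clamped at x = 0 (w = theta = 0), free at x = a (M = V = 0)."""
    T = np.eye(4)
    for length, h in steps:
        T = expm(state_matrix(omega, h) * length) @ T
    return np.linalg.det(T[2:4, 2:4])      # boundary determinant for M(a) = V(a) = 0

def first_frequency(steps, w_max=2000.0, n_scan=800):
    """Scan circular frequency for the first sign change, then refine with brentq."""
    grid = np.linspace(1.0, w_max, n_scan)
    vals = [cantilever_det(w, steps) for w in grid]
    for i in range(len(grid) - 1):
        if vals[i] * vals[i + 1] < 0:
            return brentq(cantilever_det, grid[i], grid[i + 1], args=(steps,))
    raise RuntimeError("no root found in scan range")

# Stepped (ST) member, thickest at the clamped end, and its first natural frequency
st_steps = [(0.04, 0.006), (0.04, 0.005), (0.04, 0.004), (0.04, 0.003), (0.04, 0.002)]
w_st = first_frequency(st_steps)

# Equivalent uniform thickness h_e: uniform member whose first frequency matches w_st
a_total = sum(length for length, _ in st_steps)
h_e = brentq(lambda h: first_frequency([(a_total, h)]) - w_st, 0.001, 0.01)
print(f"first frequency {w_st / (2 * np.pi):.1f} Hz, equivalent thickness {h_e * 1e3:.2f} mm")
```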
In order to address this issue, denote the ratio between the unified thickness h_Uni and the equivalent thickness h_e,n of the nth mode as in Equation (32). According to the frequency scaling law (8), the frequency relationship between the prototype equivalent thin plate and its scaled down model is given by Equation (33), while the scaling law between the scaled model thin plate and the distorted model plate with the unified thickness h_Uni is calculated as Equation (34), where h_(e,n),m represents the model thickness corresponding to the nth mode of the thin plate. Consequently, the scaling law between the unified distorted model and the prototype equivalent thin plate, as well as the VT plate, is derived as Equation (35).

Next, a similitude design case study is provided to illustrate the design process of the scaled down plate. In this case study, the number of steps of the simplified ST plate is N = 5, and the material and geometrical parameters are shown in Table 2; in the ANSYS simulation (ANSYS 14.0, ANSYS, Pittsburgh, PA, USA), the element Solid 186 is used. The equivalent thicknesses of the first four flexural vibration modes are defined as h_eiF, i = 1, ..., 4, and the relative prediction error is shown in Table 4, where the width, length and material parameters of the equivalent thin plate are the same as those of the ST plate shown in Table 2, and ω_e and ω_s are the natural frequencies of the equivalent thickness thin plate and the ST plate, respectively. The results in Table 4 show that the ST plate has the same order and shape of vibration mode as the equivalent thin plate, and the natural frequencies of each order's vibration are also close, with a small relative error of η_e < 2%. This indicates that the design algorithm for calculating the equivalent thickness is applicable, and, therefore, the connections between the VT plate, the ST plate and the thin plate are established.

Consider that, in the scaled down model, the material of the model plate is No. 45 steel and the geometrical parameters of the plate are shown in Table 5 with λ = 2. The unified thickness of the model plate is h_Uni = 2 mm, and the predicted natural frequencies obtained by using the scaling law (35) are shown in Table 6, where η_ave is the relative error of the predicted values. Table 6 shows accurate predictions: for the same vibration mode, the predicted natural frequencies are close to those of the prototype with a relative error η_ave < 5%, indicating that the method of predicting a VT plate's vibration characteristics by using an MT plate is applicable.

In this example, it can be seen that the dynamic properties of a VT plate can be accurately predicted by using a designed MT plate of different materials. It is worth noting that the proposed method can be simply implemented in a MATLAB program (MATLAB 2010b, MathWorks, Natick, MA, USA), such that the design of the MT plate can be easily conducted. Moreover, two additional cases are discussed below.

Firstly, only flexural vibration is considered in the proposed example. It is noticeable that the coefficient Ξ of the torsional vibration and the chordwise bending vibration can be obtained by replacing T with the transfer matrices shown in Reference [29].

Secondly, the length and width of the MT plate are completely scaled down as λ_a = λ_b = λ; a more complex case, where the geometric scaling factors λ_a, λ_b and λ_{h_e} are all distorted, can be treated through a similar process by referring to the scaling laws presented in [22-24] between the equivalent thin plate and the MT plate.
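A hedged sketch of how the per-mode prediction of the case study can be assembled in code is given below: each mode's equivalent thickness h_e,n is combined with the unified model thickness h_Uni and the thin-plate scaling law to map a measured model frequency to a predicted prototype frequency. The equivalent thicknesses, measured frequencies and material ratios are placeholders, not the values of Tables 4-6.

```python
# Sketch of predicting each prototype mode from one unified-thickness model plate.
# The per-mode factor combines the thin-plate scaling law with the ratio between the
# mode's equivalent thickness h_e,n and the unified model thickness h_Uni.
# Values below are placeholders, not the paper's table data.

import math

def mode_scaling_factor(h_e_n, h_uni, lam, lam_E, lam_rho, mu_p=0.3, mu_m=0.3):
    lam_h = h_e_n / h_uni                        # per-mode thickness ratio (prototype/model)
    lam_mu = (1.0 - mu_p**2) / (1.0 - mu_m**2)
    return (lam_h / lam**2) * math.sqrt(lam_E / (lam_rho * lam_mu))

h_uni = 0.002                                    # unified model thickness, 2 mm
h_equiv = [0.0052, 0.0047, 0.0043, 0.0040]       # equivalent thicknesses per flexural mode (placeholders)
measured_model = [310.0, 820.0, 1650.0, 2700.0]  # measured model frequencies in Hz (placeholders)

for n, (h_e, f_m) in enumerate(zip(h_equiv, measured_model), start=1):
    f_p = mode_scaling_factor(h_e, h_uni, lam=2.0, lam_E=1.02, lam_rho=0.99) * f_m
    print(f"mode {n}F: predicted prototype frequency {f_p:.1f} Hz")
```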
A General Design Process

According to the discussion in the above sections, a distorted scaled model of the VT plate is designed and the method of reducing the prediction errors is investigated. To be clear, the process of the similitude design is summarized below and illustrated in Figure 5.

Step 1: Simplify the VT plate by using an ST plate with a finite number of steps N, such that the simplified ST plate can reveal the dynamic properties of the VT plate up to a certain order. Usually, this process can be facilitated by using a numerical analysis method. For example, in the case study of this paper, the frequency errors of different vibration modes between the VT plate and ST plates with different numbers of steps N are shown in Figure 6. Fitting the curves with fourth-order polynomials yields Equation (37). The adjusted determination coefficient R² is given as

R²_adj = 1 − (1 − R²)(n − 1)/(n − k − 1), with R² = 1 − Σ(Y − Ŷ)²/Σ(Y − Ȳ)², (38)

where n is the sample size; k is the order of the polynomial; and Ŷ, Ȳ and Y are the fitted value, the average value and the actual value (simulation value), respectively. Substituting the relevant parameters into Equation (38) yields Equation (39), which indicates that the fitted curves can be used to determine η_s,1F, η_s,1T and η_s,1EB. Assuming that η_s ≤ 5% is acceptable, the step number N is solved as N = 4. Similarly, calculating the frequency errors of the higher order vibrations (the previous six modes), N = 5 is chosen as the number of steps in the case study.

Step 2: Calculate the equivalent thickness for each order's vibration by using the transfer matrix method, obtaining h_e,1, h_e,2, ..., h_e,n.
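Before moving on to Step 3, the curve fitting used in Step 1 can be sketched as follows in Python: a fourth-order polynomial is fitted to the simplification error against the number of steps, the adjusted determination coefficient is computed as in Equation (38), and the smallest N meeting the 5% tolerance is read off. The data points are placeholders rather than the values behind Figure 6.

```python
# Sketch of Step 1's curve fitting: fourth-order polynomial fit of the simplification
# error eta_s against the number of steps N, adjusted R^2, and the smallest acceptable N.
# The data points are placeholders, not the values plotted in Figure 6.

import numpy as np

N_values = np.array([2, 3, 4, 5, 6, 7, 8], dtype=float)
eta_s    = np.array([12.0, 7.5, 4.6, 3.1, 2.3, 1.8, 1.5])   # % error, placeholder data

k = 4                                          # polynomial order
coeffs = np.polyfit(N_values, eta_s, k)
fitted = np.polyval(coeffs, N_values)

# Adjusted determination coefficient: R^2_adj = 1 - (1 - R^2) * (n - 1) / (n - k - 1)
n = len(N_values)
r2 = 1.0 - np.sum((eta_s - fitted) ** 2) / np.sum((eta_s - eta_s.mean()) ** 2)
r2_adj = 1.0 - (1.0 - r2) * (n - 1) / (n - k - 1)

# Smallest step number whose fitted error is within the 5% tolerance
N_grid = np.arange(N_values.min(), N_values.max() + 1)
N_min = int(N_grid[np.polyval(coeffs, N_grid) <= 5.0][0])
print(f"adjusted R^2 = {r2_adj:.3f}, smallest acceptable N = {N_min}")
```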
Step 3: Unify the equivalent thicknesses obtained in Step 2 into any thickness of interest, h_Uni, and design the similitude model of the thin plate with the thickness h_Uni.

Step 4: Derive the distorted scaling laws (35) according to the equivalent thicknesses h_e,1, h_e,2, ..., h_e,n and the unified thickness h_Uni, as well as the different materials of the model and the prototype.

Step 5: Test the distorted model and predict the dynamic characteristics of the prototype VT plate.

It is worth noting that the design process summarized above is a general process for designing a distorted similitude model of thin walled structures with continuous thickness variations. Based on the sketch shown in Figure 5, two main issues are addressed in the present study. Firstly, find a simple equivalent structure that has the same dynamic properties of interest as the prototype complex structure. Then, design the similitude model of the simple equivalent structure according to the requirements of the designer; in this step, the dynamic similitude design approaches that have been studied by the authors can be well applied. The proposed technique can be extended to deal with, for example, thin walled shells, annular plates, and other structures that have the characteristic of variable thickness.

Conclusions

Much research has been done on the similitude design of thin walled structures, such as plates and cylindrical shells. However, in engineering practice, the real structures do not always have a uniform thickness. The variable thickness can significantly affect the dynamic properties of the structure, such that the prediction of a simple scaled down model may fail in its accuracy.

In order to address this issue, a new technique for the similitude design of a cantilever VT plate is proposed in the present study based on the transfer matrix method. In this approach, a simplified ST plate is introduced as a connection between the VT plate and the equivalent thin plate, such that the distorted similitude design method of the thin walled structure can be directly applied in designing the scaled model. The transfer matrices of the ST plate and the thin plate have been derived in this study, and the equivalent thicknesses h_e,n of the thin plate corresponding to the specific vibration modes of the VT plate are calculated according to these transfer matrices. Moreover, a unified model plate is defined as the Model Thin (MT) plate such that different orders' dynamic properties of the prototype VT plate can be predicted by using only one scaled model with the thickness h_Uni, and the scaling law between the scaled model and the prototype is also derived as (35). Finally, a case study is employed to validate the proposed approach, and a general process of determining the scaled model and its scaling laws is summarized to emphasize its significance in engineering practice.

Figure 2. The cantilever variable thickness plate. (a) Parameters of the plate; (b) the cross section of the plate.
Figure 3. Rectangular elements that approach the variable thickness.
Figure 4. The sub-sections of the stepped thickness (ST) plate.
Figure 5. The process of the simplified design.
Figure 6. The error curves of ST plates.
Table 1. Parameters of the variable thickness (VT) plate.
Table 2. Parameters of the stepped thickness (ST) plate.
Table 3. Comparison between the VT and the ST plates.
(Table footnote: ST plate represents the stepped thickness plate.)
Table 5. Parameters of the model thin (MT) plate.
Table 6. Prediction of the VT plate.
2016-08-24T23:09:51.855Z
2016-08-13T00:00:00.000
{ "year": 2016, "sha1": "e3e0d322ad3ba2ec07b963a6f7eb78882d137a41", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2076-3417/6/8/228/pdf?version=1471068113", "oa_status": "GOLD", "pdf_src": "Crawler", "pdf_hash": "e3e0d322ad3ba2ec07b963a6f7eb78882d137a41", "s2fieldsofstudy": [ "Engineering", "Physics" ], "extfieldsofstudy": [ "Engineering" ] }
234107530
pes2o/s2orc
v3-fos-license
Aspects of assessing the quality of education at a university. The present research is aimed at identifying the quality of education from the perspective of students' vision as participants in this educational process (according to the following criteria: fundamentality, sufficiency, modernity, and level of digitalization of the quality of education), its impact on their educational level, and the study of students' motivation to learn, based on their assessment of the content level of educational curricula. The main research method is a comparative sociological study, which compares the research data of students of the M. State Pedagogical University (BSPU) with the results of other monitoring studies conducted in the Russian Federation and the Republic of Bashkortostan. In addition to the questionnaire method, the focus group method was also used. The study revealed students' attitudes to the quality of education, motivation to study, assessment of the content and quality of education, attitude to technologies, and learning conditions. According to the results of the 2019 study, 75% of students of BSPU were generally satisfied with the level of education quality. The novelty of the research is based on the following finding: quantitative parameters dominate over qualitative ones, which, as a result, leads to information overload of the subjects of the educational process and a decrease in the level of competencies that determine the ability of students to think independently and analyze information.

Introduction

In this article, the term quality of education is considered from the sociological standpoint, the essence of which is determining the state and result of the education process in society from the perspective of the needs and expectations of various socio-professional groups. The quality of education is determined not only through the assessment of general indicators; it also involves aspects such as the level of educational content, the efficiency and effectiveness of the applied forms and methods of training, human resources potential, and other factors that determine the professional and competence level of students. To correctly assess the quality of education, it is necessary to analyze two of its components: the quality of knowledge received by students and the activity component of an educational institution at the municipal, regional, and federal levels [1, p. 4]. The quality of education is a measure of the correspondence of the level of education and the level of training obtained by a specialist with both their own expectations and the expectations of the consumer [2, p. 78]. The research goal is to identify the specifics of students' attitude to the quality of the education acquired (its fundamentality, sufficiency, and modernity), as well as to study the motivation for higher education and the respondents' assessment of the content of educational programs. The research objectives are to reveal the content of the concept of quality of education as a sociological category; to identify students' ideas concerning the quality of education; to analyze motivation for learning; to investigate the relationship between the content of education and respondents' assessment of the quality of education; to reveal students' attitude to technologies and learning conditions; and to show the relationship between the research activities of students and the quality of their education.
The research hypothesis is formulated as follows: the quality of education is a system-based formation that depends on the following set of factors: educational technologies, educational activity conditions, motivation to obtain higher education, the content of the educational program, the list of academic subjects, the professional qualification level of teachers, the possibility of free choice of disciplines, teachers and textbooks, professional inclinations and vocation, and the cognitive activity of students.

Methods

This study was conducted in March-April of 2019 at the BSPU. The respondents were university students. A two-stage sample was used in the sociological study, combining elements of probabilistic and targeted sampling. The focus of the study at the first stage was to select students of all five years of study from 11 faculties. This made it possible to ensure the selection of typical representatives of the general population. At the second stage, study groups of students of a particular faculty were randomly selected, and a complete survey of each selected group was conducted to ensure the principle of equal probability of any respondent being included in the sample. The total number of respondents was 344 students. The sample structure was as follows: first-year students - 25%, second-year students - 20%, third-year students - 24%, fourth-year students - 22%, and fifth-year students - 9%. The study covered both undergraduate and specialty students. At the stage of the pilot study, the focus group method was used, as well as the method of comparative sociological research.

Results and discussion

Students' satisfaction with the quality of education is shown in Table 1 and Figure 1. As can be seen from Table 1, approximately three-quarters of BSPU students were generally satisfied with the quality of education. About 25% of the students were not satisfied. In the current education system, it is important to examine one of its main components, namely, the ability of students to choose academic disciplines and courses. The results of the sociological survey on the possibility of free choice of disciplines and courses are presented in Table 2 and Figure 2. About two-thirds of the BSPU students believe that the procedure of free choice is available at their faculty. In fact, the situation is different, which was revealed by research based on the focus group method conducted with students of three faculties. First, it was revealed that the possibility of selecting certain disciplines was unknown to a significant part of students. Second, the existing choice of disciplines in practice was often formal. Third, in practice, elective subjects were suggested by the heads of general education programs rather than by the students themselves. Fourth, some students chose certain disciplines on the advice of the Dean's office or teachers, in order to pass exams more easily. It should be assumed that the procedures for free choice in Russian universities, in the form provided for by the Bologna Convention, are poorly implemented in practice. The following research results have shown that only 27% of students fully supported the use of electronic technologies and resources in the educational process. Another 39% of students approved their use as an additional information resource and technology in the learning process. At the same time, students pointed to the phenomenon of information overload associated with working with electronic resources, while 11% of respondents noted a decrease in the quality of training.
Previously conducted sociological studies of students in the Republic of Bashkortostan (2015-2018) have shown similar attitudes of students to electronic resources and distance learning technologies. The following answers were received to the question concerning the preferred teaching methods and techniques: -91% of respondents preferred live, discussion-based forms of learning, which provide the most complete understanding of the essence of the subject matter under consideration; -training through the use of the Internet, computers, and remote forms of work was approved by only 9% of respondents; at the same time, only one-fifth of students expressed satisfaction with the quality of the Internet capabilities and the operating system of computer equipment. The same conclusions were drawn by researchers of the Institute for Social Analysis and Prediction (ISAP) of the Russian Presidential Academy of National Economy and Public Administration (RANEPA), who surveyed 12,201 students and 4,000 teachers from 53 branches. The results have shown that the majority of respondents believed that the quality of education had significantly decreased due to the implementation of distance learning, which turned out to be less effective than the traditional full-time form. Moreover, 69.9% of students and 85.5% of teachers noted full-time education as preferred [3]. A joint study by the National Research University Higher School of Economics and Tomsk State University, which surveyed 35,000 students from 400 universities in Russia from March to June 2020, also noted that more than half of the students surveyed (65%) believed that distance learning was less effective than in-person learning. Consequently, the technocratic illusions that computer technology will solve all the problems of contemporary education have not been justified. The formal and technical aspect of education does not negate but rather increases the importance of the content aspect of education. Continuing the research, it should be noted that all-Russian sociological studies show that the quality of education is determined by priority social characteristics, namely, teachers' attitude to their work and their vocation for the basic procedural content of the work performed [4-11]. "...On average, 75% of teachers feel psychological comfort associated with their work, because it corresponds to their professional vocation. The remaining 25% of teachers feel cool about their work, experience psychological discomfort, which significantly affects the quality of work and the overall quality of education" [12, p. 58]. Hence, the presence of professional inclinations and abilities in future teachers is a factor contributing to improving the quality of education.

Conclusion

According to the results of the study, the following conclusions have been drawn: -75% of BSPU students were generally satisfied with the level of quality of education. -The orientation of students to work in their specialty after completing their studies in pedagogical universities is significantly lower than that of students of technical universities. -The administration of higher education institutions and the teaching staff should be encouraged to pay special attention to the applied significance of the content of the subjects taught. Students can be greatly assisted by self-education skills, skills in working with sources, meta-subject skills, and turning to theory in the course of practical activities.
-A mechanism and a procedure for selecting subjects and training courses at the university should be developed and implemented. -The necessary balance between theory and practice, and between lectures and practical classes, should be established. -A balance between science and education should be ensured by increasing the share of free scientific activity in the total workload of teachers and students, and a mechanism for choosing innovative subjects and courses should be created.
2021-05-11T00:07:03.423Z
2021-01-01T00:00:00.000
{ "year": 2021, "sha1": "ba144889edfe1e84d66ecb213c892749c08db0b8", "oa_license": "CCBY", "oa_url": "https://www.shs-conferences.org/articles/shsconf/pdf/2021/09/shsconf_ec2020_01021.pdf", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "d2e56fdd222dac7e7b3fd761bd00b9e09968f185", "s2fieldsofstudy": [ "Education", "Sociology" ], "extfieldsofstudy": [ "Psychology" ] }
247075751
pes2o/s2orc
v3-fos-license
The relationship between psychological capital, organisational commitment and job satisfaction in the fisheries sector - A case study of the Fisheries Sub-Department of Thai Binh Province

Keywords: job satisfaction; organisational commitment; psychological capital

The study analyses the relationship between psychological capital, organisational commitment, and job satisfaction in the fisheries sector, using a case study of the Fisheries Sub-Department of Thai Binh Province. The empirical data were collected through 237 questionnaires completed by civil servants working at the Fisheries Sub-Department of Thai Binh Province. The study applies exploratory factor analysis, confirmatory factor analysis, and structural equation modeling to evaluate the relationship between psychological capital, organisational commitment, and job satisfaction of civil servants. The analysis results show a direct impact of psychological capital on organisational commitment, and psychological capital has an indirect effect on job satisfaction. The results also show a correlation between organisational commitment and job satisfaction. The study suggests significant policy implications for the Fisheries Sub-Department of Thai Binh Province to increase organisational commitment and job satisfaction.

Introduction

Over the past decade, Vietnam has undergone important economic, social and organisational change. Due to the fast transformation to a market economy, public organisations and private sectors face complex and ever-changing competitive environments. Therefore, leaders and members of the organisation must adapt to the rapidly changing work environment to pursue sustainable development (Spreitzer & Porath, 2012). Hence, maintaining high-quality human resources is imperative during this period. However, maintaining human resources requires the organisation to meet the needs of employees, which is recognized through employee job satisfaction (Youssef & Luthans, 2007). The studies of Nguyen (2015) and Vu and Nguyen (2018) focus on the component factors affecting job satisfaction and, through it, commitment. These studies focus on the impact of factors such as salary, benefits, leadership, and colleagues on job satisfaction and organisational commitment. There are very few studies that discuss the effect of psychological capital on organisational commitment and job satisfaction. Psychological capital is a relatively new field in the study of job satisfaction and organisational commitment, addressed for example in the studies of Carver, Lehman, and Antoni (2003), Meng, Qi, and Li (2011), and Mirkamali and Narenji (2011). Karatepe and Karadas (2015) argue that psychological capital and job engagement will promote employee satisfaction in the organisation. Thai Binh Province has a coastline of 54 kilometers, with much potential and many strengths for developing the fishery economy. According to the Fisheries Sub-Department of Thai Binh Province (2020), the total fishery output reached more than 260 thousand tons, an increase of 6.53% compared to 2019; revenue reached 5,300 billion VND, contributing 6% of GDP. Thus, fisheries are a key economic sector of Thai Binh Province. The Fisheries Sub-Department of Thai Binh Province plays an important role in making plans and orientations for people in developing the province's fishery economy. With the outbreak of the Covid-19 pandemic, the economic situation is facing difficulties and challenges.
The Fisheries Sub-Department of Thai Binh Province is coping with the departure of highly qualified employees, which affects the level of services and the province's socio-economic development goals. If an individual has good psychological capital, that individual will have significant advantages in life and work. Any individual who has good characteristics of psychological capital can activate work motivation and increase work efficiency. In addition, the studies of Nguyen (2015) and Vu and Nguyen (2018) on job satisfaction do not approach it from the perspective of psychological capital, nor with empirical surveys in the state sector. Thus, the study aims to discover the relationship between psychological capital, organisational commitment, and job satisfaction of civil servants working at the Fisheries Sub-Department of Thai Binh Province. Based on the analysis results, the study suggests some policy implications to improve the organisational commitment and job satisfaction of civil servants working at the Fisheries Sub-Department of Thai Binh Province in the future.

Psychological capital

Psychological capital is an important phenomenon in the new concepts of positive psychology and positive organisational behaviour (Ngwenya & Pelser, 2020). Avey, Luthans, and Jensen (2009) define psychological capital as a positive emotional state that exists in individuals and supports personal development. Psychological capital draws on positive human nature and supports individuals in achieving high job performance, and psychological capital is different from human capital (what you know), social capital (who you know), and financial capital (what you have). In terms of positive development, psychological capital answers the question of who you are and what you want (Luthans & Youssef, 2004). Psychological capital is a state of positive emotional development of an individual, and it is a concept comprising four components: self-efficacy, hope, optimism, and resiliency. Self-efficacy is an awareness or belief regarding one's abilities to perform a task well in a work environment (Wood & Wood, 1996). Self-efficacy relates to five behaviours: (i) setting higher goals; (ii) being ready to overcome challenges to improve ability; (iii) being full of energy and self-motivation; (iv) striving to overcome difficulties to achieve good results; (v) persevering to overcome challenges (Luthans, Avolio, Avey, & Norman, 2007). Hope is an individual's positive motivation based on the relationship between the factors needed to achieve success, including thinking (intention towards a goal) and plans to achieve that goal (Snyder et al., 1991). Hope is the driving force that motivates individuals to try and strive to achieve their desires, combined with methods to achieve goals even when facing difficulties (Luthans & Youssef, 2004), and is different from wishful thinking. Seligman (1998) defines optimism as an individual's positive self-interpretation of situations occurring in everyday life. Optimism is an individual's state of expecting positive results (Scheier, Carver, & Bridges, 2001). Luthans and Youssef (2004) argue that optimism in each individual is a belief and a positive attitude in life, knowing one's abilities and being able to get out of negative situations. Optimism is the most important component of psychological capital, and optimists believe that in the future good things will come to them.
Resiliency is expressed through the persistence of each individual in overcoming difficulties and quickly recovering to the original mental state, or reaching a higher level, in order to achieve the desired results. Luthans and Youssef (2004) illustrated that resilient people can change for the better through adversity.

Organisational commitment

Meyer and Herscovitch (2001) suggest that organisational commitment is a multidimensional structure and focuses on employees' attitudes towards the organisation. Meyer (1990, 1991) define organisational commitment as the emotional or affective attachment to the organisation, reflected in the individual's strong, voluntary commitment, participation, and desire to be a member of the organisation. The theoretical model of the three components of organisational commitment was developed to explain that organisational commitment is created by three distinct components expressing the psychological state of employees towards the organisation. Affective commitment is an emotional connection between an employee and the organisation: the employee agrees with the targets and values of the organisation and desires to stay in the organisation. Normative commitment is the employee's perception of responsibility and obligation to the organisation, leading the employee to feel that they ought to stay in the organisation. Continuance commitment is the comparison between the benefits and the costs that an employee would lose if they decided to leave the organisation. Meyer, Allen, and Smith (1993) concluded that organisational commitment is an expression of a psychological state that links employees to the organisation and affects the decision to stay or the intention to leave the organisation. Besides, employees with high organisational commitment are more motivated to work and contribute more to the organisation than others (Meyer & Allen, 1997). The concept and the three components of organisational commitment by Meyer and Allen are widely cited and used in studies on organisational commitment (Benkhoff, 1997; D. K. Tran, 2006). Some studies focus on overall organisational commitment without delving into the components (Dang, 2018; Vu & Nguyen, 2018).

Job satisfaction

Job satisfaction is an attitude variable, and it has attracted a lot of attention from researchers on organisational behaviour (Judge & Kammeyer-Mueller, 2012). Locke (1976) defines job satisfaction as an employee's positive emotions arising from evaluating work or personal experience. Job satisfaction is the employee's general attitude towards the work process (Robbins, 2003). Luddy (2005) argues that job satisfaction is an employee's emotional or affective state towards various aspects of the work. Spector (1997) emphasized that job satisfaction is an employee's psychological state towards the job and its different aspects, expressed through feelings of liking or disliking the current work. If employees feel that the job is suitable for their ability, they will have greater job satisfaction. Employee job satisfaction is an important factor for organisational success (Spector, 1985). Job satisfaction is therefore approached from two different directions, consisting of overall job satisfaction and satisfaction with different aspects of the job (Friday & Friday, 2003; D. K. Tran, 2005).
Some studies suggest that the different aspects of job satisfaction will give managers more insight and make it easier to see which aspects bring job satisfaction and which aspects bring job dissatisfaction of employees (Deconinck & Stilwell, 2004;Smith, Kendall, & Hulin, 1969). However, some studies emphasize the importance of overall job satisfaction (Cronin, Brady, & Hult, 2000), and overall job satisfaction is a better measure of job satisfaction than measuring different aspects of job satisfaction (Yu & Dean, 2001). Aminikhah, Khaneghah, and Naghdian (2016) show that psychological capital has a direct impact on organisational commitment. Etebarian, Tavakoli, and Abzari (2012) examine the components of psychological capital and organisational commitment.The results show that psychological capital correlates with organisational commitment, with hope having a strong positive impact on organisational commitment. Resiliency has a negative impact on organisational commitment. Optimism and self-efficacy have a weak link with organisational commitment. A. T. H. Tran (2018) demonstrated four components of psychological capital that have a direct impact on organisational commitment, and she emphasized that hope has the most positive effect on organisational commitment. Hence, studies show that hope is an important factor in increasing engagement between organisations and employees. Self-efficacy will help employees set their own goals and appreciate the organisational targets, so employees will strive for work and desire to be committed to the organisation. Optimism will give employees positive feelings towards their current life and work. So, they will feel more committed to the organisation. Resilience brings employees the ability to face difficulties, and employees always have the organisational commitment to overcome any challenge at work. Therefore, the first hypothesis group proposed in this study is: Peterson and Seligman (2003) show a direct relationship between psychological capital and job satisfaction. If employees have high hopes, they will have higher job satisfaction. Through empirical testing in different fields Abbas, Raja, Darr, and Bouckenooghe (2014), Luthans et al. (2007) conclude that four components of psychological capital have a positive impact on job satisfaction. In Vietnam, Ngo (2020) demonstrates that four components of psychological capital are positively correlated with job satisfaction. If employees have a high level of psychological capital will have positive beliefs about their future work and confidence in their abilities at work to face challenges at work (Newman, Ucbasaran, Zhu, & Hirst, 2014). So, employees will strive to complete the work and increase job satisfaction. Hence, through elements of psychological capital starting from employees' positive emotions about the job and the organisation that they are working, employees will self-perceive the level of job satisfaction. Therefore, the second hypothesis group proposed in this study is: The relationship between organisational commitment and job satisfaction There are many studies exploring the relationship between job satisfaction and organisational commitment. The current results are still inconsistent between the studies, and there are two opposing views. The first opinion shows that the studies emphasize job satisfaction as a prerequisite for organisational commitment (Fu, Deshpande, & Zhao, 2011;Mowday, Porter, & Steers, 1982). 
The second opinion holds that organisational commitment has a positive impact on job satisfaction. If employees are put in the correct position and fit the organisation, they will feel organisational commitment before job satisfaction emerges (Vilela, González, & Varela, 2008). Vandenberg and Lance (1992) and Shahid and Azhar (2013) show that the more employees want to work for the organisation, the easier it is to create job satisfaction. In addition, Daneshfard and Ekvaniyan (2012) studied the relationship between job satisfaction and organisational commitment of employees working in the public sector. Their results show that organisational commitment has a positive correlation with job satisfaction, in which affective commitment and normative commitment have the strongest impact on job satisfaction. Therefore, the third hypothesis proposed in this study is: H3: Organisational commitment has a positive effect on job satisfaction. From the hypotheses, the authors propose the following study framework.
Figure 1. Proposed study framework, linking Psychological Capital (Self-efficacy, Hope, Optimism, Resilience) to Organisational Commitment and Job Satisfaction.
Scale design The preliminary scale is built based on the factors in the study framework and is inherited from domestic and foreign studies. The Psychological Capital Questionnaire (PCQ) includes twenty-four observed variables from Luthans et al. (2007). In addition, the scale of organisational commitment of Dang (2018) includes four observed variables, and the scale of job satisfaction of Spector (1997) includes three observed variables. To make the scales relevant to the field of study, the authors held a discussion with ten senior civil servants working at the Sub-Department of Fisheries in Thai Binh Province to carefully review the content related to the factors and to add or remove inappropriate observed variables. Besides, in-depth interviews were conducted with five experts on human resources management to understand the relationships between the factors, adjust the study framework, and resolve problems arising during the discussion, including agreeing on opinions, carefully reviewing the variables in the scale, and clarifying the meaning of the survey. As a result, everyone agreed that the factors in the proposed study framework are appropriate, and the psychological capital scale was kept unchanged. For the organisational commitment factor, 8 of the 10 discussion participants and 4 of the 5 experts considered it necessary to adjust 3 of the 4 observed variables, because their meaning was not clear when researching the public sector and was easily confused with the job satisfaction factor. For the job satisfaction factor, 8 of the 10 discussion participants considered it necessary to add an observed variable to better capture the content. In addition, the authors adjusted the wording to be consistent with the public sector and the education level of the survey participants. The resulting items include, among others:
R6. I always complete all the assigned tasks.
Organisational commitment (Dang (2018); research by the authors themselves):
OC1. I am always enthusiastic and idealistic about working at the government agency.
OC2. I accept all assignments to continue working at the agency.
OC3. I am proud to be a member of the agency.
OC4. I am willing to work long-term at the agency.
Job satisfaction (Spector (1997); research by the authors themselves):
JS1. I love my current job.
JS2. I have found a suitable job.
JS3. I feel satisfied when working at the agency.
JS4. I feel my work is very interesting.
Source: Compiled by the authors. Hair, Sarstedt, Hopkins, and Kuppelwieser (2014) state that the minimum sample size for exploratory factor analysis is 50, preferably 100 or more. A ratio of observations to analysed variables of 5:1 or 10:1 provides the minimum sample size needed to ensure reliability. In this study, the authors use the 5:1 rule. The study has 32 observed variables, so the minimum required sample size is 32 * 5 = 160. Sample size The study uses a convenience sampling method for civil servants working at the Sub-Department of Fisheries in Thai Binh Province. To allow for invalid answer sheets, the authors set a sample size of 275 respondents. After cleaning the data, the study retained 237 valid answer sheets, a return rate of 86.2%. The survey period was from March 1st to March 31st, 2021. Survey forms were sent directly by email to civil servants working at the Sub-Department of Fisheries in Thai Binh Province. Data analysis The obtained data were screened and analysed with SPSS version 26 and AMOS version 20. The analytical methods include descriptive statistics, a reliability test of the scale using Cronbach's Alpha, Exploratory Factor Analysis (EFA), and Confirmatory Factor Analysis (CFA). Structural Equation Modeling (SEM) is used to examine the relationship between psychological capital, organisational commitment, and job satisfaction. The reliability of the scale is evaluated with two tools: Cronbach's Alpha and exploratory factor analysis. The Cronbach's Alpha coefficient is used to eliminate "junk" items. Items with a corrected item-total correlation of less than 0.3 are not retained, and items are selected if the scale's Cronbach's Alpha coefficient exceeds 0.6 (Tabachnick & Fidell, 2013). In the exploratory factor analysis, observed variables with factor loadings below 0.5, or loading on two factors with a difference of less than 0.3, are not retained. Factors are retained if their Eigenvalue (representing the variation explained by a factor) is greater than 1 and the total variance extracted exceeds 50%. Besides, the KMO test and the Bartlett test are used to evaluate the suitability of the data (Hair, Anderson, Tatham, & Black, 1998). For the CFA and SEM analyses, the research model fits the research data well if P-value < 0.05; CMIN/df ≤ 2; TLI and CFI ≥ 0.9; and RMSEA ≤ 0.08 (Hair, Black, Babin, & Anderson, 2010). Sample characteristics The results show that among the civil servants working at the Sub-Department of Fisheries in Thai Binh Province who participated in the survey, the majority are male, accounting for 57.9%. The age group from 30 to 50 years old accounts for 81.9%. The education level is mainly university, accounting for 87.9%, and work seniority from five years to more than ten years accounts for 94.1%. The characteristics of the survey sample are consistent with the public sector in Vietnam, where employees are predominantly men. According to the General Statistics Office (2020), the total number of employees working in the public sector is about four million people, of whom 70% are men. They are between 30 and 50 years old, with high levels of education and many years of work experience. Table 2 summarizes the sample characteristics. Reliability test of scale The results show that the lowest Cronbach's Alpha coefficient is 0.767 and the highest is 0.887. Compared with the 0.6 standard, all observed items of the scale are satisfactory.
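The item-screening rules above can be made concrete with a short computation. The following sketch, in Python, is only illustrative: it assumes the responses to one scale are available as a pandas DataFrame, and the names items, cronbach_alpha, and screen_scale are hypothetical placeholders rather than part of the study's toolchain, which relies on SPSS and AMOS.

import pandas as pd

# Hypothetical illustration of the screening rules: Cronbach's Alpha >= 0.6 for a
# scale and corrected item-total correlation >= 0.3 for each item. "items" is
# assumed to hold the responses to one scale, one column per observed variable.

def cronbach_alpha(items: pd.DataFrame) -> float:
    # alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

def corrected_item_total(items: pd.DataFrame) -> pd.Series:
    # correlation of each item with the sum of the remaining items of the scale
    return pd.Series({c: items[c].corr(items.drop(columns=c).sum(axis=1)) for c in items})

def screen_scale(items: pd.DataFrame, r_min: float = 0.3, alpha_min: float = 0.6):
    # drop weak items first, then judge the remaining scale by its Alpha
    r = corrected_item_total(items)
    kept = items[r[r >= r_min].index]
    alpha = cronbach_alpha(kept)
    return list(kept.columns), alpha, alpha >= alpha_min

Applying such a screen to each of the six scales yields per-scale Alpha values of the kind reported in the preceding paragraph.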
All corrected item-total correlations are greater than 0.3, and the Cronbach's Alpha-if-item-deleted value of each of the 32 observed items is smaller than the corresponding scale's Cronbach's Alpha coefficient, so no items are excluded. All scales achieve both reliability and discriminant validity. Hence, the scale is good and meets the reliability requirement for exploratory factor analysis. Exploratory Factor Analysis (EFA) The study uses the Principal Axis Factoring extraction method along with Promax rotation. The analysis covers the overall scale, including all observed items of psychological capital, job satisfaction, and organisational commitment. The results show KMO = 0.886; the Bartlett test is statistically significant with Sig. = 0.000 (< 0.05); and six factors were extracted with Eigenvalue = 1.375 and Sums of Squared Loadings = 79.253% (greater than 50%). The six factors explain 79.253% of the variability of the data. Table 3 summarizes the results of the reliability test of the scale and the exploratory factor analysis. Confirmatory Factor Analysis (CFA) The CFA results for the overall measurement model show that the loadings of the observed variables all meet the standard (≥ 0.5). Hence, the scales reach convergent validity. The model has 568 degrees of freedom, the test value CMIN (Chi-square) = 521.135 with P-value = 0.000; CMIN/df (Chi-square/df) = 2.543 < 3; the Goodness of Fit index = 0.901, the Tucker-Lewis index = 0.917, and the Comparative Fit index = 0.929 are greater than 0.9; and the root mean square error of approximation = 0.030 is less than 0.08. So, the research model is consistent with the research data. Figure 2 summarizes the results of the confirmatory factor analysis. Structural Equation Modeling (SEM) Based on the outcomes of the confirmatory factor analysis of the overall measurement model, the results of the structural equation modeling are consistent with the research data. This is shown by CMIN/df (Chi-square/df) = 2.874 < 3; the Goodness of Fit index = 0.905, the Tucker-Lewis index = 0.922, and the Comparative Fit index = 0.933 are greater than 0.9; and the root mean square error of approximation = 0.035 is less than 0.08. At the same time, the P-values of the impact relationships between the factors are less than 0.05. Hence, the relationships between psychological capital, organisational commitment, and job satisfaction are statistically significant in the Structural Equation Modeling (SEM). Table 4 summarizes the model results. The coefficients of direct, indirect, and total effects are used to evaluate the impact of the factors on job satisfaction. The results show that the biggest impact comes from organisational commitment (λ = 0.501), followed by hope (λ = 0.483), optimism (λ = 0.477), and self-efficacy (λ = 0.385), while the smallest impact comes from resiliency (λ = 0.362). Table 5 summarizes the results of the direct and indirect impacts of the factors on job satisfaction. The results show that psychological capital has a positive effect on the organisational commitment and job satisfaction of civil servants working at the Sub-Department of Fisheries in Thai Binh Province. The four factors of psychological capital, namely self-efficacy, resilience, hope, and optimism, have a positive effect on organisational commitment and job satisfaction. The results are similar to the studies of Aminikhah et al. (2016), Etebarian et al. (2012), A. T. H.
Tran (2018), Peterson and Seligman (2003), Abbas et al. (2014), and Luthans et al. (2007). Optimism has the strongest impact on organisational commitment and job satisfaction. Optimism and flexibility help civil servants promote efficiency in their working process. Optimism helps civil servants continue working hard when they face difficulties, calmly accept reality, and develop the next work plan without sadness or despair. Barriers always appear in the work of officials and civil servants, forcing them to face and overcome them. Along with optimism, hope helps civil servants to be active in their work and to find the right way to do their jobs successfully. In addition, self-efficacy has a significant impact on the organisational commitment and job satisfaction of civil servants working at the Sub-Department of Fisheries in Thai Binh Province. Self-efficacy reflects confidence in one's own abilities and brings resilience at work. The results also show that organisational commitment has a positive effect on the job satisfaction of civil servants working at the Sub-Department of Fisheries in Thai Binh Province. This result is similar to the studies of Vilela et al. (2008), Vandenberg and Lance (1992), and Shahid and Azhar (2013). Thus, five factors have a direct or indirect impact on job satisfaction: self-efficacy, hope, optimism, resiliency, and organisational commitment. This differs from the studies of Nguyen (2015) and Vu and Nguyen (2018), because those previous studies did not measure the organisational commitment and job satisfaction of employees in the public sector from the perspective of psychological capital. Conclusion and policy implication In this study, the relationship between psychological capital, organisational commitment, and job satisfaction was analysed using a data set obtained by directly surveying civil servants working at the Sub-Department of Fisheries in Thai Binh Province. Confirmatory factor analysis and structural equation modeling were performed to determine the relationships between the constructs in the research model. The results show that psychological capital has a direct impact on organisational commitment and an indirect impact on job satisfaction. Besides, organisational commitment has a positive effect on the job satisfaction of civil servants working at the Sub-Department of Fisheries in Thai Binh Province. Based on the obtained results, the study provides some policy implications to help the Sub-Department of Fisheries in Thai Binh Province improve the organisational commitment and job satisfaction of its civil servants. First, leaders of the Sub-Department should try to accurately assess the psychological capital of civil servants through the regular use of multiple-choice tests on employees' psychological capital, in order to increase organisational commitment and job satisfaction. Second, the Sub-Department should periodically evaluate employees and organise short-term psychological training courses for them. In addition, it is necessary to develop psychological counseling sessions with psychologists to help civil servants deal with psychological problems, so as to reduce the negative impact on organisational commitment and job satisfaction. Leaders of the Sub-Department need to pay attention to training civil servants to think positively, reduce stress, and prevent low morale. Third, the Sub-Department should create a friendly working environment with many cohesive activities and development programs.
For example, the Sub-Department could organise parties and assign new roles to civil servants, and it should conduct programs to promote the ideas of civil servants, thereby strengthening the organisational commitment of civil servants and increasing their job satisfaction. Limitations and future research This study still has some limitations: (i) the sample size is small, as the study was conducted only at the Sub-Department of Fisheries in Thai Binh Province; and (ii) the hypotheses were tested by collecting data from civil servants working at the Sub-Department of Fisheries in Thai Binh Province with a convenience sampling method. Therefore, future research could (i) increase the sample size or extend the scope of the study, and (ii) examine the impact of psychological capital on organisational commitment and job satisfaction using probability sampling methods to increase the generalizability of the results.
Evaluation of antifungal susceptibility testing in Candida isolates by Candifast and disk-diffusion method With the increase in invasive fungal infections due to Candida species and resistance to antifungal therapy, in vitro antifungal susceptibility testing is becoming an important part of clinical microbiology laboratories. Along with broth microdilution and disk diffusion method, various commercial methods are being increasingly used for antifungal susceptibility testing, especially in the developed world. In our study, we compared the antifungal susceptibility patterns of 39 isolates of Candida to three antifungal drugs (fl uconazole, amphotericin B, ketoconazole) by Candifast and disk diffusion method. The following resistance pattern was found by Candifast: Fluconazole (30.8%), ketoconazole (12.8%), amphotericin B (0%). The results obtained by disk diffusion method were in complete agreement with Candifast results. INTRODUCTION Candida species are one of the leading causes of invasive fungal infections world-wide. [1,2]espite the introduction of newer antifungal drugs for the treatment of infections by Candida species, the occurrence of invasive fungal infections and resistance to antifungal therapy is on the rise. [3]Hence, in vitro antifungal drug susceptibility testing for Candida species has become important in the detection of resistance as well as in effective patient management. [3,4]Standardized methods for in vitro antifungal susceptibility testing of yeasts by broth microdilution and disk diffusion methods are now available from the Clinical and Laboratory Standards Institute (CLSI). [5,6]Disk diffusion method is a commonly used method for antifungal susceptibility testing in busy clinical microbiology laboratories.Commercial systems like Candifast, Fungitest, Sensititre-Yeast One, etc., are now being used by laboratories for antifungal susceptibility testing, especially in the developed world. The present study was undertaken to compare Candifast with disk diffusion method for the detection of resistance pattern in clinical isolates of Candida.The Candifast kit can be used for the identification as well as susceptibility testing of Candida isolates. MATERIALS AND METHODS Antifungal susceptibility testing was performed for 39 clinical isolates of Candida (29 Candida tropicalis, 4 Candida albicans, 3 Candida parapsilosis, 2 Candida krusei, and 1 Candida glabrata).All the isolates were collected from blood samples of patients at a University teaching hospital in Chennai.The Candida isolates were identified to species level using various phenotypic tests such as fermentation, assimilation, tetrazolium reduction medium, Candifast, and CHROMagar Candida (France) medium.The methods used for susceptibility testing were Candifast and disk diffusion. Candifast The Candifast kit (International Microbio, France) allows the identification of Candida species and testing of their susceptibility to various antifungal agents.It is a 20well tray with two rows, one for identification and the other for susceptibility testing.The determination of the resistance of yeasts to antifungal agents is based on growth or absence of growth of the yeasts in the presence of various antifungal agents.An isolated colony of Candida was inoculated into reagent-1 (R1) bottle and mixed well.The turbidity of the suspension was compared with the turbidity control.100 μl of inoculated R1 was added to R2. 
100 μl of R2 was then added to each of the wells, 2 drops of paraffin oil was added to each well and the test tray was sealed and incubated at 37°C for 24 hours.Reading was taken once the yeast grew in the control well.The indicator used is phenol red.A yellow or orange-yellow color in the susceptibility test row, due to glucose fermentation, indicated that the yeast was able to grow in the presence of the antifungal agent and hence was resistant to that drug.If the color in the well was red or pink, the isolate was inhibited by the drug in that well [Figure 1] and so was sensitive to that drug.The susceptibility testing by Candifast kit can be done for the following anti-fungal drugs: Amphotericin B (4 μg/ml), fluconazole (16 μg/ml), ketoconazole (16 μg/ml), nystatin (200 units/ml), flucytosine (35 μg/ml), econazole (16 μg/ml), and miconazole (16 μg/ml).Quality control testing was also performed using C. albicans ATCC (American Type Culture Collection) 90028 to check the efficacy of the kit. Disk-diffusion method Antifungal susceptibility testing by disk diffusion method was performed according to CLSI guidelines (CLSI document M44-A2) and manufacturer's instructions. [6]The media and antifungal disks used in the testing were from HiMedia Laboratories, Mumbai.The standard medium used for disk diffusion test was Mueller-Hinton agar supplemented with 2% dextrose and 0.5 μg/ml methylene blue [Figure 2].Incorporation of methylene blue in the medium has been found to improve the yeast growth and provide sharp zones of inhibition for the azole group of drugs. [6]The colonies were suspended in 5 ml of sterile 0.85% saline, and the turbidity was adjusted to yield 1 × 10 5 -1 × 10 6 cells/ml (0.5 McFarland standard).The antifungal disks tested were amphotericin B (100 units), fluconazole (25 mcg) and ketoconazole (10 mcg).Quality control tests using reference strain C. albicans (ATCC 90028) were set up on each day that susceptibility tests were performed to check for the precision and accuracy of the results of disk diffusion testing. RESULTS The antifungal susceptibility testing of the 39 Candida isolates by Candifast method showed the following resistance pattern: 12 Candida isolates (30.8%) were resistant to fluconazole, while 5 (12.8%) and 0 (0%) isolates were resistant to ketoconazole and amphotericin B, respectively [Table 1].The resistance pattern found with the disk diffusion method was in complete concordance with the Candifast results.Out of the 12 (30.8%)Candida isolates resistant to fluconazole, there were 7 isolates of C. tropicalis, 2 each of C. parapsilosis, C. krusei, and 1 C. glabrata.Ketoconazole resistance was found in 2 isolates each of C. tropicalis and C. parapsilosis and 1 isolate of C. glabrata.All the isolates of C. albicans were susceptible to the three antifungal drugs.Both the methods showed 100% agreement in antifungal susceptibility patterns for C. albicans and non-C.albicans isolates. DISCUSSION Antifungal susceptibility testing in vitro is playing an important role in antifungal drug selection, as an aid in drug development and as a means of tracking the development of resistance.Disk diffusion is the most commonly used method for susceptibility testing of Candida isolates and is preferred in clinical microbiology laboratories.The M44-A2 document of the CLSI gives guidelines for susceptibility testing of yeasts using this method. 
[6] Many clinical laboratories, especially those in the developed world, prefer to use commercial systems for determining susceptibility to various antifungal agents. Many of these methods are more rapid and easier to perform compared to the microdilution method of susceptibility testing, which is the gold standard. Among the commonly used commercial methods for antifungal susceptibility testing are Candifast, Fungitest, the Sensititre Yeast One method, etc. [7] These methods have been evaluated for their agreement with the microdilution method and intralaboratory reproducibility in various studies. [8] In our study, we evaluated the antifungal susceptibility pattern of 39 Candida isolates from blood samples of patients using the Candifast and disk diffusion methods. There was complete agreement in the susceptibility pattern of Candida isolates by both methods. All Candida isolates were found to be susceptible to amphotericin B. Fluconazole and ketoconazole resistance was seen in 30.8% and 12.8% of Candida isolates, respectively. There was 100% agreement between the disk diffusion method and Candifast when the susceptibility patterns of C. albicans and non-C. albicans isolates were compared. Candifast has been used for antifungal susceptibility testing of Candida isolates in some studies on candidemia. [9] There are very few studies which have compared the susceptibility pattern of isolates of various Candida species by different methods. In a recent study from Egypt, it was found that for amphotericin B, the agreement of Candifast with the standard broth microdilution method was 100% in all species except C. glabrata (85.7%). [4] The same study also showed a 100% agreement for fluconazole between the two methods in all species except C. albicans and C. glabrata, for which it was 92.8% and 85.7%, respectively. [4] In another study by Schmalreck et al., it was found that the correlation between the Candifast and microdilution methods for testing the susceptibility of clinical yeast isolates to fluconazole was 83%. [10] However, another multicenter study by Morace et al., comparing six commercial systems (Candifast, disk diffusion, Etest, Fungitest, Integral System Yeasts, and Sensititre-Yeast One) with the broth microdilution method for fluconazole susceptibility testing of Candida species, did not find a good agreement between Candifast and the broth microdilution method. [8] Hence, further studies involving the evaluation of various commercial methods for antifungal susceptibility testing are necessary. An added advantage of using Candifast for susceptibility testing is that it can also simultaneously be used for speciation of Candida isolates. [11] CONCLUSION Our study found Candifast to be an easy and rapid method for the identification and susceptibility testing of Candida isolates that can be used in busy clinical microbiology laboratories.
Figure 1: Antifungal susceptibility testing by Candifast. Figure 2: Antifungal susceptibility testing by disk diffusion method.
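For illustration, the headline figures reported above can be recomputed directly from the per-isolate calls; the snippet below is a hypothetical Python sketch in which the lists candifast and disk_diffusion merely stand in for the data of Table 1 and are not taken from the study's records.

# Hypothetical per-isolate calls ("R" = resistant, "S" = susceptible) standing in
# for the fluconazole results of Table 1; the two methods agreed on every isolate.
candifast      = ["R"] * 12 + ["S"] * 27
disk_diffusion = ["R"] * 12 + ["S"] * 27

def resistance_rate(calls):
    return 100.0 * calls.count("R") / len(calls)

def categorical_agreement(a, b):
    return 100.0 * sum(x == y for x, y in zip(a, b)) / len(a)

print(f"fluconazole resistance: {resistance_rate(candifast):.1f}%")                                   # 30.8%
print(f"agreement between the two methods: {categorical_agreement(candifast, disk_diffusion):.1f}%")  # 100.0%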
The mitochondrial ATP synthase as an ATP consumer—a surprising therapeutic target The mitochondrial F1Fo‐ATP synthase uses a rotary mechanism to synthesise ATP. This mechanism can, however, also operate in reverse, pumping protons at the expense of ATP, with significant potential implications for mitochondrial and age‐related diseases. In a recent study, Acin‐Perez et al (2023) use an elegant assay to screen compounds for the capacity to selectively inhibit ATP hydrolysis without affecting ATP synthesis. They show that (+)‐epicatechin is one such compound and has significant benefits for cell and tissue function in disease models. These findings signpost a novel therapeutic approach for mitochondrial disease. The EMBO Journal (2023) 42: e114141 See also: R Acin-Perez et al (May 2023) The mitochondrial F1Fo-ATP synthase (also called complex V) is a remarkable enzyme and is fundamental for eukaryotic life. The enzyme sits in the mitochondrial inner membrane and acts as a turbine, harnessing the energy generated by respiration in the form of ATP. The accumulation of ATP in the mitochondrial matrix is avoided through the equilibration of ATP with the cytosol through the Adenine Nucleotide Translocator (ANT), which brings ADP into the matrix and ensures ATP supply in the cytosol, where it is used to power everything from ion homeostasis to protein synthesis and from secretion to motility (Duchen, 2004). This molecular motor will run in either direction, depending on the balance between the free energies of the (ATP/ADP+Pi) ratio and the proton motive force generated by the respiratory chain. The prevailing view would state that normally respiring mitochondria generate a mitochondrial membrane potential of around 180 mV. These conditions should favour the activity of complex V as a synthase, while the possibility of reversal as key variables change has been described by a mathematical model (Metelkin et al, 2009). The capacity of the molecular motor to run in "reverse" as an ATP-consuming proton pump is most readily seen after inhibition of the respiratory chain or exposure to uncouplers. In cells with a strong capacity to upregulate glycolysis, such as astrocytes (Almeida et al, 2004), exposure to respiratory chain inhibitors such as cyanide or rotenone induces only small and slow changes in membrane potential, as the mitochondrial membrane potential is maintained by the reversal of the F1Fo-ATPase: ATP generated by glycolysis is consumed, and protons are pumped across the mitochondrial inner membrane at a rate that is sufficient to maintain the potential. Repeating the experiment following inhibition of the ATPase with oligomycin induces a rapid and complete mitochondrial depolarisation (Jacobson & Duchen, 2002), illustrating the power of the ATPase as a proton pump.
The impact of the ATPase as an ATP consumer is most dramatically illustrated by rat cardiomyocytes, a cell type with a very high density of mitochondria. Following mitochondrial depolarisation, at reperfusion injury or in response to uncoupler, ATP is consumed and completely depleted within minutes and the cell will go into a rigour contracture (Bowers et al, 1992). However, in the presence of oligomycin, the cell will just sit with depolarised mitochondria for an hour without any significant loss of ATP (Allue et al, 1996; Leyssens et al, 1996). Cells have evolved a mechanism to defend themselves against ATP depletion mediated by the mitochondrial ATPase. This mechanism involves the protein ATPIF1 or the "endogenous inhibitor protein." Discovered in the 1960s (Pullman & Monroy, 1963), ATPIF1 binds to the ATPase when the matrix pH falls, as happens if respiration is compromised. Acting like a stick in the spokes of a bicycle wheel, it prevents the rotation of the molecular motor and limits the rate of ATP hydrolysis (Bason et al, 2011). ATPIF1 appears not to be expressed strongly in small rodents - hence our specific earlier reference to rat cardiomyocytes - but it is strongly expressed in the heart of larger mammals (Rouslin & Broge, 1996). We see the power of ATPIF1 in cardiomyocytes from the human atrial appendage, which show no rapid progression to rigour contracture in response to uncoupler (Shanmuganathan et al, 2005). Recent work of the Shirihai group demands some refinement of these views and points to a more subtle model. They first showed that super-resolution imaging of the distribution of the potential-dependent probe tetramethylrhodamine methyl ester (TMRM) revealed that membrane potentials may vary between different cristae even within a single mitochondrion (Wolf et al, 2019). This
In their paper published in this issue of The EMBO Journal, Acin-Perez et al (2023) show that exposure of cells to EPI increased net ATP production in a number of cell models. Furthermore, they show that treatment of mdx mice, which express a form of muscular dystrophy, with EPI resulted in improved exercise performance and increased life span. These data are interesting on many levels. In diseases that cause impaired respiration, it has been unclear which is more important in terms of pathophysiology: to preserve ATP levels (especially in cells with a limited capacity to upregulate glycolysis) or to maintain mitochondrial membrane potential (at the expense of cellular ATP). Which strategy might have more severe long-term consequences? The appearance of ATPIF1 argues for the former, and this is supported by the work reported by Acin-Perez et al (2023), as the long-term treatment of cells, or even animals, with a compound that limits mitochondrial ATP consumption seems to mitigate dysfunction associated with disease. These findings also require us to reconsider the processes that dictate net ATP production by cells, and to take into account the possibility of ATP consumption by the ATPase itself, even in healthy cells. We know almost nothing about how ATPIF1 expression is regulated, but these authors also show that levels change in some of the disease models they have explored, suggesting that this is another variable that can be modulated and that should be considered in attempts to understand mitochondrial disease and cellular energy homeostasis. There are many emerging aspects of mitochondrial research that show us how much we still have to learn in terms of cell signalling and metabolite processing and in understanding mitochondrial disease, and these data highlight that, even in terms of fundamental cellular bioenergetic homeostasis, there remain intricacies and subtleties that we have yet to explore.
Software Doping Analysis for Human Oversight This article introduces a framework that is meant to assist in mitigating societal risks that software can pose. Concretely, this encompasses facets of software doping as well as unfairness and discrimination in high-risk decision-making systems. The term software doping refers to software that contains surreptitiously added functionality that is against the interest of the user. A prominent example of software doping are the tampered emission cleaning systems that were found in millions of cars around the world when the diesel emissions scandal surfaced. The first part of this article combines the formal foundations of software doping analysis with established probabilistic falsification techniques to arrive at a black-box analysis technique for identifying undesired effects of software. We apply this technique to emission cleaning systems in diesel cars but also to high-risk systems that evaluate humans in a possibly unfair or discriminating way. We demonstrate how our approach can assist humans-in-the-loop to make better informed and more responsible decisions. This is to promote effective human oversight, which will be a central requirement enforced by the European Union's upcoming AI Act. We complement our technical contribution with a juridically, philosophically, and psychologically informed perspective on the potential problems caused by such systems. Introduction Software is the main driver of innovation of our times. Software-defined systems are permeating our communication, perception, and storage technology as well as our personal interactions with technical systems at an unprecedented pace. "Software-defined everything" is among the hottest buzzwords in IT today [76,119]. At the same time, we are doomed to trust these systems, despite being unable to inspect or look inside the software we are facing: The owners of the physical hull of 'everything' are typically not the ones owning the software defining 'everything', nor will they have the right to look at what and how 'everything' is defined. This is because commercial software typically is protected by intellectual property rights of the software manufacturer. This prohibits any attempt to disassemble the software or to reconstruct its inner working, albeit it is the very software that is forecasted to be defining 'everything'. The use of machine-learnt software components amplifies the problem considerably by adding opacity of its own kind. Since commercial interests of the software manufacturers seldomly are aligned with the interest of end users, the promise of 'software-defined everything' might well become a dystopia from the perspective of individual digital sovereignty. In this article, we address two of the most pressing incarnations of problematic software behaviour. Diesel emissions scandal A massive example of software-defined collective damage is the diesel emissions scandal. Over a period of more than 10 years, millions of diesel-powered cars have been equipped with illegal software that altogether polluted the environment for the sake of commercial advantages of the car manufacturers. At its core, this was made possible by the fact that only a single, precisely defined test setup was put in place for checking conformance with exhaust emissions regulations. This made it a trivial software engineering task to identify the test particularities and to turn off emission cleaning outside these particular conditions. This is an archetypal instance of software doping. 
Software doping can be formally characterised as a violation of a cleanness property of a program [10,31]. A detailled and comparative account of meaningful cleanness definitions related to software doping is avaialable [15,Chapter 3]. One cleanness notion that has proven suitable to detect diesel emissions doping is robust cleanness [15,18]. It is based on the assumption that there is some well-defined and agreed standard input/output behaviour of the system which the definition extends to the vicinity around the inputs and outputs close to the standard behaviour. The precise specification of "vicinity" and of "standard behaviour" is assumed to be part of a contract between software manufacturer and user. That contract entails the standard behaviour, distance functions for input and output values, and distance thresholds to define the input and output vicinity, respectively. With this, a system behaviour is considered clean, if its output (is or) stays in the output vicinity of the standard, unless the input (is or) moves outside the standard's input vicinity. Example 1 Every car model that is to enter the market in the European Union (and other countries) must be compliant with local regulations. As part of this homologation process, common to all of these regulations is the need for executing a test under precisely defined lab conditions, carried out on a chassis dynamometer. In this, the car has to follow a speed profile, which is called test cycle in regulations. At the time when the diesel scandal surfaced, the New European Driving Cycle (NEDC) [126] was the single test cycle used in the European Union. It has by now been replaced by the Worldwide harmonized Light vehicles Test Cycle (WLTC) [122] in many countries. We refer to previous work for more details [15,18,21]. From a perspective of fraud prevention, having only a single test cycle is a major weakness of the homologation procedure. Robust cleanness can overcome this problem. It admits the consideration of driving profiles that stay in the bounded vicinity of one of several standardised test cycle (i.e., NEDC as well as WLTC), while enforcing bounds on the deviations regarding exhaust emission. Discrimination mitigation Another set of exemplary scenarios we consider in this article are high-risk AI systems, systems empowered by AI technology whose functioning may introduce risks to health, safety, or fundamental rights of human individuals. The European Union is currently developing the AI Act [39,40] that sets out to mitigate many of the risks that such systems pose. Application areas of concern include credit approval ( [93]), decisions on visa applications ( [82]), admissions to higher education ( [26,131]), screening of individuals in predictive policing ([57]), selection in HR ( [90][91][92]), juridicial decisions (as with COMPAS [3,29,33,70]), tenant screening ( [113]), and more. In many of these areas, there are legitimate interests and valid reasons for using well-understood AI technology, although the risks associated with their use to date is manifold. It is widely recognised that discrimination by unfair classification and regression models is one particularly important risk. As a result, a colourful zoo of different operationalisations of unfairness has emerged [94,129], which should be seen less as a set of competing approaches and more as mutually complementary [51]. 
At the same time, a consensus is emerging that human oversight is an important piece of the puzzle for mitigating and minimising societal risks of AI [58,81,127]. Accordingly, that principle made it into recent drafts of legislation including the European AI Act [39,40] or certain US state laws [130]. The generic approach we develop for software-doping analysis turns out to be powerful enough to provide automated assistance for human overseers of high-risk AI systems. Apart from spelling out the necessary refocusing we illustrate the challenge that our work helps to overcome by an exemplary, albeit hypothetical admission system for higher education (inspired by [26,131]). Example 2 A large university assigns scores to applicants aiming to enter their computer science PhD program. The scores are computed using an automated, model-based procedure P which is based on three data points: the position of the applicant's last graduate institution in an official, subject-specific ranking, the applicant's most recent grade point average (GPA), and their score in a subject-specific standardised test taken as part of the application procedure. The system then automatically computes a score for the candidate based on an estimation of how successful it expects them to be as students. A dedicated university employee, Unica, is in charge of overseeing the individual outcomes of P and is supposed to detect cases where the output of P is or appears flawed. The university pays special attention to fairness in the scoring procedure, so Unica has to watch out for any signs of potential unfairness. Unica is supposed to desk-reject candidates whose scores are below a certain, predefined threshold, unless she finds problems with P's scoring. Without any additional support, Unica, as human overseer in the loop, must manually check all cases for signs of unfairness as they are processed. This can be a tedious, complicated, and error-prone task and as such constitutes an impediment to the assumed scalability of the automated scoring process to high numbers of applicants. Therefore, she at least requires tool support that assists her in detecting when something is off about the scoring of individual applicants. This support can be made real by exploiting the technical contributions of this article, in terms of a runtime monitor that provides automated assistance to the human overseer and is itself based on the probabilistic falsification technique we develop. As we will explain, func-cleanness, a variant of cleanness, is a suitable basis for rolling out runtime monitors for such high-risk systems that are able to detect and flag discrimination or unfair treatment of humans. The contributions made by this article are threefold. Detecting software doping using probabilistic falsification. The paper starts off by developing the theory of robust cleanness and func-cleanness. We provide characterisations in the temporal logics HyperSTL and STL, that are then used for an adaptation of existing probabilistic falsification techniques [1,48]. Altogether, this reduces the problem of software doping detection to the problem of falsifying the logical characterisation of the respective cleanness definition. Falsification-based test input generation. Recent work [18] proposes a formal framework for robust cleanness testing, with the ambition of making it usable in practice, namely for emissions tests conducted with a real diesel car on a chassis dynamometer.
However, that approach leaves open how to perform test input selection in a meaningful manner. The probabilistic falsification technique presented in this article attacks this shortcoming. It supports the testing procedure by guiding it towards test inputs that make the robust cleanness tests likely to fail. Promoting effective human oversight. We discuss and demonstrate how the technical contributions of this paper contribute to effective human oversight of high-risk systems, as required by the current proposal of the AI act. The hypothetical university admission scenario introduced above will serve as a demonstrator for shedding light on the applicability of our approach as well as the the principles behind it. On a technical level, we provide a runtime monitor for individual fairness based on probabilistic falsification of func-cleanness. On a conceptual level, we consider it important to clarify which duties come with the usage of such a system; from a legal perspective, particularly considering the AI Act, substantiated by considering the ethical dimension from a philosophical perspective, and from a psychological perspective, particularly deliberating on how the overseeing can become effective. This paper is based on a conference publication [16]. Relative to that paper, the development of the theory here is more complete and now includes temporal logic characterisations for func-cleanness. On the conceptual side, this article adds a principled analysis of the applicability of func-cleanness to effective human oversight, spelled out in the setting of admission to higher education. We live up to the societal complexity of this new example and provide an interdisciplinary situation analysis and an interdisciplinary assessment of our proposed solution. Accordingly, although the technical realisation is based on the probabilistic falsification approach outlined in this article, our solution is substantially more thoughtful than a naive instantiation of the falsification framework. This article is structured as follows. Section 2 provides the preliminaries for the contributions in this article. Section 3 develops the theoretical foundations necessary to use the concept of probabilistic falsification with robust cleanness and func-cleanness. Section 4 demonstrates how the probabilistic falsification approach can be combined with the previously proposed testing approach [18] for robust cleanness, with a focus on tampered emission cleaning systems of diesel cars. Section 5 develops the technical realisation of a fairness monitor based on func-cleanness for high-risk systems. Section 6 evaluates the fairness monitor from the perspective of the disciplines philosophy, psychology, and law. Finally, Section 7 summarises the contributions of this article and discusses limitations of our approaches. The appendix of this article contains additional technical details, proofs, and further philosophical and juridical explanations. Software Doping After early informal characterisations of software doping [10,12], D'Argenio et al. [31] propose a collection of formal definitions that specify when a software is clean. The authors call a software doped (w.r.t. a cleanness definition) whenever it does not satisfy such cleanness definition. We focus on robust cleanness and func-cleanness in this article [31]. We define by R ≥0 := {x ∈ R | x ≥ 0} the set of non-negative real numbers, by R := R ∪{−∞, ∞} the set of extended reals [102], and by R ≥0 := R ≥0 ∪{∞} the set of the non-negative extended real numbers. 
We say that a function d : X × X → R ≥0 is a distance function if and only if it satisfies d(x, x) = 0 and d(x, y) = d(y, x) for all x, y ∈ X. We let σ[k] denote the kth literal of the finite or infinite word σ. Reactive Execution Model We can view a nondeterministic reactive program as a function S : In ω → 2 (Out ω ) perpetually mapping inputs In to sets of outputs Out [31]. To formally model contracts that specify the concrete configuration of robust cleanness or func-cleanness, we denote by StdIn ⊆ In ω the input space of the system designated to define the standard behaviour, and by d In : (In × In) → R ≥0 and d Out : (Out × Out) → R ≥0 distance functions on inputs, respectively outputs. For robust cleanness, we additionally consider two constants κ i , κ o ∈ R ≥0 . κ i defines the maximum distance that a non-standard input must have to a standard input to be considered in the cleanness evaluation. For this evaluation, κ o defines the maximum distance between two outputs such that they are still considered sufficiently close. Intuitively, the contract defines tubes around standard inputs and there outputs. For example, in Figure 1, i is a standard input and d In and κ i implicitly define a 2κ i wide tube around i. Every input i ′ that is within this tube will be evaluated on its outputs. Similarly, d Out and κ o define a tube around each of the outputs of i. An output for i ′ that is within this tube satisfies the robust cleanness condition. Together, the above objects constitute a formal contract C = ⟨StdIn, d In , d Out , κ i , κ o ⟩. Robust cleanness is composed of two separate definitions called l-robust cleanness and u-robust cleanness. Assuming a fixed standard behaviour of a system, l-robust cleanness imposes a lower bound on the non-standard outputs that a system must exhibit, while u-robust cleanness imposes an upper bound. Such lower and upper bound considerations are necessary because of the potential nondeterministic behaviour of the system; for deterministic systems the two notions coincide. We remark that in this article we are using past-forgetful distance functions and the trace integral variants of robust cleanness and func-cleanness (see Biewer [15,Chapter 3] We will in the following refer to Definition 1.1 for l-robust cleanness and Definition 1.2 for u-robust cleanness. Intuitively, l-robust cleanness enforces that whenever an input i ′ remains within κ i vicinity around the standard input i, then for every standard output o ∈ S(i), there must be a non-standard output o ′ ∈ S(i ′ ) that is in κ o proximity of o. Referring to Figure 1, every i ′ in the tube around i must produce for every standard output o ∈ S(i) at least one output o ′ ∈ S(i ′ ) that resides in the κ o -tube around o. In other words, for non-standard inputs the system must not lose behaviour that it can exhibit for a standard input in κ i proximity. For u-robust cleanness the standard and non-standard output switch roles. It enforces that whenever an input i ′ remains within κ i vicinity around the standard input i, then for every output o ′ ∈ S(i ′ ) the system can exhibit for this non-standard input, there must be a standard output o ∈ S(i) that is in κ o proximity of o ′ . Referring to Figure 1, every i ′ in the tube around i must only produce outputs o ′ ∈ S(i ′ ) that are in the κ o -tube of at least one o ∈ S(i). 
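To make the tube intuition tangible, the following Python sketch checks recorded observations of a deterministic system against such a contract; in the deterministic case the l- and u-variants coincide, as noted above. It deliberately ignores the temporal, trace-based dimension of the definitions, and all names in it (std_pairs, d_in, d_out, and the toy example at the end) are hypothetical.

from typing import Callable, Iterable, Tuple

# Hedged sketch of a robust-cleanness check for a deterministic system. std_pairs
# holds pairs (i, o) of a standard input and the output observed for it; the new
# observation (i_new, o_new) violates the contract if some standard input lies
# within the kappa_i input tube while the observed output leaves the kappa_o
# output tube of the corresponding standard output.
def doping_witnesses(std_pairs: Iterable[Tuple[object, object]],
                     i_new, o_new,
                     d_in: Callable[[object, object], float],
                     d_out: Callable[[object, object], float],
                     kappa_i: float, kappa_o: float):
    return [(i, o) for (i, o) in std_pairs
            if d_in(i, i_new) <= kappa_i and d_out(o, o_new) > kappa_o]

# Toy example with scalar inputs/outputs and absolute-difference distances.
witnesses = doping_witnesses(std_pairs=[(0.0, 1.0), (1.0, 2.0)],
                             i_new=0.5, o_new=5.0,
                             d_in=lambda a, b: abs(a - b),
                             d_out=lambda a, b: abs(a - b),
                             kappa_i=1.0, kappa_o=1.0)
print("clean" if not witnesses else f"violation witnessed by {witnesses}")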
In other words, for non-standard inputs within κ i proximity of a standard input the system must not introduce new behaviour, i.e., it must not exhibit an output that is further than κ o away from the set of standard outputs. A generalisation of robust cleanness is func-cleanness. A cleanness contract for func-cleanness replaces the constants κ i and κ o by a function f : R ≥0 → R ≥0 inducing a dynamic threshold for output distances based on the distance between the inputs producing such outputs. Definition 2 A nondeterministic reactive system S is func-clean w.r.t. contract C = ⟨StdIn, d In , d Out , f ⟩ if for every standard input i ∈ StdIn and input sequence i ′ ∈ In ω it is the case that 1. for every o ∈ S(i), there exists o ′ ∈ S(i ′ ), such that for every index k ∈ N, We will in the following refer to Definition 2.1 for l-func-cleanness and Definition 2.2 for u-func-cleanness. For the fairness monitor in Section 5 we will use a simpler variant of func-cleanness for deterministic sequential programs. Since such a program P is deterministic, the lower and upper bound requirements coincide, yielding the following simplified definition. Mixed-IO System Model The reactive execution model above has the strict requirement that for every input, the system produces exactly one output. Recent work [17,18] instead considers mixed-IO models, where a program L ⊆ (In ∪ Out) ω is a subset of traces containing both inputs and outputs, but without any restriction on the order or frequency in which inputs and outputs appear in the trace. In particular, they are not required to strictly alternate (but they may, and in this way the reactive execution model can be considered a special case [15]). A particularity of this model is the distinct output symbol δ for quiescence, i.e., the absence of an output. For example, finite behaviour can be expressed by adding infinitely many δ symbols to a finite trace. The new system model induces consequences regarding cleanness contracts. Every mixed-IO trace is projected into an input, respectively output, domain. The set of input symbols contains one additional element, an input mask (written –i in the following), which indicates that in the respective step an output was produced, while masking the concrete output. Similarly, the set of output symbols contains an additional output mask –o that masks a concrete input symbol. Projection on inputs ↓ i : (In ∪ Out) ω → (In ∪ {–i}) ω and projection on outputs ↓ o : (In ∪ Out) ω → (Out ∪ {–o}) ω are defined for all traces σ ∈ (In ∪ Out) ω and k ∈ N by σ↓ i [k] = σ[k] if σ[k] ∈ In and σ↓ i [k] = –i otherwise, and, symmetrically, σ↓ o [k] = σ[k] if σ[k] ∈ Out and σ↓ o [k] = –o otherwise. The distance functions d In and d Out apply to input and output symbols or their respective masks, i.e., they are functions (In ∪ {–i}) × (In ∪ {–i}) → R ≥0 and (Out ∪ {–o}) × (Out ∪ {–o}) → R ≥0 . Finally, instead of a set of standard inputs StdIn, we evaluate mixed-IO system cleanness w.r.t. a set of standard behaviour Std ⊆ L. Thus, not only inputs but also outputs can be defined as standard behaviour, and for an input, one of its outputs can be considered in Std while a different output can be excluded from Std. As a consequence, the set Std is specific to some mixed-IO system L, because Std is useful only if Std ⊆ L. To emphasise this difference we will call the tuple C = ⟨Std, d In , d Out , κ i , κ o ⟩ a (cleanness) context (instead of a cleanness contract). Robust cleanness of mixed-IO systems w.r.t. such a context is defined below [18]. Definition 4 A mixed-IO system L ⊆ (In ∪ Out) ω is robustly clean w.r.t. context C = ⟨Std, d In , d Out , κ i , κ o ⟩ if and only if Std ⊆ L and for all σ ∈ Std and σ ′ ∈ L, We will in the following refer to Definition 4.1 for l-robust cleanness and Definition 4.2 for u-robust cleanness. Definition 4 universally quantifies a standard trace σ.
For l-robust cleanness, the universal quantification of σ ′ effectively only quantifies an input sequence; the input projection for the existentially quantified σ ′′ must match the projection for σ ′ . The remaining parts of the definition are conceptually identical to their reactive systems counterpart in Definition 1.1. For u-robust cleanness, the existentially quantified trace σ ′′ is obtained from set Std in contrast to l-robust cleanness, where σ ′′ can be any arbitrary trace of L. This is necessary, because u-robust cleanness is defined w.r.t. a cleanness context; from knowing that σ ∈ Std is a standard trace and by enforcing that σ↓ i = σ ′′ ↓ i we cannot conclude that also σ ′′ ∈ Std. Definition 5 shows the definition func-cleanness of mixed-IO systems. Definition 5 A mixed-IO system L ⊆ (In ∪ Out) ω is func-clean w.r.t. context C = ⟨Std, d In , d Out , f ⟩ if and only if Std ⊆ L and for all σ ∈ Std and σ ′ ∈ L, We will in the following refer to Definition 5.1 for l-func-cleanness and Definition 5.2 for u-func-cleanness. HyperLTL Linear Temporal Logic (LTL) [95] is a popular formalism to reason about properties of traces. A trace is an infinite word where each literal is a subset of AP, the set of atomic propositions. We interpret programs as circuits encoded as sets C ⊆ (2 AP ) ω of such traces. LTL provides expressive means to characterise sets of traces, often called trace properties. For some set of traces T , a trace property defines a subset of T (for which the property holds), whereas a hyperproperty defines a set of subsets of T (constituting combinations of traces for which the property holds). In this way it specifies which traces are valid in combination with one another. Many temporal logics have been extended to corresponding hyperlogics supporting the specification of hyperproperties. HyperLTL [30] is such a temporal logic for the specification of hyperproperties of reactive systems. It extends LTL with trace quantifiers and trace variables that make it possible to refer to multiple traces within a logical formula. A HyperLTL formula is defined by the following grammar, where π is drawn from a set V of trace variables and a from the set AP: The quantifiers ∃ and ∀ quantify existentially and universally, respectively, over the set of traces. For example, the formula ∀π. ∃π ′ . ϕ means that for every trace π there exists another trace π ′ such that ϕ holds over the pair of traces. To account for distinct valuations of atomic propositions across distinct traces, the atomic propositions are indexed with trace variables: for some atomic proposition a ∈ AP and some trace variable π ∈ V, a π states that a holds in the initial position of trace π. The temporal operators and Boolean connectives are interpreted as usual for LTL. Further operators are derivable: ϕ ≡ true U ϕ enforces ϕ to eventually hold in the future, ϕ ≡ ¬ ¬ϕ enforces ϕ to always hold, and the weak-until operator ϕ W ϕ ′ ≡ ϕ U ϕ ′ ∨ ϕ allows ϕ to always hold as an alternative to the obligation for ϕ ′ to eventually hold. HyperLTL Characterisations of Cleanness D'Argenio et al. [31] assume distinct sets of atomic propositions to encode inputs and outputs. That is, they assume that AP = AP i ∪ AP o of atomic propositions, where AP i and AP o are the atomic propositions that define the the input values and, respectively, the output values. Thus, in the context of Boolean circuit encodings of programs, we take In = 2 AP i and Out = 2 APo . 
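As a small illustration of this encoding, a trace literal over 2^AP splits into an input valuation over AP_i and an output valuation over AP_o. The concrete proposition names in the following Python sketch are assumptions chosen only for illustration.

```python
# Minimal sketch: splitting one trace literal (a set of atomic propositions)
# into its input part over AP_i and its output part over AP_o.
# The proposition names are illustrative assumptions.

AP_I = {"i0", "i1"}          # input propositions:  In  = 2^{AP_i}
AP_O = {"o0", "o1"}          # output propositions: Out = 2^{AP_o}

def split_literal(literal):
    """Project one trace literal onto its input and output valuations."""
    return literal & AP_I, literal & AP_O

# One step of a circuit trace in which i1 and o0 hold:
inp, out = split_literal({"i1", "o0"})
assert inp == {"i1"} and out == {"o0"}
```

Applying this split at every step of a trace t yields the input sequence t↓_{AP_i} and the corresponding output sequence that the formulas below refer to.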
We capture the following natural correspondence between reactive programs and Boolean circuits; a circuit C can be interpreted as a functionŜ : In the HyperLTL formulas below occur, for convenience, non-atomic propositions. Their semantics is encoded by atomic propositions and Boolean connectives according to a Boolean encoding of inputs and outputs. We refer to the original work for the details [31, Table 1]. Further, we assume that there is a quantifier-free HyperLTL formula StdIn π that can check whether the trace represented by trace variable π is in the set of standard inputs StdIn ⊆ In ω . That is, StdIn π should be defined such that for every trace t ∈ C it holds that {π := t} |= C StdIn π if and only if (t↓ AP i ) ∈ StdIn. Proposition 1 shows HyperLTL formulas for l-robust cleanness and u-robust cleanness, respectively. 1 Proposition 1 Let C be a set of infinite traces over 2 AP , letŜ be the reactive system constructed from C according to Equation (1), and let C = ⟨StdIn, d In , d Out , κ i , κo⟩ be a contract for robust cleanness. ThenŜ is l-robustly clean w.r.t. C if and only if C satisfies the HyperLTL formula , andŜ is u-robustly clean w.r.t. C if and only if C satisfies the HyperLTL formula The first quantifier (for π 1 ) in both formulas implicitly quantifies the standard input i and the second quantifier (for π 2 ) implicitly quantifies the second input i ′ . Due to the potential nondeterminism in the behaviour of the system, the third, existential, quantifier for π ′ 1 , respectively π ′ 2 is necessary. While the formula for l-robust cleanness has the universal quantification on the outputs of the program that takes the standard input i and the existential quantification on the output for i ′ , the formula for u-robust cleanness works in the other way around. Thus, the formulas capture the ∀∃ alternation in Definition 1. The weak until operator W has exactly the behaviour necessary to represent the interaction between the distances of inputs and the distances of outputs. The HyperLTL formulas for func-cleanness are given below. Proposition 2 Let C be a set of infinite traces over 2 AP , letŜ be the reactive system constructed from C according to Equation (1), and let C = ⟨StdIn, d In , d Out , f ⟩ be a contract for func-cleanness. ThenŜ is l-func-clean w.r.t. C if and only if C satisfies the HyperLTL formula , andŜ is u-func-clean w.r.t. C if and only if C satisfies the HyperLTL formula Signal Temporal Logic LTL enables reasoning over traces σ ∈ (2 AP ) ω for which it is necessary to encode values using the atomic propositions in AP. Each literal in a trace represents a discrete time step of an underlying model. Thus, σ can equivalently be viewed as a function N → 2 AP . One extension of LTL is Signal Temporal Logic (STL) [32,74], which instead is used for reasoning over real-valued signals that may change in value along an underlying continuous time domain. In this article, we generalise the original work and use generalised timed traces (GTTs) [52], which, for some value domain X and time domain T define traces as functions T → X. The time domain T can be either N (discrete-time), or R ≥0 (continuous-time). For the value domain we will use vectors of real values X = R n for some n > 0 or, to express mixed-IO traces, the set X = In ∪ Out. STL formulas can express properties of systems modelled as sets M ⊆ (T → X) of traces by making the atomic properties refer to booleanisations of the signal values. 
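Before fixing the syntax, the following minimal Python sketch indicates how a generalised timed trace can be represented directly as a function from the time domain to the value domain, together with one booleanising function of the kind the atomic properties refer to. All names and numbers are illustrative assumptions.

```python
# Minimal sketch: generalised timed traces as functions T -> X.

import math

# Discrete-time GTT with value domain R^2 (e.g. speed and NOx per step).
def w_discrete(k: int) -> tuple:
    return (50.0 + k, 0.1 * k)

# Continuous-time GTT with value domain R (e.g. a sine-shaped signal).
def w_continuous(t: float) -> float:
    return math.sin(t)

# A booleanising function f, as used in an atomic property of the form "f > 0":
# here it expresses "the second signal component stays below 0.5".
def f(x: tuple) -> float:
    return 0.5 - x[1]

assert f(w_discrete(3)) > 0          # holds at step 3
assert not (f(w_discrete(7)) > 0)    # violated at step 7
```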
The syntax of the variant of STL that we use in this article is as follows, where f ∈ X → R: STL replaces atomic propositions by threshold predicates of the form f > 0, which hold if and only if function f applied to the trace value at the current time returns a positive value. The Boolean operators and the Until operator U are very similar to those of HyperLTL. The Next operator X is not part of STL, because "next" is without precise meaning in continuous time. The definitions of the derived operators , and W are the same as for HyperLTL. Formally, the Boolean semantics of an STL formula ϕ at time t ∈ T for a trace w ∈ T → X is defined inductively: A system M satisfies a formula ϕ, denoted M |= ϕ, if and only if for every w ∈ M it holds that w, 0 |= ϕ. 7: end if 8: end while does not hold. The quantitative semantics for an STL formula ϕ, trace w, and time t the quantitative semantics is defined inductively: Robustness and Falsification The value of the quantitative semantics can serve as a robustness estimate and as such be used to search for a violation of the property at hand, i.e., to falsify it. The robustness of STL formula ϕ is its quantitative value at time 0, that is, R ϕ (w) := ρ(ϕ, w, 0). So, falsifying a formula ϕ for a system M boils down to a search problem with the goal condition R ϕ (w) < 0. Successful falsification algorithms solve this problem by understanding it as the optimisation problem minimise w∈M R ϕ (w). Algorithm 1 [1,86] sketches an algorithm for Monte-Carlo Markov Chain falsification, which is based on acceptance-rejection sampling [28]. An input to the algorithm is an initial trace w and a computable robustness function R. Robustness computation for STL formulas has been addressed in the literature [32,48]; we omit this discussion here. The third input PS is a proposal scheme that proposes a new trace to the algorithm based on the previous one (line 2). The parameter β (used in line 3) can be adjusted during the search and is a means to avoid being trapped in local minima, preventing to find a global minimum. Notably, there exists prior work by Nguyen et al. [87] that discusses an extension of STL to HyperSTL though using a non-standard semantic underpinning. In this context, they present a falsification approach restricted to the fragment "t-HyperSTL" where, according to the authors, "a nesting structure of temporal logic formulas involving different traces is not allowed". Therefore, none of our cleanness definitions belongs to this fragment. Logical Characterisation of Mixed-IO Cleanness In this section we provide a temporal logic characterisation for robust cleanness and func-cleanness for mixed-IO systems. For this, we propose a HyperSTL semantics (different to that of [87]) and propose HyperSTL formulas for robust cleanness and func-cleanness. We explain how these formulas can be applied to mixed-IO traces and prove that the characterisation is correct. Furthermore, for the special case that Std is a finite set, we reformulate the HyperSTL formulas characterising the u-cleannesses as equivalent STL formulas. Hyperlogics over Continuous Domains Previous work [87] extends STL to HyperSTL echoing the extension of LTL to HyperLTL. We use a similar HyperSTL syntax in this article: The meaning of the universal and existential quantifier is as for HyperLTL. In contrast to HyperLTL (and to the existing definition of HyperSTL), we consider it insufficient to allow propositions to refer to only a single trace. 
In HyperLTL atomic propositions of individual traces can be compared by means of the Boolean connectives. To formulate thresholds for real values, however, we feel the need to allow real values from multiple traces to be combined in the function f , and thus to appear as arguments of f . Hence, in our semantics of HyperSTL, f > 0 holds if and only if the result of f , applied to all traces quantified over, is greater than 0. For this to work formally, the arity of function f is the number m of traces quantified over at the occurrence of f > 0 in the formula, so f : X m → R. A trace assignment [30] Π : V → M is a partial function assigning traces of M to variables. Let Π[π := w] denote the same function as Π, except that π is mapped to trace w. The Boolean semantics of HyperSTL is defined below. Definition 6 Let ψ be a HyperSTL formula, t ∈ T a time point, M ⊆ (T → X) a set of GTTs, and Π a trace assignment. Then, the Boolean semantics for M, Π, t |= ψ is defined inductively: A system M satisfies a formula ψ if and only if M, ∅, 0 |= ψ. The quantitative semantics for HyperSTL is defined below: Definition 7 Let ψ be a HyperSTL formula, t ∈ T a time point, M ⊆ (T → X) a set of GTTs, and Π a trace assignment. Then, the quantitative semantics for ρ(ψ, M, Π, t) is defined inductively: HyperSTL Characterisation The HyperLTL characterisations in Section 2.2.1 assume the system to be a subset of (2 AP ) ω and works with distances between traces by means of a Boolean encoding into atomic propositions. By using HyperSTL, we can characterise cleanness for systems that are representable as subsets of (T → X). We can take the HyperLTL formulas from Propositions 1 and 2 and transform them into HyperSTL formulas by applying simple syntactic changes. We get for l-robust cleanness the formula for l-func-cleanness we get the formula and, finally, u-func-cleanness is encoded by The quantifiers remain unchanged relative to the formulas in Propositions 1 and 2. The formulas use generic projection functions ↓ i : X → In and ↓ o : X → Out to extract the input values, respectively output values from a trace. To apply the formulas, these functions must be instantiated with functions for the concrete instantiation of the value domain X of the traces to be analysed. For example, for In = R m , Out = R l , and M ⊆ (T → R m+l ), the projections could be defined for every w = (s 1 , . . . , s m , s m+1 , . . . , s m+l ) as w↓ i = (s 1 , . . . , s m ) and w↓ o = (s m+1 , . . . , s m+l ). The input equality requirement for two traces π and π ′ is ensured by globally enforcing eq(π↓ i , π ′ ↓ i ) ≤ 0. eq is a generic function that returns zero if its arguments are identical and a positive value otherwise. It must be instantiated for concrete value domains. For example, eq((s 1 , . . . , s m ), (s ′ 1 , . . . , s ′ m )) could be defined as the sum of the componentwise distances 1≤i≤m |s i − s ′ i |. Finally, in the above formulas we perform simple arithmetic operations to match the syntactic requirements of HyperSTL. Formulas (3) and (5) are prepared to express u-robust cleanness, respectively u-func-cleanness w.r.t. both cleanness contracts or cleanness contexts. That is, we assume the existence of a function Std π that returns a positive value if and only if the trace assigned to π encodes a standard input (when considering cleanness contracts) or encodes an input and output that constitute a standard behaviour (when considering cleanness contexts). 
Explicitly requiring that π ′ 1 represents a standard behaviour echoes the setup in Definitions 4.2 and 5.2. We remark that for encoding Std π , due to the absence of the Next-operator in HyperSTL, it might be necessary to add a clock signal s(t) = t to traces in a preprocessing step. The HyperSTL formulas ψ l-rob and ψ u-rob reason about sets of traces. For example, the set M = {w 0 , w 1 , w A , w B } satisfies both formulas. If both π 1 and π 2 represent standard traces, then π 1 ↓ i = π 2 ↓ i , because w 0 ↓ i = w 1 ↓ i , and the formulas hold for π ′ 2 = π 1 , respectively π ′ 1 = π 2 . Otherwise, assume that π 1 represents w 0 and π 2 represents w B (the reasoning for other combinations of traces is similar). First considering ψ l-rob , we pick w A for π ′ 2 . We get that 2 ↓ i | = 0 and, thus, eq(π 2 ↓ i , π ′ 2 ↓ i ) = 0. At time steps 0 ≤ t ≤ 3, the distance between the outputs |w 0 ↓o(t) − w A ↓o(t)| is at most κo. Hence, the left operand of W holds and the formula is satisfied for t ≤ 3. Hence, the right operand of the W operator holds and ψ l-rob is satisfied also for t ≥ 3. Notice that if we would remove w A from M, then it would violate ψ l-rob , because there is no possible choice for π ′ 2 that has the same inputs as w B and where the output distances to w 0 are below the κo threshold. To satisfy ψ u-rob , we pick w 1 for π ′ 1 . The reasoning why the formula holds for this choice is analogue to ψ l-rob . Notice that if we add the trace w to M, then ψ u-rob is violated. Concretely, π 2 could represent w ; then, whether we pick w 0 or w 1 for π ′ 1 , we eventually get outputs that violate the κo constraint, while the κ i constraint is always satisfied. For example, if we compare w and w 1 , then we have for all time steps t ≤ 3 that Hence, at t = 3 the left and right operand of W are false, so ψ u-rob is violated. Correctness under Mixed-IO Interpretation Mixed-IO signals are defined in the discrete time domain N and value domain In ∪ Out. The abstract functions ↓ i and ↓ o can be defined equally to the syntactically identical projection functions for mixed-IO models defined in Section 2.1. The function eq(i 1 , i 2 ) can be defined using the distance function d In and some arbitrary small ε > 0: In the second clause of the above definition we add some positive value ε to the result of d In , because d In (i 1 , i 2 ) could be 0 even if i 1 ̸ = i 2 . For the correctness of the above HyperSTL formulas, however, it is crucial that eq(i 1 , i 2 ) = 0 if and only if i 1 = i 2 . For a good performance of the falsification algorithm, we will nevertheless want to make use of d In if i 1 ̸ = i 2 . Proposition 3 shows that HyperSTL formulas (2) and (3) under the mixed-IO interpretation outlined above indeed characterise l-robust cleanness and u-robust cleanness. Proposition 4 shows the same for func-cleanness. STL Characterisation for Finite Standard Behaviour In many practical settings -when the different standard behaviours are spelled out upfront explicitly, as in NEDC and WLTC -it can be assumed that the number of distinct standard behaviours Std is finite (while there are infinitely many possible behaviours in M). Finiteness of Std makes it possible to remove by enumeration the quantifiers from the u-robust cleanness and u-func-cleanness HyperSTL formulas. This opens the way to work with the STL fragment of HyperSTL, after proper adjustments. In the following, we assume that the set Std = {w 1 , . . . 
, w c } is an arbitrary standard set with c unique standard traces, where every w k : T → X uses the same time domain T and value domain X. To encode the HyperSTL formulas (3) and (5) in STL, we use the concept of self-composition, which has proven useful for the analysis of hyperproperties [9,50]. We concatenate a trace under analysis w : T → X and the standard traces . . , w c ) | w ∈ M} the system in which every trace in M is composed with the standard traces in Std. For every w + ∈ M • Std, we will in the following STL formula write w to mean the projection on w + to the trace w, and we write w k , for 1 ≤ k ≤ c, to mean the projection on w + to the kth standard trace. The theorem for u-func-cleanness is analogue to Theorem 5. Example 4 We consider the robust cleanness context C = ⟨Std, d In , d Out , κ i , κo⟩ where Std = {w 1 , w 2 } contains the two standard traces w 1 = 1 i 2 i 3 i 7o 0 i δ ω and w 2 = 0 i 1 i 2 i 3 i 6o δ ω . We here decorate inputs with index i and outputs with index o, i.e., w 1 describes a system receiving the three inputs 1, 2, and 3, then producing the output 7, and finally receiving input 0 before entering quiescence. We take otherwise. The contractual value thresholds are assumed to be κ i = 1 and κo = 6. Assume we are observing the trace w = 0 i 1 i 2 i 6o 0 i δ ω to be monitored with STL formula φ u-rob (from Lemma 10). First notice, that for combinations of a and b in For the first conjunct, the input distance between inputs in w and w 1 is always 1 at positions 1 to 3, it is 0 at position 4 (becausei is compared toi ), and remains 0 in position 5 and beyond. Thus, d In (w 1 ↓ i , w↓ i ) − κ i is always at most 0, and the right hand-side of the W operator is always false. Consequently, by definition of W, the left operand of W must always hold, i.e., d Out (w 1 ↓o, w↓o) must always be less or equal to 6. This is the case for w 1 and w: at all positions except for 4, -o is compared to -o (or δ to δ), so the difference is 0, and at position 4, the distance of 6 and 7 is 1. For the second W-formula, w is compared to w 2 . These two traces are comparable only to a limited extent: the order of input and output is altered at the last two positions of the signals before quiescence. Hence, the right operand of W is true at position 4, and the formula holds for the remaining trace. For positions 1 to 3, the input distances are 0, because the input values are identical. At these positions, the left operand must hold. The values are input values, so -o is compared to -o at each position. This distance is defined to be 0, so it holds that −6 ≤ 0, and the formula is satisfied. Since both formulas hold, the conjunction of both holds, too, and trace w is qualified as robustly clean. There could however be other system traces not considered in this example, that overall could violate robust cleanness of the system. Restriction of input space Robust cleanness puts semantic requirements on fragments of a system's input space, outside of which the system's behaviour remains unspecified. Typically, the fragment of the input space covered is rather small. To falsify the STL formula φ u-rob from Lemma 10, the falsifier has two challenging tasks. First, it has to find a way to stay in the relevant input space, i.e., select inputs with a distance of at most κ i from the standard behaviour. Only if this is assured it can search for an output large enough to violate the κ o requirement. 
In this, a large robustness estimate provided by the quantitative semantics of STL cannot serve as an indicator for deciding whether an input is too far off or whether an output stays too close to the standard behaviour. We can improve the efficiency of the falsification process significantly by narrowing upfront the input space the falsifier uses. In practice, test execution traces will always be finite. In previous reallife doping tests, test execution lengths have been bounded by some constant B ∈ N [18], i.e., systems are represented as sets of finite traces M ⊆ (In ∪ Out) B (which for formality reasons each can be considered suffixed with δ ω ). In this bounded horizon, we can provide a predicate discriminating between relevant and irrelevant input sequences. Formally, the restriction to the relevant input space fragment of a system M ⊆ (In ∪ Out) B is given by the set In Std, There are rare cases in which this optimisation may prevent the falsifier from finding a counterexample. This is only the case if there is an input prefix leading to a violation of the formula for which there is no suffix such that the whole trace satisfies the κ i constraint. Below is a pathological example in which this could make a difference. Example 5 Apart from NOx emissions, NEDC (and WLTC) tests are used to measure fuel consumption. Consider a contract similar to the contracts above, but with fuel rate as the output quantity. Assuming a "normal" fuel rate behaviour during the standard test, there might be a test within a reasonable κ i distance, where the fuel is wasted insanely. Then, the fuel tank might run empty before the intended end of the test, which therefore could not be finished within the κ i distance, because speed would be constantly 0 at the end. The actually driven test is not in set In Std,κ i , but there is a prefix within κ i distance that violates the robust cleanness property. Notably, there may be additional techniques to reduce the size of the input space. For example, if the next input symbol depends on the history of inputs, this constraint could be considered in the proposal scheme. Supervision of Diesel Emission Cleaning Systems The severity of the diesel emissions scandal showed that the regulations alone are insufficient to prevent car manufacturers from implementing tamperedor doped -emission cleaning systems. Recent works [18] shows that robust cleanness is a suitable means to extend the precisely defined behaviour of cars for the NEDC to test cycles within a κ i range around the NEDC. To demonstrate the usefulness of robust cleanness, the essential details of the emission testing scenario were modelled: the set of inputs is the set of speed values, an output value represents the amount of emissions -in particular, the nitric oxide (NO x ) emissions -measured at the exhaust pipe of a car. The distance functions are the absolute differences of speed, respectively NO x , values, and the standard behaviour is the singleton set that contains a trace that consists of the inputs that define the test cycle followed by the average amount of NO x gas measured during the test. Thus, formally, we get In = R, Out = R, Std = {NEDC · o}, 3 and d In and d Out as defined in Example 4 [18]. The STL formulas developed in the previous section, combined with the probabilistic falsification approach, give rise to further improvements to the existing testing-based work [18] on diesel doping detection. 
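For reference, the following Python sketch condenses the Monte-Carlo Markov Chain falsification loop of Algorithm 1. The Metropolis-style acceptance rule, the constant temperature β, the iteration cap, and all function names are simplifying assumptions of this sketch rather than the exact formulation of the algorithm.

```python
import math
import random

def mcmc_falsify(w0, robustness, propose, beta=1.0, max_iters=3000):
    """Search for a trace with negative robustness (a counterexample).

    w0         -- initial trace (e.g. sampled from the restricted input space)
    robustness -- computable robustness estimate R_phi for the STL formula
    propose    -- proposal scheme: maps a trace to a candidate successor trace
    beta       -- temperature parameter steering acceptance of worse candidates
    Returns the best trace seen together with its robustness value.
    """
    w, r = w0, robustness(w0)
    best_w, best_r = w, r
    for _ in range(max_iters):
        if best_r < 0:                       # falsified: property is violated
            break
        w_new = propose(w)                   # candidate from the proposal scheme
        r_new = robustness(w_new)
        # Always accept improvements; accept worse candidates with a probability
        # that decays in the robustness increase (assumed acceptance rule).
        alpha = math.exp(-beta * max(0.0, r_new - r))
        if random.random() <= alpha:
            w, r = w_new, r_new
        if r < best_r:
            best_w, best_r = w, r
    return best_w, best_r
```

If no counterexample is found within the iteration budget, the returned pair still reports the smallest robustness estimate observed, which mirrors the timeout variant used in the experiments below.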
To use the falsification algorithm in Algorithm 1, we implement the restriction of the input space to In {NEDC·o},κ i as explained in Section 3. With this restriction the STL formula φ u-rob from Lemma 10 can be simplified to This is because the conjunction and disjunction over standard traces becomes obsolete for only a single standard trace. For the same reason, the requirement (eq(w a ↓ i , w b ↓ i ) ≤ 0) becomes obsolete, as the compared traces are always identical. In the W subformula, the right proposition is always false, because of the restricted input space. We implemented Algorithm 1 for the robustness computation according to formula (7). In practice, running tests like NEDC with real cars is a time consuming and expensive endeavour. Furthermore, tests on chassis dynamometers are usually prohibited to be carried out with rented cars by the rental companies. On the other hand, car emission models for simulation are not available to the public -and models provided by the manufacturer cannot be considered trustworthy. To carry out our experiments, we instead use an approximation technique that estimates the amount of NO x emissions of a car along a certain trajectory based on data recorded during previous trips with the same car, sampled at a frequency of 1 Hz (one sample per second). Notably, these trips do not need to have much in common with the trajectory to be approximated. A trip is represented as a finite sequence ϑ ∈ (R × R × R) * of triples, where each such triple (v, a, n) represents the speed, the acceleration, and the (absolute) amount of NO x emitted at a particular time instant in the sample. Speed and acceleration can be considered as the main parameters influencing the instant emission of NO x . This is, for instance, reflected in the regulation [66,122] where the decisive quantities to validate test routes for real-world driving emissions tests on public roads are speed and acceleration. A recording D is the union of finitely many trips ϑ. We can turn such a recording into a predictor P of the NO x values given pairs of speed and acceleration as follows: The amount of NO x assigned to a pair (v, a) here is the average of all NO x values seen in the recording D for v ± ℓ and a ± ℓ, with 0 ≤ ℓ ≤ 2. To overcome measurement inaccuracies and to increase the robustness of the approximated emissions, the speed and acceleration may deviate up to 2 km/h, and 2 m/s 2 , respectively. This tolerance is adopted from the official NEDC regulation [126], which allows up to 2 km/h of deviations while driving the NEDC. To demonstrate the practical applicability of our implementation of Algorithm 1 and our NO x approximation, we report here on two experiments with an Audi A6 Avant Diesel admitted in June 2020 and with its successor admitted in 2021. We will refer to the former as car A20 and to the latter as car A21. We used the app LolaDrives to perform in total six low-cost RDE tests -two with A20 and four 4 with A21 -and recorded the data received from the cars' diagnosis ports. The raw data is available on Zenodo [14]. Using the emissions predictor proposed above we estimate that for an NEDC test A20 emits 86 mg/km of NO x and that A21 emits 9 mg/km. Car A20 has previously been falsified w.r.t. the RDE specification. Neither A20 nor A21 has been falsified w.r.t. robust cleanness. Before turning to falsification, we spell out meaningful contexts for robust cleanness. We identified suitable In, Out, Std, d In , and d Out at the beginning of the section. 
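Before fixing the remaining contract constants, we sketch the averaging predictor described above in a few lines of Python. The handling of query points for which no recorded sample lies within the tolerance window (here: returning None) is an assumption of this sketch, as are the concrete sample values.

```python
# Minimal sketch of the NOx predictor built from a recording D of
# (speed, acceleration, NOx) samples: it averages over all samples within
# a tolerance of 2 km/h in speed and 2 m/s^2 in acceleration.

TOL = 2.0

def predict_nox(recording, v, a, tol=TOL):
    """Average NOx over all recorded samples close to speed v and acceleration a.

    recording -- iterable of (speed, acceleration, nox) triples
    Returns None if no sample falls into the tolerance window (this fallback
    is an assumption of the sketch).
    """
    close = [n for (vr, ar, n) in recording
             if abs(vr - v) <= tol and abs(ar - a) <= tol]
    return sum(close) / len(close) if close else None

# Tiny illustrative recording and query (values are made up):
D = [(31.0, 0.4, 2.1), (32.5, 0.1, 1.8), (60.0, 0.0, 3.5)]
assert predict_nox(D, v=31.5, a=0.2) == (2.1 + 1.8) / 2
```

With such a data-driven predictor in place, candidate test cycles can be evaluated without access to the car or to a manufacturer-provided model.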
For κ i , it turned out that κ i = 15 km/h is a reasonable choice, as it leaves enough flexibility for human-caused driving mistakes and intended deviations [18]. The threshold for NO x emissions under lab conditions is 80 mg/km. The emission limits for RDE tests depend on the admission date of the car. Cars admitted in 2020 or earlier, must emit 168 mg/km at most, and cars admitted later must adhere to the limit of 120 mg/km. For our experiments, we use κ o = 88 mg/km for A20 and κ o = 40 mg/km for A21 to have the same tolerances as for RDE tests. Effectively, the upper threshold for A20 is 84 + 88 = 172 mg/km, and for A21 the limit is 9 + 40 = 49 mg/km. Notice that for software doping analysis, the output observed for a certain standard behaviour and the constant κ o define the effective threshold; this threshold is typically different from the thresholds defined by the regulation. We modified Algorithm 1 by adding a timeout condition: if the algorithm is not able to find a falsifying counterexample within 3,000 iterations, it terminates and returns both the trace for which the smallest robustness has been observed and its corresponding robustness value. Hence, if falsification of robust cleanness for a system is not possible, the algorithm outputs an upper bound on how robust the system satisfies robust cleanness. For the concrete case of the diesel emissions, the robustness value during the first 1180 inputs (sampled from the restricted input space In Std,κ i ) is always κ o . When the NEDC output o NEDC and the non-standard output o are compared, the robustness value is κ o − |o NEDC − o| (cf., eq. (7), the quantitative semantics of STL, and definition of d Out ). Hence, for test cycles with small robustness values, we get NO x emissions o that are either very small or very large compared to o NEDC . We ran the modified Algorithm 1 on A20 and A21 for the contexts defined above. For A20, it found a robustness value of −8, i.e., it was able to falsify robust cleanness relative to the assumed contract and found a test cycle for which NO x emissions of 182 mg/km are predicted. The test cycle is shown in Figure 2. For A21, the smallest robustness estimate found -even after 100 independent executions of the algorithm -was 38, i.e., A21 is predicted to satisfy robust cleanness with a very high robustness estimate. The corresponding test cycle is shown in Figure 3. On Doping Tests for Cyber-physical Systems The proposed probabilistic falsification approach to find instances of software doping needs several hundreds of iterations. This is problematic for testing real-world cyber-physical systems (CPS) to which inputs cannot be passed in an automated way. To conduct a test with a car, for example, the input to the system is a test cycle that is passed to the vehicle by driving it. Notably, we consider here the scenario that the CPS is tested by an entity that is different We propose the following integrated testing approach for effective doping tests of cyber-physical systems. The big picture is provided in Figure 4. In a first step, the CPS is used under real-world conditions without enforcing any specific constraints on the inputs to the system. For all executions, the inputs and outputs are recorded. So, essentially, the system can be used as it is needed by the user, but all interactions with it are recorded. From these recordings, a model can be learned that for arbitrary inputs (whether they were covered in the recorded data or not) predicts the output of the system. 
Such learning can be as simple as using statistics as we did for the emissions example above, or as complex as using deep neural nets. For the learned model, the probabilistic falsification algorithm computes a test input that falsifies it -inputs to this model can be passed automatically and an output is produced almost instantly. The resulting input serves as an input for the real CPS. If the prediction was correct, also the real system is falsified. If it was incorrect, the learned model can be refined and the process starts again. For diesel emissions, the first part of this integrated testing approach has been carried out as part of the work reported in this article. We leave the second part -evaluating the generated test traces from Figures 2 and 3 with a real car -for future work. Technical Context Software doping theory provides a formal basis for enlarging the requirements on vehicle exhaust emissions beyond too narrow lab test conditions. That conceptual limitation has by now been addressed by the official authorities responsible for car type approval [122,125]: The old NEDC-based test procedure is replaced by the newer Worldwide Harmonised Light Vehicles Test Procedure (WLTP), which is deemed to be more realistic. WLTP replaces the NEDC test by a new WLTC test, but WLTC still is just a single test scenario. In addition, WLTP embraces so called Real Driving Emissions (RDE) tests to be conducted on public roads. A recently launched mobile phone app [19,21], LolaDrives, harvests runtime monitoring technology for making low-cost RDE tests accessible to everyone. Learning or approximating the behaviour of a system under test has been studied intensively. Meinke and Sindhu [80] were among the first to present a testing approach incrementally learning a Kripke structure representing a reactive system. Volpato and Tretmans [128] propose a learning approach which gradually refines an under-and over-approximation of an input-output transition system representing the system under test. The correctness of this approach needs several assumptions, e.g., an oracle indicating when, for some trace, all outputs, which extend the trace to a valid system trace, have been observed. Individual Fairness of Systems Evaluating Humans Example 2 introduces a new application domain for cleanness definitions. Unica uses an AI system that is supposed to assist her with the selection of applicants for a hypothetical university. Cleanness of such a system can be related to the fair treatment of the humans that are evaluated by it. A usable fairness analysis can happen no later than at runtime, since Unica needs to make a timely decision on whether to include the applicant in further considerations. We describe technical measures that help in mitigating this challenge by providing her with information from an individual fairness analysis in a suitable, purposeful, expedient way. To this end, we propose a formal definition for individual fairness extending the one by [34] and based on func-cleanness. We develop a runtime monitor that analyses every output of P immediately after P's decision, which strategically searches for unfair treatment of a particular individual by comparing them to relevant hypothetical alternative individuals so as to provide a fairness assessment in a timely manner. Much like P is to support Unica, AI systems -in the broadest sense of the word -more and more often support human decision makers. 
Undoubtedly, such systems should be compliant with applicable law (such as the future European AI Act [39,40] or the Washington State facial recognition law [130]) and ought to minimise any risks to health, safety or fundamental rights. Sometimes, we cannot mitigate all these risks in advance by technical measures and also some risk-mitigation requires trade-off decisions involving features that are either impossible or difficult to operationalise and formalise. This is why it is essential that a human effectively oversees the system (which is also emphasised by several institutions such as UNESCO [127] and the European High Level Expert Group [58]). Effective human oversight, however, is only possible with the appropriate technical measures that allow human overseers to better understand the system at runtime [69]. From a technical point of view, this raises the pressing question of what such technical measures can and ought to look like to actually enable humans to live up to these responsibilities. Our contribution is intended to bridge the gap between the normative expectations of law and society and the current reality of technological design. Positioning within Related Research Topics Our contribution draws on and adds to three vibrant topics of current research, namely Explainable AI (XAI), AI fairness, and discrimination. XAI Many of the most successful AI systems today are some kind of black boxes [11]. Accordingly, the field of "Explainable AI" [53] focuses on the question of how to provide users (and possibly other stakeholders) with more information via several key perspicuity properties [115] of these systems and their outputs to make them understand these systems and their outputs in ways necessary to meet various desiderata [5,27,68,72,83,89]. The concrete expectations and promises associated with various XAI methods are manifold. Among them are enabling warranted trust in systems [61,64,100,109], increasing humansystem decision-making performance [67] for instance through increasing human situation awareness when operating systems [107], enabling responsible decisionmaking and effective human oversight [13,78,112], as well as identifying and reducing discrimination [72]. It often remains unclear what kind of explanations are generated by the various explainability methods and how they are meant to contribute to the fulfilment of the desiderata, even though these questions have become the subject of systematic and interdisciplinary research [68,69]. Our approach can be taxonomised along at least two different distinctions [69,84,99,100,114]: First, it is model-agnostic (not model-specific), i.e., it is not tailored to a particular class of models but operates on observable behaviour -the inputs and outputs of the model. Second, our method is a local method (not global ), i.e., it is meant to shed light on certain outputs rather than the system as a whole. With regard to fairness, there are two distinctions that are especially relevant to our work. First, one distinction is made between individual fairness, i.e., that similar individuals are treated similarly [34], and group fairness, i.e., that there is adequate group parity [22]. Measures of individual fairness are often close to the Aristotelian dictum to treat like cases alike [6,7]. In a sense, operationalisations of individual fairness are robustness measures [23,116], but instead of requiring robustness with respect to noise or adversarial attacks, measures of individual fairness, such as the one by Dwork et al. 
[34], call for robustness with respect to highly context-dependent differences between representations of human individuals. Second, recent work from the field of law [129] suggests to differentiate between bias preserving and bias transforming fairness metrics. Bias preserving fairness metrics seek to avoid adding new bias. For such metrics, historic performances are the benchmarks for models, with equivalent error rates for each group being a constraint. In contrast, bias transforming metrics do not accept existing bias as a given or neutral starting point, but aim at adjustment. Therefore, they require to make a "positive normative choice" [129], i.e. to actively decide which biases the system is allowed to exhibit, and which it must not exhibit. Over the years, many concrete approaches have been suggested to foster different kinds of fairness in artificial systems, especially in AI-based ones [72,79,94,129,132]. Yet, to the best of our knowledge, an approach like ours is still missing. One of the approaches that is closest to ours, namely that by John et al. [63], is not local and therefore not suitable for runtime monitoring. Also, it is not model-agnostic. So, to the best of our knowledge, our approach provides a new contribution to the debate on unfairness detection. It is important to note/recognise that our approach can only be understood as part of a more holistic approach to preventing or reducing unfairness. After all, there are many sources of unfairness [8] (also see Figure 5 and Appendix B). Therefore, not every technical measure is able to detect every kind of unfairness and eliminating one source of unfairness might not be sufficient to eliminate all unfairness. Our approach tackles only unfairness introduced by the system, but not other kinds of unfairness. Discrimination We understand discrimination as dissimilar treatment of similar cases or similar treatment of dissimilar cases without justifying reason. This is a definition that can also be found in the law [43, §43]. Our work is exclusively focused on discrimination qua dissimilar treatment of similar cases. Discrimination requires a thoughtful and largely not formalisable consideration of "justifying reason". However, we will exploit the relation of discrimination and fairness: Unfairness in a system can arguably be a good proxy of discrimination -even though not every unfair treatment by a system necessarily constitutes discrimination (especially not in the legal sense). Thus, a tool that highlights cases of unfairness in a system can be highly instrumental in detecting discriminatory features of a system. It is not viable, though, to let such a tool rule out unfair treatment fully automatically without human oversight, since there could be justifying reason to treat two similar inputs in a dissimilar way. Individual Fairness Unica from Example 2 should be able to detect individual unfairness. An operationalisation thereof by Dwork et al. [34] is based on the Lipschitz condition to enforce that similar individuals are treated similarly. To measure similarity, they assume the existence of an input distance function d In and an output distance function d Out . This assumption is very similar to the one that we implicitly made in the previous sections for robust cleanness and func-cleanness. However, in the case of the fair treatment of humans finding reasonable distance functions is more challenging than it was for the examples in the previous chapters. Dwork et al. 
assume that both distance functions perfectly measure distances between individuals 5 and between outputs of the system, respectively, but admit that in practice these distance functions are only approximations of a ground truth at best. They suggest that distance measures might be learned, but there is no one-size-fits-all approach to selecting distance measures. Indeed, obtaining such distance metrics is a topic of active research [60,85,133]. Additionally, the Lipschitz condition assumes a Lipschitz constant L to establish a linear constraint between input and output distances. Lipschitz-fairness comes with some restrictions that limit its suitability for practical application: d In -d Out -relation: High-risk systems are typically complex systems and ask for more complex fairness constraints than the linearly bounded output distances provided by the Lipschitz condition. For example, using the Lipschitz condition prevents us from allowing small local jumps in the output and at the same time forbidding jumps of the same rate of increase over larger ranges of the input space (also see supplementary material in Section A). Input relevance: The condition quantifies over the entire input domain of a program. This overlooks two things: first, it is questionable whether each input in such a domain is plausible as a representation for a real-world individual. But whether a system is unfair for two implausible and purely hypothetical inputs is largely irrelevant in practice. Secondly, it also ignores that mere potential unfair treatment is at most a threat, not necessarily already a harm [104]. Therefore, even with a restriction to only plausible applicants, the analysis might take into account more inputs than needed for many real-world applications. What is important in practice is the ability to determine whether actual applicants are treated unfairly -and for this it is often not needed to look at the entire input domain. Monitorability: In a monitoring scenario with the Lipschitz condition in place, a fixed input i 1 must be compared to potentially all other inputs i 2 . Since the input domain of the system can be arbitrarily large, the Lipschitz condition is not yet suitable for monitoring in practice (for a related point see John et al. [63]). We propose a notion of individual fairness that is based on Definition 3. Instead of cleanness contracts we consider here fairness contracts, which are tuples F = ⟨d In , d Out , f ⟩ containing input and output distance functions and the function f relating input distances and output distances. Notably, the set of standard inputs StdIn known from cleanness contracts is not part of a fairness contract; it is unknown what qualifies an input to be 'standard' in the context of fairness analyses. Still, our fairness definition evaluates fairness for a set of individuals I ⊆ In (e.g., a set of applicants), which has conceptual similarities to the set StdIn. A fairness contract specifies certain fairness parameters for a concrete context or situation. Such parameters should generally not already include I to avoid introducing new unfairness through the monitor by tailoring it to specific inputs individually or by treating certain inputs differently from others. Func-fairness can thus be defined as follows: Definition 9 A deterministic sequential program P : In → Out is func-fair for a set I ⊆ In of actual inputs w.r.t. a fairness contract F = ⟨d In , d Out , f ⟩, if and only if for every i ∈ I and i ′ ∈ In, d Out (P(i), P(i ′ )) ≤ f (d In (i, i ′ )). 
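The condition of Definition 9 is easy to check for a single pair of inputs, as the following Python sketch shows. The scoring program, the distance functions, and the budget function f are illustrative assumptions with no meaning beyond the example.

```python
# Minimal sketch: the func-fairness condition of Definition 9 for one pair
# of inputs. Program, distances, and f are illustrative assumptions.

def func_fair_pair(P, d_in, d_out, f, i, i_prime):
    """True iff the pair satisfies d_out(P(i), P(i')) <= f(d_in(i, i'))."""
    return d_out(P(i), P(i_prime)) <= f(d_in(i, i_prime))

# Illustrative instantiation: scalar inputs and outputs with absolute
# distances and an assumed output-distance budget f.
P     = lambda x: 2 * x              # assumed scoring program
d_in  = lambda a, b: abs(a - b)
d_out = lambda a, b: abs(a - b)
f     = lambda d: 1.0 + 1.5 * d      # assumed budget on output distances

assert func_fair_pair(P, d_in, d_out, f, i=4.0, i_prime=4.2)
assert not func_fair_pair(P, d_in, d_out, f, i=4.0, i_prime=10.0)
```

The assumed f also indicates how func-fairness generalises the Lipschitz condition: the permitted output distance need not be the linear bound L · d_In(i, i′).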
The idea behind func-fairness is that every individual in set I is compared to potential other inputs in the domain of P. These other inputs do not necessarily need to be in I, nor do these inputs need to have "physical counterparts" in the real world. Driven by the insights of the Input relevance restriction of Lipschitzfairness, we explicitly distinguish inputs in the following and will call inputs that are given to P by a user actual inputs, denoted i a , and call inputs to which such i a are compared to synthetic inputs, denoted i s . Actual inputs are typically 6 inputs that have a real-world counterpart, while this might or might not be true for synthetic inputs. On first glance, an alternative to using synthetic inputs is to use only actual inputs, e.g., to compare every actual input with every other actual input in I. For example, for a university admission, all applicants could be compared to every other applicant. However, this would heavily rely on contingencies: the detection of unfair treatment of an applicant depends on whether they were lucky enough that, coincidentally, another candidate has also applied who aids in unveiling the system's unfairness towards them. Instead, func-fairness prefers to over-approximate the set of plausible inputs that actual inputs are compared to rather than under-approximating it by comparing only to other inputs in I. This way, the attention of the human exercising oversight of the system might be drawn to cases that are actually not unfair, but as a competent human in the loop, they will most likely be able to judge that the input was compared to an implausible counterpart. This will usually enable more effective human oversight than an under-approximation that misses to alert the human to unfair cases. Notice that func-fairness is a conservative extension of Lipschitz-fairness. With I = In and f (x) = L·x, func-fairness mimics Lipschitz-fairness. Wachter et al. [129] classify the Lipschitz-fairness of Dwork et al. [34] as bias-transforming. As we generalise this and introduce no element that has to be regarded as bias-preserving, our approach arguably is bias-transforming, too. Func-fairness, with its function f , provides a powerful tool to model complex fairness constraints. How such an f is defined has profound impact on the quality of the fairness analysis. A full discussion about which types of functions make a good f go beyond the scope of this article. A suitable choice for f and the distance functions d In and d Out heavily depends on the context in which fairness is analysed -there is no one-fits-it-all solution. Func-fairness makes this explicit with the formal fairness contract F = ⟨d In , d Out , f ⟩. Fairness Monitoring We develop a probabilistic-falsification-based fairness monitor that, given a set of actual inputs, searches for a synthetic counterexample to falsify a system P w.r.t. a fairness contract F. To this end, it is necessary to provide a quantitative description of func-fairness that satisfies the characteristics of a robustness estimate. We call this description fairness score. For an actual input i a and a synthetic input i s we define the fairness score as F (i a , i s ) := f (d In (i a , i s )) − d Out (P(i a ), P(i s )). F is indeed a robustness estimate function: if F (i a , i s ) is non-negative, then d Out (P(i a ), P(i s )) ≤ f (d In (i a , i s )), and if it is negative, then d Out (P(i a ), P(i s )) ̸ ≤ f (d In (i a , i s )). 
For a set of actual inputs I, the definition generalises to F (I, i s ) := min{F (i a , i s ) | i a ∈ I}, i.e., the overall fairness score is the minimum of the concrete fairness scores of the inputs in I. Notice that R I (i s ) := F (I, i s ) is essentially the quantitative interpretation of φ u-func (from Lemma 11) after simplifications attributed to the fact that P is a sequential and deterministic program (cf. Definition 2.2 vs. Definition 3). Algorithm 2 shows FairnessMonitor, which builds on Algorithm 1 to search for the minimal fairness score in a system P for fairness contract F. The algorithm stores fairness scores in triples that also contain the two inputs for which the fairness score was computed. The minimum in a set of such triples is Algorithm 2 FairnessMonitor, with ξ-min S = (ξ, i 1 , i 2 ) only if (ξ, i 1 , i 2 ) ∈ S and for all (ξ ′ , i ′ 1 , i ′ 2 ) ∈ S, ξ ′ ≥ ξ Falsification Parameters: PS: Proposal scheme, β: Temperature parameter Input: System P : In → Out, Fairness contract F = ⟨d In , d Out , f ⟩, and set of actual inputs I Output: A minimal fairness score triple from R × I × In. 10: if r ≤ α then 11: defined by the function ξ-min that returns the triple with the smallest fairness score of all triples in the set. The first line of FairnessMonitor initialises the variable i s with an arbitrary actual input from I. For this value of i s , the algorithm checks the corresponding fairness scores for all actual inputs i a ∈ I and stores the smallest one. In line 3, the globally smallest fairness score triple is initialised. In line 5 it uses the proposal scheme to get the next synthetic input i ′ s . Line 6 is similar to line 2: for the newly proposed i ′ s it finds the smallest fairness score, stores it, and updates the global minimum if it found a smaller fairness score (line 7). Lines 8-13 come from Algorithm 1. The only difference is that in addition to i s we also store the fairness score ξ. Line 4 of Algorithm 2 differs from Algorithm 1 by terminating the falsification process after a timeout occurs (similar to the adaptation of Algorithm 1 in Section 4). Hence, the algorithm does not (exclusively) aim to falsify the fairness property, but aims at minimising the fairness score; even if the fair treatment of the inputs in I cannot be falsified in a reasonable amount of time, we still learn how robustly they are treated fairly, i.e., how far the least fairly treated individual in I is away from being treated unfairly. After the timeout occurs, the algorithm returns the triple with the overall smallest seen fairness score ξ min , together with the actual input i 1 and the synthetic input i 2 for which ξ min was found. In case ξ min is negative, i 2 is a counterexample for P being func-fair. FairnessMonitor implements a sound F-unfairness detection as stated in Proposition 7. However, it is not complete, i.e., it is not generally the case that P is func-fair for I if ξ is positive. It may happen that there is a counterexample, but FairnessMonitor did not succeed in finding it before the timeout. This is analogue to results obtained for model-agnostic robust cleanness analysis [18]. Proposition 7 Let P : In → Out be a deterministic sequential program, F = ⟨d In , d Out , f ⟩ a fairness contract, and I a set of actual inputs. Further, let (ξ min , i 1 , i 2 ) be the result of FairnessMonitor(P, F, I). If ξ min is negative, then P is not func-fair for I w.r.t. F. 
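A condensed Python sketch of FairnessMonitor is given below. As with the falsification sketch earlier, the Metropolis-style acceptance rule, the constant temperature, and the fixed iteration budget standing in for the timeout are simplifying assumptions, while the score computation follows F(I, i_s) = min over i_a ∈ I of f(d_In(i_a, i_s)) − d_Out(P(i_a), P(i_s)).

```python
import math
import random

def fairness_monitor(P, d_in, d_out, f, I, propose, beta=1.0, max_iters=3000):
    """Search for a synthetic input with minimal fairness score.

    Returns a triple (score, actual_input, synthetic_input); a negative score
    witnesses that P is not func-fair for I w.r.t. the contract (d_in, d_out, f).
    """
    def score(i_s):
        # Smallest fairness score of i_s against all actual inputs in I,
        # together with the actual input attaining it.
        best_val, best_ia = None, None
        for i_a in I:
            val = f(d_in(i_a, i_s)) - d_out(P(i_a), P(i_s))
            if best_val is None or val < best_val:
                best_val, best_ia = val, i_a
        return best_val, best_ia

    i_s = next(iter(I))                     # start from an arbitrary actual input
    xi, i_a = score(i_s)
    best = (xi, i_a, i_s)                   # globally smallest triple seen so far
    for _ in range(max_iters):              # budget stands in for the timeout
        i_s_new = propose(i_s)              # proposal scheme for synthetic inputs
        xi_new, i_a_new = score(i_s_new)
        if xi_new < best[0]:
            best = (xi_new, i_a_new, i_s_new)
        # Metropolis-style acceptance as in the falsification sketch (assumption).
        alpha = math.exp(-beta * max(0.0, xi_new - xi))
        if random.random() <= alpha:
            i_s, xi = i_s_new, xi_new
    return best
```

In this sketch the loop simply exhausts its budget, so the returned triple also quantifies how robustly the inputs in I are treated fairly when no violation is found, matching the intention described above.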
Moreover, FairnessMonitor circumvents major restrictions of the Lipschitzfairness: d In -d Out -relation: Func-fairness defines constraints between input and output distances by means of a function f , which allows to express also complex fairness constraints. For a more elaborate discussion, see Section A. Input relevance: Func-fairness explicitly distinguishes between actual and synthetic inputs. This way, func-fairness acknowledges a possible obstacle of the fairness theory when it comes to a real-world usage of the analysis, namely that only some elements of the system's input domain might be plausible and that usually only few of them become actual inputs that have to be monitored for unfairness. Monitorability: FairnessMonitor demonstrates that func-fairness is monitorable. It resolves the quantification over In using the above concepts from probabilistic falsification using the robustness estimate function F as defined above. Towards func-fairness in the loop If a high-risk system is in operation, a human in the loop must oversee the correct and fair functioning of the outputs of the system. To do this, the human needs real-time fairness information. Figure 6 shows how this can be achieved by coupling the system P and the FairnessMonitor in Algorithm 2 in a new system called FairnessAwareSystem. FairnessAwareSystem is sketched in Algorithm 3. Intuitively, the FairnessAwareSystem is a higher-order program that is parameterised with the original program P and the fairness contract F. When instantiated with these parameters, the program takes arbitrary (actual) inputs i a from In. In the first step, it does a fairness analysis using FairnessMonitor with arguments P, F, and {i a }. To make fairness scores comparable, FairnessAwareSystem normalises the fairness score ξ received from FairnessMonitor by dividing 7 it by the output distance limit f (d In (i a , i s )). For fair outputs, the score will be between 0 (almost unfair) and 1 (as fair as possible). 8 Outputs that are not func-fair are accompanied by a negative score representing how much the limit f (d In (i a , i s )) is exceeded. A fairness score of −n means that the output distance of P(i a ) and P(i s ) is n + 1 times as high as that limit. Finally, FairnessAwareSystem returns the triple with P's output for i a , the normalised fairness score, and the synthetic input with its output witnessing the fairness score. Interpretation of monitoring results Especially when FairnessAwareSystem finds a violation of func-fairness, the suitable interpretation and appropriate response to the normalised fairness score proves to be a non-trivial matter that requires expertise. Example 6 Instead of using P from Example 2 on its own, Unica now uses FairnessAwareSystem with a suitable fairness contract. (Which fairness contracts are suitable is an open research problem, see Limitations & Challenges in Section 7.) and thereby receive a fairness score along with P's verdict on each applicant. If the fairness score is negative, she can also take into account the information on the synthetic counterpart returned by FairnessAwareSystem. Among the 4096 applicants for the PhD program, the monitoring assigns a negative fairness score to three candidates: Alexa, who received a low score, Eugene, who was scored very highly, and John, who got an average score. According to their scoring, Alexa would be desk-rejected, while Eugene and John would be considered further. Alexa's synthetic counterpart, let's call him Syntbad, is ranked much higher than Alexa. 
In fact, he is ranked so high that Syntbad would not be desk-rejected. Unica compares Alexa and Syntbad and finds that they differ in only one respect: Syntbad's graduate university is the one in the official ranking immediately below the one that Alexa attended. Unica does some research and finds that Alexa's institution is predominantly attended by People of Colour, while this is not the case for Syntbad's institution. Therefore, FairnessAwareSystem helped Unica not only to find an unfair treatment of Alexa, but also to uncover a case of potential racial discrimination. John's counterpart, Synclair, is ranked much lower than him. Unica manually inspects John's previous institution (an infamous online university), his GPA of 1.8, and his test result of only 13%. She finds that this very much suggests that John will not be a successful PhD candidate and desk-rejects him. Therefore, Unica has successfully used FairnessAwareSystem to detect a fault in the scoring system P by which John would have been treated unfairly in a way that would have been to his advantage. Eugene received a top score, but his synthetic counterpart, Syna, received only an average one. Unica suspects that Eugene was ranked too highly given his graduate institution, GPA, and test score. However, as he would not have been desk-rejected either way, nothing changes for Eugene, and the unfairness he was subjected to has no effect on him.
Fig. 7: Exemplary illustration of configurations of an input (red cross), its synthetic counterparts (grey circles), and the synthetic counterpart with the minimal fairness score (blue polygon), for a two-dimensional input space (grid) and a one-dimensional output: (a) a case of unfairness where the input is treated worse than a relevant counterpart; (b) a case of unfairness where the input is treated better than a relevant counterpart; (c) a case of no detected unfairness.
The cases of John and Eugene share similarities with the configuration in (b) in Figure 7, the case of Alexa with (a), and the cases of all other 4093 candidates with (c). If our monitor finds only a few problematic cases in a (sufficiently large and diverse) set of inputs, our monitoring helps Unica from our running example by drawing her attention to cases that require special attention. Thereby, individuals who are judged by the system have a better chance of being treated fairly, since even rare instances of unfair treatment are detected. If, on the other hand, the number of problematic cases found is large, or Unica finds especially concerning cases or patterns, this can point to larger issues within the system. In these cases, Unica should take appropriate steps and make sure that the system is no longer used until clarity is established as to why so many violations or concerning patterns are found. If the system is found to be systematically unfair, it should arguably be removed from the decision process. A possible conclusion could also be that the system is unsuitable for certain use cases, e.g., for use on individuals from a particular group. Accordingly, it might not have to be removed altogether but only needs to be restricted such that problematic use cases are avoided. In any case, significant findings should also be fed back to developers or deployers of the potentially problematic system.
A fairness monitoring such as in FairnessAwareSystem or a fairness analysis as in FairnessMonitor could also be useful to developers, regulating authorities, watchdog organisations, or forensic analysts, as it helps them to check the individual fairness of a system in a controlled environment. Interdisciplinary Assessment of Fairness Monitoring Regulations for car-related emissions have been in force for a considerable amount of time, so their legal interpretation is mostly clear. In the case of human oversight of AI systems, the AI Act is new and parts of it are legally ambiguous. This raises the question of whether our approach meets requirements that go beyond pre-theoretical deliberations. Even though comprehensive analyses would go far beyond the scope of this paper, we will nevertheless assess some key normative aspects in philosophical and legal terms, and also briefly turn to the related empirical aspects, especially from psychology. Psychological assessment Fairness monitoring, provided it is extended by an adequate user interface, promises various advantages in terms of human-system interaction in application contexts, and these advantages call for empirical tests and studies. We will only discuss a possible benefit that closely aligns with the current draft of the AI Act: our approach may support effective human oversight. Two central aspects of effective oversight are situation awareness and warranted trust. Our method highlights unfairness in outputs, which can be expected to increase users' situation awareness (i.e., "the perception of the elements in the environment within a volume of time and space, the comprehension of their meaning and the projection of their status in the near future" [36, p. 36]), a variable that is central to effective oversight [37]. In the minimal case, this allows users to realise that something requires their attention and that they should check the outputs for plausibility and adequacy. In the optimal case and after some experience with the monitor, it may even allow users to predict instances where a system will produce potentially unfair outputs. In any case, the monitoring should enable them to understand limitations of the system and to feed back their findings to developers who can improve the system. This leads us to warranted trust, which includes that users are able to adequately judge when to rely on system outputs and when to reject them [61,71]. Building warranted trust strongly depends on users being able to assess system trustworthiness in the given context of use [71,108]. In their theoretical model of trust in automation, Lee and See [71] propose that trustworthiness relates to different facets, of which performance (e.g., whether the system performs reliably with high accuracy) and process (e.g., knowing how the system operates and whether the system's decision processes help to fulfil the trustor's goals) are especially relevant in our case. Specifically, fairness monitoring should enable users to more accurately judge system performance (e.g., by revealing possible issues with system outputs) and system processes (e.g., whether the system's decision logic was appropriate). In line with Lee and See's propositions, this should provide a foundation for users to be better able to judge system trustworthiness and should thus be a promising means to promote warranted trust.
In consequence, our monitoring provides a needed addition to high-risk use contexts of AI because it offers information enabling humans to more adequately use AI-based systems in the sense of possibly better human-system decision performance and with respect to user duties as described in the AI Act. Philosophical assessment More effective oversight promises more informed decision-making. This, in turn, enables morally better decisions and outcomes, since humans can morally ameliorate outcomes in terms of fairness and can see to it that moral values are promoted. Also, fairness monitoring helps in safeguarding fundamental democratic values if it is applied to potentially unfair systems which are used in certain societal institutions of a high-risk character such as courts or parliaments. It could, for example, make AI-aided court decisions more transparent and promote equality before the law. However, since our approach requires finding context-appropriate and morally permissible parameters for F, moral requirements arise to enable the finding of such parameters. This not only affects, e.g., developers of such systems, but also those who are in a position to enforce that adequate parameters are chosen, such as governmental authorities, supervising institutions or certifiers. Apart from that, various parties have arguably a legitimate interest in adequately ascribing moral responsibility for the outcomes of certain decisions to human deciders [13] -regardless of whether the decision making process is supported by a system. Adequately ascribing moral responsibility is not always possible, though. One precondition for moral responsibility is that the agent had sufficient epistemic access to the consequences of their doing [88,117], i.e., that they have enough and sufficiently well justified beliefs about the results of their decision. Someone overseeing a university selection process (like Unica) should, for example, have sufficiently well justified beliefs that, at the very least, their decisions do not result in more unfairness in the world. If the admission process is supported by a black-box system, though, Unica cannot be expected to have any such beliefs since she lacks insight in the fairness of the system. Therefore, adequate responsibility ascription is usually not possible in this scenario. Our monitoring alleviates this problem by providing the decider with better epistemic access to the fairness of the system. FairnessAwareSystem helps in making Unica's role in the decision process significant and not only that of a mere button-pusher. FairnessAwareSystem makes it possible for her to fulfil some of the responsibilities and duties plausibly associated with her role. For example, she can now be realistically expected to not only detect, but resolve at least some cases of apparent unfairness competently (although she may need additional information to do so). In this respect, she should not be 'automated away' (cf. [77]). Legal assessment A central legislative debate of our time is how to counter the risks AI systems can pose to the health and safety or fundamental rights of natural persons. Protective measures must be taken at various levels: First, before being permitted on the market, it must be ensured ex ante that such high-risk AI-systems are in conformity with mandatory requirements 9 regarding safety and human rights. 
This means in particular that the selection of the properties which a system should exhibit requires a positive normative choice and should not simply replicate biases present in the status quo [129]. In addition, AI systems must be designed and developed in such a way that natural persons can oversee their functioning. For this purpose, it is necessary for the provider to identify appropriate human oversight measures before the system is placed on the market or put into service. In particular, such measures should guarantee that the natural persons to whom human oversight has been assigned have the necessary competence, training and authority to carry out that role [39, recital 48][40, Art. 14 (5)]. Second, during runtime, the proper functioning of high-risk AI systems which have been placed on the market lawfully must be ensured. To achieve this goal, a bundle of different measures is needed, ranging from legal obligations to implement and perform meaningful oversight mechanisms to user training and awareness in order to counteract 'automation bias'. Furthermore, the AI Act proposal requires deployers to inform the provider or distributor and suspend the use of the system when they have identified any serious incidents or any malfunctioning [39,40,Art. 29(4)]. Third, and ex post, providers must act and take the necessary corrective actions as soon as they become aware, e.g. through information provided by the deployer, that the high-risk system does not meet, or no longer meets, the legal requirements [39,40,Art. 16(g)]. To this end, they must establish and document a system of monitoring that is proportionate to the type of AI technology and the risks of the high-risk AI system [39,40,Art. 61(1)]. Fairness monitoring can be helpful in all three of the above respects. Therefore, we argue that there is even a legal obligation to use technical measures such as the method presented in this paper if this is the only way to ensure effective human oversight. Conclusion & Future Work This article brings together software doping theory and probabilistic falsification techniques. To this end, it proposes a suitable HyperSTL semantics and characterises robust cleanness and func-cleanness as HyperSTL formulas and, for the special case of finite standard behaviour, STL formulas. Software doping techniques have been extensively applied to tampered diesel emission cleaning systems; this article continues this path of research by demonstrating how testing of real cars can become more effective. For the first time, we apply software doping techniques to high-risk (AI) systems. We propose a runtime fairness monitor to promote effective human oversight of high-risk systems. The development of this monitor is complemented by an interdisciplinary evaluation from a psychological, philosophical, and legal perspective. Limitations & Challenges A challenge to those employing robust cleanness or func-cleanness analysis is the selection of suitable parameters, especially d In , d Out , and f or κ i and κ o . Because of their high degree of context sensitivity, there are no paradigmatic candidates for them that one can default to. Instead, they have to be carefully selected with the concrete system, the structure of input data and the situation of use in mind. Reasonable choices for robust cleanness analysis of diesel emissions have been proposed in recent work [18,20].
With respect to individual fairness analysis, potential systems to which FairnessAwareSystem or FairnessMonitor can be applied to are still too diverse to give recommendations for the contract parameters. Obviously, further technical limitations include that f , d In , and d Out must be computable. With a particular regard to fairness analysis, we identify also non-technical limitations. As seen in Figure 5, our fairness monitoring aims to uncover a particular kind of unfairness, namely individual unfairness that originates from within the system. This excludes all kinds of group unfairness as well as unfairness from sources other than the system. Another limitation is the human's competence to interpret the system outputs. Even though this is not a limitation that is inherent to our approach, it nevertheless will arguably be relevant in some practical cases, and an implementation of the monitoring always has to happen with the human in mind. For example, the design of the tool should avoid creating the false impression that the system is proven to be fair for an individual if no counterexample has been found. Interpretations like this could lead to inflated judgements of system trustworthiness and eventually to overtrusting system outputs [108,110]. Also, it might be reasonable to limit access to the monitoring results: if individuals who are processed by the system have full access to their fairness analysis, they could use this to 'game' the system, i.e. they could use the synthetic inputs to slightly modify their own input such that they receive a better outcome. While more transparency for the user is generally desirable, this has to be kept in mind to avoid introducing new unfairness on a meta-level. Future Work The probabilistic falsification technique we use in this article can be seen as a modular framework that consists of several interchangeable components. One of these components is the optimisation technique used to find the input with minimal robustness value. Algorithm 1 uses a simulated annealing technique [28,105], but other techniques have been proposed for temporal logic falsification, too [4,106]. We want to further look into such alternative optimisation techniques and to evaluate if they offer benefits w.r.t. cleanness falsification. Finally, the fairness monitoring approach has been presented using a toy example. It is not claimed to be readily applicable to real-life scenarios. Besides the future work that has already been mentioned throughout the paper, we are planning on various extensions of our approach, and are working on an implementation that will allow us to integrate the monitoring into a real system. Moreover, we plan to test the possible benefits and shortcomings of the approach in user studies where decision-makers are tasked to make hiring decisions with and without the fairness monitoring approach. Further work will encompass activities such as the improvement and embedding of the algorithm FairnessAwareSystem into a proper tool that can be used by non-computer-scientists, and the extension of the monitoring technique to cover more types of unfairness. For example, logging the output of the fairness monitor could be used to identify groups that are especially likely to be treated unfairly by the system: The individual fairness verdicts provided by FairnessAwareSystem and FairnessMonitor may also be logged and considered for further fairness assessments or other means of quality assurance of system P . 
Statistical analysis might unveil that individuals of certain groups are treated unfairly more frequently than individuals from other groups. Depending on the distinguishing features of the evaluated group, this can uncover problems in P , especially if protected attributes, such as gender, race, age, etc, are taken into account. Thereby, system fairness can be assessed for protected attributes without including them in the input of P , which should generally be avoided, and even without disclosing them to the human in the loop. By evaluating the monitoring logs from sufficiently many diverse runs of FairnessAwareSystem, our local method can be lifted such that it resembles a global method for many practical applications, i.e. we can make statistical statements about the general fairness of P . Such an evaluation can also be used to extract prototypes and counterexamples in the spirit of Been et al. [65] illustrating the tendency to judge unfairly. This is an interesting combination of individual and group fairness that we want to look into further. Other insights from the research on reactive systems [18,20,31] can potentially be used to further enrich the monitoring. Finally, various disciplines have to join forces to resolve highly interdisciplinary questions such as what constitutes reasonable and adequate choices for f , d In , and d Out in given contexts of application. Competing interests. The authors have no competing interests to declare that are relevant to the content of this article. Appendix A Technical Appendix This appendix illustrates that func-fairness is more expressive than Lipschitzfairness and why this is useful. For this, we use as a toy example a very simple, hypothetical HR scoring system that aggregates five scores given to the candidates. We remark that the whole scenario, the implementation of the system, the choice of distance functions and f , is likely not applicable for real-life situations; everything is picked so that our explanations are understandable. Suppose that certain qualities and characteristics of the applicants are prescored by other systems on a scale from 0 to 100 %, where 0 means that the candidate is utterly unsuitable for the job in a certain regard, while a scoring of 100 % means that the candidate is perfect for the job in this regard. In particular, we will assume that the following marks are given to each applicant: an education mark for how well they are academically suitable for the job, an experience mark for how well their previous work experience fits the job, a personality mark for their personal and social skills, a mental ability mark for what is colloquially referred to as an applicant's general intelligence, and, finally, a skill mark that tracks the special skills that applicants have which might be beneficial for the job, such as their knowledge of foreign languages. The system P that is of interest for us in this example is the one that aggregates all of these marks and gives out an overall score of how well the candidate is suited for the job. The human responsible for the hiring process can use this in her hiring decision, e.g., she can focus on the top-scoring candidates and choose among them. Let M = [0, 1] ⊆ R be the reals between 0 and 1. Each of the five marks mentioned above is a real number from set M. The input domain In = M 5 for the sketched HR system is a tuple of five marks. The output of the system is the overall suitability score of an applicant, which is also a value from M. 
The distance between two inputs is defined as the euclidean distance, normalised to a value between 0 and 1, i.e., d In ((ed, ex, pe, in, sk), (ed′, ex′, pe′, in′, sk′)) = √((ed − ed′)² + (ex − ex′)² + (pe − pe′)² + (in − in′)² + (sk − sk′)²) / √5, where ed represents the education mark, ex the experience mark, pe the personality mark, in the mental ability mark, and sk the skill mark of an applicant. The distance between two outputs, d Out (o, o′) = |o − o′|, is the absolute difference between the overall scores o and o′. Note that output distances are also values between 0 and 1. Our scoring system is a function P : M 5 → M. We will assume here that P is defined as the sum of five subscoring systems, one for each of the five input marks, each computing a value between 0 and 0.2. Then, P ((ed, ex, pe, in, sk)) := P ed (ed) + P ex (ex) + P pe (pe) + P in (in) + P sk (sk). Let P ed , P ex , P pe and P in be defined according to the plot shown in Fig. A1 a). With an increasing mark, these subscores increase up to an input mark of 0.8, whereafter the applicant becomes overqualified and the subscore slowly decreases. P sk is depicted in Fig. A1 b): the skill mark is less important, however a minimum amount of skills is required for the job. Hence, there is a jump of the skill score at a skill mark of roughly 0.19. Let John be an applicant with ed = ex = pe = in = 0.5 and a skill mark of sk = 0.2, which maps to a skill score on the plateau after the jump. The subscores for the education, experience, personality and mental ability marks are 0.12 each. The skill score computed for John is 0.05. Hence, John's overall score is P (John) = 4 · 0.12 + 0.05 = 0.53. Let Synthia be a synthetic applicant with the same marks as John, except for the skill mark, which is 0.19 in Synthia's case. As depicted in Fig. A1 b), the skill subscore for skill mark 0.19 is 0.02 - Synthia is at the plateau right before the jump of the skill score. Her overall score is P (Synthia) = 4 · 0.12 + 0.02 = 0.50. The input distance between John and Synthia is d In (John, Synthia) = 0.01/√5 ≈ 0.0045 and the output distance is d Out (John, Synthia) = |0.53 − 0.5| = 0.03. It is easy to see that if we use Lipschitz-fairness, the Lipschitz constant L must be at least L = 6.7 to allow the small jump in the skill subscoring function, since 0.03/0.0045 ≈ 6.7. We argue that small jumps like those in the skill subscore are normal behaviour and, hence, fair. Assume for the remainder of this example that we use Lipschitz-fairness with L = 6.7. Consider now a slightly modified variant P ′ of P . P ′ is as P but uses a different subscoring function P ′ sk for the skill score. Fig. A1 c) shows the skill subscoring function for P ′ . P ′ sk has a jump at skill mark 0.13 that is significantly larger than that in P sk . We assume in this example that such a big jump is unfair. A single Lipschitz constant, however, enforces the same proportionality between input and output distances at every scale of input distance. Func-fairness is different in this regard. Function f receives the input distance and can freely define a bound on output distances based on the input distance. Indeed, the following concrete f overcomes the problem observed in the example: f (d) = 8d + 0.001 if d ≤ 0.01, f (d) = 4d + 0.001 if 0.01 < d ≤ 0.1, and f (d) = 2d + 0.001 if d > 0.1. It uses the input distance for a case distinction on the magnitude of the input distance. For input distances up to 0.01, f effectively applies Lipschitz-fairness with L = 8 to allow small jumps. For input distances between 0.01 and 0.1, f behaves like Lipschitz-fairness for L = 4, and for larger input distances, it enforces L = 2. In all cases we add 0.001 to the result to avoid f becoming zero (see footnote 7 on page 33 in the main paper).
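The following Python sketch puts the toy example together so that the numbers above can be reproduced. The distance functions and f follow the definitions just given; the subscoring functions, by contrast, are rough stand-ins calibrated only to the values quoted in the text (a subscore of 0.12 at mark 0.5, and skill subscores of 0.02 and 0.05 just before and after the jump at roughly 0.19), since the exact curves of Fig. A1 are not reproduced here.

```python
import math

# Toy HR scoring example. d_in, d_out and f follow the definitions above;
# the subscoring functions are stand-ins calibrated only to the values
# quoted in the text, not faithful reproductions of Fig. A1.

def d_in(m1, m2):
    # Euclidean distance between two mark tuples, normalised to [0, 1].
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(m1, m2))) / math.sqrt(5)

def d_out(o1, o2):
    # Absolute difference of the overall scores.
    return abs(o1 - o2)

def f(d):
    # Case distinction on the magnitude of the input distance: emulated
    # Lipschitz constants 8, 4 and 2, plus 0.001 to keep the limit positive.
    if d <= 0.01:
        return 8 * d + 0.001
    if d <= 0.1:
        return 4 * d + 0.001
    return 2 * d + 0.001

def sub_common(mark):
    # Stand-in for P_ed, P_ex, P_pe, P_in: rises to 0.2 at mark 0.8 and then
    # gently declines (overqualification); yields 0.12 at mark 0.5 as in the text.
    if mark <= 0.5:
        return 0.24 * mark
    if mark <= 0.8:
        return 0.12 + 0.08 * (mark - 0.5) / 0.3
    return 0.2 - 0.05 * (mark - 0.8)

def sub_skill(mark):
    # Stand-in for P_sk: a plateau of 0.02 right before the jump at roughly
    # mark 0.19 and a plateau of 0.05 right after it.
    if mark < 0.15:
        return 0.13 * mark
    if mark < 0.195:
        return 0.02
    if mark < 0.30:
        return 0.05
    return min(0.05 + 0.2 * (mark - 0.30), 0.2)

def P(marks):
    ed, ex, pe, mental, sk = marks
    return sub_common(ed) + sub_common(ex) + sub_common(pe) + sub_common(mental) + sub_skill(sk)

john = (0.5, 0.5, 0.5, 0.5, 0.20)
synthia = (0.5, 0.5, 0.5, 0.5, 0.19)

d = d_in(john, synthia)                       # ~0.0045
print(P(john), P(synthia))                    # 0.53 and 0.50
print(d_out(P(john), P(synthia)) <= f(d))     # True: func-fair despite the small jump
```

A variant P ′ with a much larger jump in the skill subscore can be obtained by swapping sub_skill accordingly; this is the modification discussed next.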
Applying func-fairness with C = ⟨d In , d Out , f ⟩ to P , the combination of John and Synthia (and hence the small jump of the skill score function) is not highlighted by FairnessAwareSystem, i.e., it is correctly detected as func-fair. Applied to P ′ , however, John and Synclair fall into the second case in the definition of f , but, as the emulated Lipschitz condition with L = 4 is violated, FairnessAwareSystem likely finds a negative fairness score, i.e., P ′ is not func-fair w.r.t. John. We remark that we propose this f for purely illustrative purposes. For real-world examples, f should be more sophisticated. Finding a suitable f can be a non-trivial task which hinges on various aspects that are crucial for the fairness evaluation in a given context. Clearly, the P and f provided in this illustration are toy examples that are probably inappropriate for real-world usage. A.1 Proofs In this section, we will provide proofs for most of the propositions and theorems in the main paper. First, we show the correctness of the HyperSTL characterisations of robust cleanness and func-cleanness. We first provide a lemma, which destructs the globally ( ) and weak until (W) operators such that the timing constraints encoded by these operators becomes explicit. Lemma 8 Let σ : T → X be a trace with T = N or T = R ≥0 and let ϕ and ψ be STL formulas. Then the following equivalences hold. Proof We prove the two statements separately. 1. Using the definition of the derived operators and , we get that σ, 0 |= ϕ holds if and only if σ, 0 |= ¬(⊤ U ¬ϕ) holds. Using the (Boolean) semantics of STL, we get that this is equivalent to ¬(∃t ≥ 0. σ, t |= ¬ϕ ∧ ∀t ′ < t. σ, t ′ |= ⊤). After simple logical operations, we get that this is equivalent to ∀t ≥ 0. σ, t |= ϕ as required. 2. Using 1, the definition of W, the (Boolean) semantics of STL, and considering that T = N, we get that σ, 0 |= ϕ W ψ if and only if ∃t ∈ N. σ, t |= ψ ∧ ∀t ′ < t. σ, t ′ |= ϕ or ∀t ∈ N. σ, t |= ϕ. We denote this proposition as V . It is easy to see that the right operand of the equivalence to prove can be rewritten to ∀t ∈ N. (∃t ′ ≤ t. σ, t ′ |= ψ) ∨ σ, t |= ϕ. We denote this proposition as W and must show that V ⇒ W and W ⇒ V . To prove that V implies W , we distinguish two cases. • For the first case, assume that the left operand of the disjunction in V holds, i.e., there is some t ∈ N, such that σ, t |= ψ ∧ ∀t ′ < t. σ, t ′ |= ϕ. To prove that W implies V , let PV = {t ∈ N | σ, t |= ψ} be the set of all time points at which ψ holds. If PV is the empty set, it follows immediately from W that ∀t ∈ N. σ, t |= ϕ and that, hence, V holds. If PV is not empty, let t = min PV be the smallest time in PV (the minimum always exists, because T = N). Then, obviously, ∃t ∈ N. σ, t |= ψ. To show that V holds, it suffices to show that ∀t ′ < t. σ, t ′ |= ϕ. This follows from W , because t is the smallest time at which σ, t |= ψ holds and, therefore, for every t ′ < t it does not hold that σ, t ′ |= ψ. □ Lemma 9 is specific to the HyperSTL formula (3); it converts it into a first-order logic formula. Proof Using Lemma 8.1, Lemma 8.2, and Definition 6, we get that holds for Π = {π := w, π ′ := w ′ , π ′′ := w ′′ }. 
Using the the constraint under which Stdπ must be modelled, and by further applying Definition 6 and basic logical operations, we get that the above proposition is equivalent to Finally, after carefully reordering premises, we get that the above holds if and only if We omit the lemma analogue to Lemma 9 that reformulates formula (2) as a first-order characterisation. The proof for Proposition 3 further transforms the first-order characterisations of formulas (2) and (3) to show that they indeed match the definitions of l-robust cleanness and u-robust cleanness. Proposition 3 Let L ⊆ N → (In ∪ Out) be a mixed-IO system and C = ⟨Std, d In , d Out , κ i , κo⟩ a contract or context for robust cleanness with Std ⊆ L. Further, let Stdπ be a quantifier-free HyperSTL subformula, such that L, {π := w}, 0 |= Stdπ if and only if w ∈ Std. Then, L is l-robustly clean w.r.t. C if and only if L, ∅, 0 |= ψ l-rob , and L is u-robustly clean w.r.t. C if and only if L, ∅, 0 |= ψ u-rob . Proof We prove the correctness for l-robust cleanness and u-robust cleanness separately and begin with u-robust cleanness. Using Lemma 9, we get that holds if and only if After applying simple logical operations and using that eq(i 1 , i 2 ) = 0 if and only if i 1 = i 2 , we get that this is equivalent to , w 2 ↓o[t]) ≤ κo , which, since we assumed Std ⊆ L, is equivalent to the definition of u-robust cleanness for mixed-IO systems. The proof for l-robust cleanness is analogue. □ We recapitulate the proposition similar to Proposition 3 for func-cleanness. The proof for Proposition 4 is conceptually similar to the one for Proposition 3. The only difference is that instead of the reasoning about the W construct, the globally enforced relation between output distances and the result of f must be proven equivalent in the HyperSTL formulas and func-cleanness. We omit the proofs here. Next, we show the correctness of the STL characterisations, i.e., we will prove the correctness of Theorems 5 and 6. We do so by first establishing a connection between the HyperSTL and the STL characterisations. Proof Using Lemma 9 we get that Since Std = {w 1 , . . . , wc}, we can replace the universal and existential quantifiers over Std by a conjunction, respectively disjunction, over the standard traces [103]. We instantiate the universal quantifier for w ′′ with w and get that 1≤a≤c 1≤b≤c From the Boolean semantics of STL and by replacing all traces w, respectively w k , by the corresponding w + -projections, we get the equivalent proposition 1≤a≤c 1≤b≤c With the Boolean semantics of STL and Lemma 8.1 and 8.2 we get the equivalent statement that □ We are now able to prove Theorem 5. Theorem 5 Let L ⊆ N → (In ∪ Out) be a mixed-IO system and C = ⟨Std, d In , d Out , κ i , κo⟩ a context for robust cleanness with finite standard behaviour Std = {w 1 , . . . , wc} ⊆ L. Then, L is u-robustly clean w.r.t. C if and only if (L • Std) |= φ u-rob , where Proof The theorem follows from Proposition 3 and Lemma 10. □ To prove Theorem 6, we establish the following lemma, which is analogue to Lemma 10, up to u-func-cleanness replacing u-robust cleanness. The proof for Lemma 11 is, up to the different reasoning for , identical to that of Lemma 10. We omit it here. Proof The theorem follows from Proposition 4 and Lemma 11. □ Appendix B Fairness Pipeline As explained in Section 2 in the main paper, it is important to recognise that there are many sources of unfairness [8]. Section B shows a more detailed version of Figure 5 in the main paper. 
Not every technical measure is able to detect every kind of unfairness and eliminating one source of unfairness might not be sufficient to eliminate all unfairness. World There can be unfairness in the world that leads to individuals already having worse (or better) starting conditions than others and subsequently have a lower (or higher) chance that the final decision is made in their favour. For example, an individual could be systematically excluded from certain societal resources (e.g., girls who are excluded from education in Afghanistan under the Taliban) which puts these individuals at a disadvantage. Input data The input data or its collection, representation or selection could be problematic and lead to unfairness [137]. If, for example, crucial information is left out in the input data or data is aggregated in unsuitable ways, individuals could face an outcome that is unwarranted by the factual situation. System (and training data) The system itself can introduce new unfairness. Among other things, this can come about by erroneous algorithms or (in the case of a trained model) by problematic training data, e.g., if a certain group of individuals is not properly represented [118]. Output The human decider can fail to interpret the output properly [136,138], which can lead to further unfairness. They could, for example, lack knowledge of the limitations of the system or fail to take into account that the system output is subject to some systematic uncertainty. Decision The human decider can make an unfair decision even in the face of a fair system output and an adequate interpretation thereof, for example if they have conscious or subconscious bias against certain groups [135]. Unfairness in any part of the chain can arguably perpetuate or reinforce unfairness in the world. In the main paper, we propose a runtime monitoring technique that aims to uncover individual unfairness introduced by the system. By focusing on the system and its input-output relation only, we can say that the system is unfair without having to say anything about the degree of fairness with which an individual is treated in other respects in the decision process. It especially allows us to say that a system output is unfair, even though the outcome of the overall decision process is not. It may, for example, be that the system unfairness is 'cancelled out' by something else that is hidden from the system: an applicant with a stellar-looking CV might be treated unfairly by the system because of their age, but not hiring them is not unfair because they are known to have forged their diploma. Cases like this, however, do not make the unfairness introduced by the system any less problematic. C.1 EU Anti-Discrimination Law Antidiscrimination is a principle deeply rooted in EU law. It is enshrined in Art. 21 of the Charter of Fundamental Rights (CFR) [46], which prohibits "[a]ny discrimination based on any ground such as sex, race, colour, ethnic or social origin, genetic features, language, religion or belief, political or any other opinion, membership of a national minority, property, birth, disability, age or sexual orientation" as well as "any discrimination based on grounds of nationality". According to Art. 51 CFR, the addressees of this fundamental right are the EU and its institutions, bodies, offices and agencies as well as the Member States, insofar as they implement Union law. They are directly bound by Art. 21 above all in their legislative activities, but also in their executive and judicial measures. 
In contrast, private individuals are not directly bound by Art. 21 CFR, but they may be bound by regulations implementing this provision. However, according to recent European Court of Justice (ECJ) case law, Art. 21 CFR is directly applicable as a result of Directives, such as Directive 2000/78/EC [120] establishing a general framework for equal treatment in employment and occupation [44, § 76]. Apart from this, while Art. 21 CFR stipulates a general prohibition of any unjustified discrimination, the more specific secondary legislation applicable to private actors only prohibits discrimination only in certain sensitive areas and only with regard to certain protected attributes. Correspondingly, private actors may not discriminate against certain persons-to name just a few-in employment relationships [120], in cases of abuse of a dominant market position [47,Art. 102] or also in so-called mass transactions, i.e., contracts that are typically concluded without regard to the person on comparable terms in a large number of cases [121]. In contrast, discriminating in other legal relationships or on other grounds such as local origin (as opposed to ethnic origin), or a person's financial situation is not generally prohibited. The rationale behind these "discriminatory standards of anti-discrimination law" [56, 111,124] is the principle of private (or personal) autonomy, and more specifically freedom of contract as one of its manifestations, which govern legal transactions between private individuals [73]. According to this principle, individuals are free to shape their legal relationships according to their own preferences and ideas, however irrational or socially unacceptable they may be. In essence, this also includes a right to discriminate against others. This freedom to autonomously form legal relations is only constrained where this is stipulated by anti-discrimination legislation for policy reasons. When using an AI-system to recruit candidates, developers and deployers have to make sure that the system with its parameters comply with these legal requirements set by anti-discrimination law. This means in particular that the selection of the properties which a classifier should exhibit requires a positive normative choice and should not simply replicate biases present in the status quo [129]. However, the risks associated with deploying such systems in an HR context (such as a malfunctioning remaining undetected due to the system's opacity, a huge practical relevance of biased outputs due to the systems' scalability or the human operator's tendency of over-relying on the output produced by the AI system ( "automation bias")), raise the question whether it can still be deemed normatively acceptable that the EU legal framework turns a blind eye on certain forms of discrimination. Furthermore, the principle of private autonomy as rationale for justifying the freedom to discriminate against others is only valid with regard to human's wilful actions, but not to algorithm-generated output. We are not advocating for abolishing the existing balance between private autonomy (freedom to contract) and prohibition to discriminate. So humans should still be permitted to differentiate on grounds that are not caught by anti-discrimination law. However, there is no reason to grant the "right to discriminate" also to a non-human system that has merely "learned" this discrimination. 
In this respect, it seems justified to apply different standards for algorithms with regard to the prohibition of discrimination than for human decisions. With regard to an AI system's decision metrics, therefore, it should be considered to expand the secondary legal framework to include a broad prohibition of discrimination. This would not mean that all discrimination would be unlawful, since objectively justified unequal treatment is, after all, permissible, but it would shift the focus to the question of objective justification [45]. Another legal challenge that will become even more pressing with the advent of technical decision systems is how to detect and prove prohibited discrimination. This is because the prohibition of discrimination resulting from various legal regulations in certain, especially sensitive, areas, such as human resources, presupposes that a difference in treatment is recognised in the first place. The recognition of discrimination is therefore not only in the interest of the decision-maker, who is threatened with sanctions in the event of a violation of the prohibition of discrimination. Rather, it is also essential for the discriminated party to prove the discrimination. For as far as a legal claim follows from a prohibited discrimination, the principle applies that the person who invokes the legal claim must prove the facts giving rise to the claim. Especially when complex algorithms are used, however, it is likely to be extremely difficult to prove corresponding circumstantial evidence. According to the case law of the ECJ, however, the burden of proof is reversed if the party who has prima facie been discriminated against would otherwise have no effective means of enforcing the prohibition of discrimination [41,42]. Monitoring, as described here, would therefore be a suitable means of providing the "prima facie" evidence necessary for shifting the burden of proof. C.2 Discrimination and the GDPR There has recently been discussion if and to which extent data protection law contains obligations for non-discriminating data processing or whether the scope of protection of data protection law is thereby overstretched. There is no explicit prohibition of discrimination in the General Data Protection Regulation (GDPR). According to Article 1 (2), however, the GDPR is intended to protect the fundamental rights and freedoms of natural persons. This is aimed in particular at their right to protection of personal data (Article 8 CFR), but not exclusively so. Thus, the broad and non-restrictive reference to fundamental rights also encompasses all other fundamental rights, including the right to non-discrimination (Article 21 CFR) [38]. This is reflected, for example, in the higher level of protection for data with an increased potential for discrimination, the so-called special categories of personal data under Article 9 GDPR. The GDPR can also be interpreted as granting a "preventive protection against discrimination", namely when discrimination is made impossible from the outset, in that the data-processing agencies cannot gain knowledge of characteristics susceptible to discrimination in the first place, i.e., when any respective data processing is forbidden [25]. Any processing of personal data must furthermore comply with the processing principles set out in Article 5 GDPR, including the fairness principle ('personal data shall be processed fairly') set out in Article 5(1)(a). 
While formerly transparency obligations were read into this principle while the Data Protection Directive was into effect, the regulatory content of the fairness principle is highly disputed since it was split off into a separate processing principle. But due to the fact that discriminatory data processing can hardly be described as fair, a prohibition of discrimination can be linked to the fairness principle [55,75]. However, the concrete scope of the fairness principle clearly goes beyond the understanding of fairness in the context of technical systems on which this paper is based. Specifically for the HR context, there are discrimination-sensitive regulations in the GDPR. Article 9 GDPR makes the processing of special categories of data, i.e., sensitive data and data susceptible to discrimination, subject to particularly strict authorisation criteria, which should in practice rarely be present in recruitment situations. On the one hand, processing for recruitment purposes, i.e., prior to the establishment of an employment relationship, is rarely necessary in order to exercise certain rights and obligations under employment law (Art. 9(2)(b) GDPR), and on the other hand, explicit consent (Art. 9(2)(a) GDPR) will often lack the necessary voluntariness due to the specifics of job application situations and the power imbalances inherent in them. The prohibition of processing sensitive data may be problematic in cases where the link to sensitive data is strictly necessary to detect discriminatory effects. For high-risk systems, Art. 10 V AI Regulation Proposal therefore provides for a new permissive clause: 'To the extent that it is strictly necessary for the purposes of ensuring bias monitoring, detection and correction, ... the providers of such systems may process special categories of personal data' while ensuring appropriate safeguards for the fundamental rights of natural persons. With regard to the processing of non-sensitive personal data, however, the opening clause in Art 88(1) GDPR allows Member States to adopt more specific rules for processing for recruitment purposes, whereby, according to paragraph 2, suitable and specific measures must be ensured to safeguard the fundamental rights of the data subject. These requirements can be met by state-of-the-art monitoring tools. The national regulations cannot be discussed in depth here. For Germany, for example, Section 26 of the Federal Data Protection Act (BDSG) stipulates that personal data may only be processed for recruitment purposes if this is necessary, i.e., if the data processing is required for the decision on recruitment. In any case, data processing may not be necessary if the characteristics depicted in the data may not be taken into account in the hiring decision, for example due to anti-discrimination law [101]. [
Evaluation of a Career Counselling Program Focused on Greek Elementary School Children ' s Career Interests Although childhood is the most significant period in one's career development process, little research attention has been paid to the evaluation of career counselling intervention programs in elementary-aged children. An intervention study was carried out in order to evaluate a career counselling program implemented in one Greek elementary school which focused on the enrichment of the children's career interests. The research methodology used was the quasi experimental research design. Children (N = 84) aged 8-11 years were distributed in experimental and control groups. Τhe impact of the intervention focused on the enrichment of their career interests, which was assessed via semi-structured interviews and use of drawings. The results showed a statistical significant difference between groups concerning children's career interests after intervention, while the analysis of drawings revealed more differences in self-confidence, selfesteem and extraversion in favour of the children that participated in the experimental group. Gender and age differences were also explored and revealed. The results are discussed in relation to various aspects of children's career development, as well as to the significance of career counselling intervention programs. Introduction Career development is a process that starts early in one's life.Childhood should be considered as the most active period of that process (Hartung, Porfeli, & Vondracek, 2005).Theory and research literature so far, views the child's vocational development as a process with various aspects, such as career exploration, awareness, expectations and aspirations, maturity and interests (Ginzberg, Ginsburg, Axelrad, & Herma, 1951;Gottfredson, 1996;Hartung, Porfeli, & Vondracek, 2005;Holland, 1997;Roe, 1957;Super, 1953).Children learn about work at the same period they start to establish a sense of self and the combination of these two parameters, shape the development of a vocational identity and self-concept (Porfeli, Hartung, & Vondracek, 2008;Schmitt-Rodermund & Vondracek, 1999), values (Porfeli, 2007), and interests (Holland, 1997).The aforementioned process differs according to age (Dorr & Lesser, 1980;Howard & Walsh, 2010, 2011;Walls, 2000) and gender (Helwig, 1998;Stockard & McGee, 1990).Gottfredson (2002), has supported that there is a four-stage process, which includes different developmental achievements.In brief, children of 3-5 years start to understand what it means to be an adult, children of 6-8 years become oriented to sex roles and their thinking leads to gendered career interests, while children of 9-13 years are more oriented to social valuation, which leads to a preference of the level of work. The European Journal of Counselling Psychology ejcop.psychopen.eu| 2195-7614 After exiting elementary school, the young adolescents get more oriented to the internal self and utilize the knowledge of self to select from the careers that would interest them. Research (Van der Wilk & Oppenheimer, 1991) in children's development of career interests, so far supports Gottfredson's theory, since the age of 8 to 11 years seems to be a critical period for children's awareness about interests. 
All the aforementioned aspects, processes and phases of career development in childhood, are not only considered to be important in educational and career choices made by one later in his/her life (Porfeli et al., 2008), but also connected to school success, as well as to drop-out rates in high school years (Akos, Niles, Miller, & Erford, 2011;Kao & Tienda, 1998;Turner & Lapan, 2013).Therefore, the school career counselling interventions would be more beneficial for society, if they were carried out during childhood (Schultheiss, 2008), when the person develops their career interests and bounds them with the emerging identity, and surely before the conceptions of gender in the world of work become definite (Liben & Bigler, 2002). Despite the importance of the processes that take place throughout childhood, limited intervention practices exist concerning career development during the elementary years and especially those that connect school learning with what happens in work life (Rohlfing, Nota, Ferrari, Soresi, & Tracey, 2012).Most of the career practices concern visits to various vocational places, in order for the children to learn about the world of employment.In the USA, very few elementary schools provide developmental career guidance programs to students in accordance to the outlines set by the national unions and institutions (Kobylarz, Crow, & Ettinger, 2004).Most of these efforts are constrained by the lack of basic research of how children learn about the world of employment (Porfeli et al., 2008).Moreover, research that provides practitioners with information about the efficiency of career intervention practices remains scarce.An Australian study (Gillies, McMahon, & Carroll, 1998) showed that the career education program implemented in one school, had a positive effect on the children's job knowledge but failed to make any changes on the children's gender stereotypical perceptions of different jobs.Another study also revealed similar results (Killeen, Edwards, Barnes, & Watts, 1999). Considering the following numerous facts that: first, little yet is known about children's career development, second, there are contradictory findings concerning the differential role of age and gender in children's career development (Helwig, 1998;Primé, Nota, Ferrari, et al., 2010;Rohlfing et al., 2012) concerning whether the children's career knowledge differ across gender and age, and third, career counselling practice has not been, so far, officially applied to Greek elementary schools, we carried out an intervention study, aiming to evaluate a career counselling intervention program in the domain of 8 and 11 years old, Greek elementary students' career interests.The program attempted to enhance children's knowledge about certain jobs and their qualifications, and therefore enrich their career interests.Although it is widely accepted that the vocational self-concept occurs through interests among one's other attributes (Super, 1963) and that the establishment of interests occurs in childhood (Holland, 1997), very little empirical attention has been paid to them (Hartung et al., 2005). The study examined the following questions: • Are there any effects of the career counselling program on children's career interests?More specifically, are there any differences between the experimental and control groups, in relation to their career interests prior and after the intervention? • Are there any gender differences in children's career interests? 
• Are there any gender differences in children's career interests post intervention? • Are there any differences in children's career interests across age? • Are there any differences in children's career interest across age post intervention? Method The research was carried out by adopting the quasi-experimental methodology, based both on the experimental research design and the interpretative phenomenological analysis (IPA), a qualitative research approach that guides research design, data collection and analysis (Smith, Flowers, & Larkin, 2009).IPA concerns one's lived experience and refers to the everyday flow of unconscious experience, or the experience that has had a major significance to the participant.It can provide a relatively holistic understanding of participants' experiences and perceptions.IPA is also idiographic, in that it aims to gain an in-depth understanding of participants' experiences. Research based on IPA, is characterized by small, homogeneous samples that enable an in-depth understanding, in the present study, of how Greek elementary children experience their vocational interests. Participants The research took place in an elementary school located in the western part of Athens. The purpose of the intervention was to enhance the children's career knowledge in regard to the characteristics of the jobs discussed, in order to enrich their career interests.The intervention consisted of six separate sessions, twice a week, of 45 minutes each and it was implemented during the curriculum course.Its principles were consistent with relevant research (Harkins, 2000;Johnson, 2000), which has reported that effective career educational programs are the ones that connect learning with work Career Intervention Program, Elementary School children, the language used was uncomplicated and easy, so that they could understand and fully participate in the program.Each session was named after the career types of personality.In every lesson, a handout was given to each one of the students.Every one of the personality type handouts consisted of four occupations that were typical of the personality type (e.g. the artistic type included the jobs of graphic designer, painter, interior designer and writer; the realistic included desktop publisher, electrical engineer, veterinarian and roofer; the investigative included dietician, plant scientist, computer analyst and mapping technician; the social included fitness trainer, child care worker, medical assistant and school counselor; the enterprising included lawyer, travel agent, loan officer and telemarketer and the conventional included accountant, archeologist, receptionist, and teller).There was a brief job description and the relevant qualifications needed, accompanied by a small activity such as roleplaying (e.g."Pretend you are an interior designer.What changes would you make to your classroom?","Draw a sketch of how you would like it to look").The teacher and children read the texts and enrolled into the activities included.At the end of each lesson, there was a group game of pantomime, during which everyone tried to guess the job that a student attempted to demonstrate. Research Instruments The findings were collected, with the help of a semi-structured interview, as well as children's drawings of "someone who does a job". 
Drawing (Draw a Person Who Does a Job) The method used, applied the drawings as assessment instruments for the purposes of the present study.It was based on the literature related to the assessment of drawings, entitled "Draw a person" (Cox, 2005;Kounenou, 2007;Kroti & Mani, 2003;Machover, 1949;Malchiodi, 1998).According to the literature, drawings permit children to express inner emotions, thoughts, and representations of self, providing useful information for their self-concept and unconscious world.As Malchiodi (1998) has stated, the figure is the person him/herself and the paper is the person's environment.Drawing, as an assessment instrument, provides information about the unconscious inner world of the children, more than any other verbal activity (Diem-Wille, 2001) and is very familiar to young children. Besides, drawings can be used as a useful "ice-breaking" activity (Thomas & Jolley, 1998).According to the aforementioned relative literature, defense mechanisms become less activated when children are asked to draw a person on paper, while they become stronger, when they are asked to draw themselves.Therefore, when drawing is used for the assessment of the child's inner world, the guideline given to children is the following: "Please, draw a person on this paper".The assessment of the children's career interests, related to their career self-concept, for the present study, followed three of the criteria imposed by the literature in relation to the assessment of a child's inner world (Kounenou, 2007;Kroti & Mani, 2003;Machover, 1949;Malchiodi, 1998) The second author, who is a certified psychoanalytic psychotherapist, as well as a second certified psychologist in the assessment of projective drawing, analysed and transcribed the data, so that the final evaluation would be as objective and unbiased as possible.There were no differences between the two assessments. Semi-Structured Interview One-to-one semi-structured interviews, as well as projective techniques such as the drawing assessment, are preferred in IPA for data collection because they elicit detailed thoughts and feelings from participants (Smith et al., 2009).Therefore, a semi-structured interview schedule was conducted, based on relative career theory and research related to children's career knowledge and interests (e.g., Holland, 1997;Porfeli, 2008;Porfeli & Lee, 2012;Tyler, 1964).The semi-structured interview contained 5 open-ended questions.The questions for the interview concerned children's drawings and were the following: 1. What job does this person do? 2. Does he/she like the job he/she is doing? 3. Do you like that job? 4. What do you want to be when you grow up? 5. What is the job you do not want to do? Apart from the first question, which was used as an introductory one to the interview, the other four were selected in order to explore children's interest in relation to jobs (Crites, 1969;Germeijs, Verschueren, & Soenens, 2006;Patton & Porfeli, 2007;Porfeli & Lee, 2012).Children's responses to the Questions 1, 4, and 5 were coded, according to the six personality types (Holland, 1997) and the responses to Questions 2 and 3 were coded as yes or no.The standardized open-ended interviews were conducted in the same order with all participants and they had been decided beforehand, enhancing the comparability of the responses (Cohen, Manion, & Morrison, 2007). 
Prior to the interviews, short demographic questionnaires were completed by the children, with the help of the teacher, in order to obtain background information such as family structure, parental educational profile and job. Procedure The study was carried out from January to June 2014.Children were allowed to participate after the acquisition of their parents' informed consent.The head teacher was informed about the aims and objectives of our research. Parents were also informed about the study that was going to take place in the school and they all provided their informed consent.Before we began our research, we also asked for the children's permission because as Ford, Sankey, and Crisp ( 2007) have stated, it is important that children agree to take part.The children were willing to participate. One A4 sheet of white paper was given to each child along with a plain pencil.Children of all groups were asked to "draw a person who does a job".All children of all groups were subsequently interviewed individually on the predetermined questions.All interviews were recorded.Only children in the experimental groups received the intervention program.Children in the control group did not receive any special treatment and career information nor they did get involved in any career education lesson activities, but they were involved in regular curricula activities.Both groups' activities were conducted in the classroom by their class teacher.Subsequent to the intervention, children of all groups were asked to draw a person doing a job and the same interview questions. Analysis of Interviews The data processing was performed with the Statistical Package for Social Sciences (SPSS 17).Chi-square analysis was used to test whether there are differences between the control and the experimental groups, as the population distribution is not normal and the data are measured on nominal scales.The analysis between the groups, prior to intervention, showed no statistical significant differences in all of the responses to the five questions. The statistical analysis of the differences between the control and the experimental group, after intervention, revealed one statistical significant difference in the question "what do you want to be when you grow up?".It seems that the majority of students in the control group preferred the "realistic" type of jobs, while students in the experimental group preferred "realistic" and "investigative" types of jobs.Additionally, it came out that artistic and social types of jobs appeal more to students of the experimental group than in those of the control group (see Table 1).Regarding gender and age differences between experimental and control groups, prior and after intervention, no statistical significant differences were found. 
Whilst testing for gender differences in the total sample, a statistically significant difference was found between boys and girls in the question "what job does this person do", with boys giving more answers for "realistic", whereas girls gave more answers for "social" (see Table 2). Additionally, a statistically significant difference was found concerning the answers to the question "what do you want to be when you grow up" between boys and girls. The majority of boys showed a preference towards "realistic" types of professions and no one responded with the "social" types, whereas girls' answers were distributed between all categories (see Table 3). On the other hand, no statistically significant gender differences were found in the questions "do you like that job", "does he/she like that job", and "what is the job you don't want to do".

No statistically significant age differences were found, with the exception of the question "Do you like that job", before and after the intervention respectively, where younger students gave fewer or no negative answers in comparison to older ones (see Tables 4 and 5). Additionally, no statistical differences were found concerning other demographic characteristics of the sample, such as parental job. Only 10 out of 84 children stated that they had drawn a job similar to their parents', mainly the younger ones.

Analysis of Drawings

The analysis of the drawings with regard to the criteria specified prior to the study revealed some interesting differences between the experimental and the control groups. More specifically, in the experimental group of children aged 8, 16 children (10 boys, 6 girls) out of 19 changed the position of the drawings towards the centre of the paper after the intervention. The same children enlarged the figure, whereas 11 (8 boys, 3 girls) drew the human figure in more detail after intervention. In the control group none of the children changed either the position of their drawing on the paper or its size. However, 10 children out of 21 (5 boys, 5 girls) added more detail to the human figure. In the experimental group of children aged 11, 18 (10 boys and 8 girls) out of 24 changed the position of the drawing to the centre of the page, 16 children (10 boys and 6 girls) enlarged the size of the figure and 15 (10 boys, 5 girls) drew more details in their human figure after the intervention. Changes in details for the experimental groups meant that they drew the figure after the intervention with more accuracy in facial characteristics and attire, and made a more truthful sketch of the "job" that the person was doing (e.g., a teeth technician, after intervention, had all of his tools on a workbench and a particular emphasis was given to a set of false teeth that the technician was preparing for the patient; likewise, a gardener was wearing gloves and, by the use of a spade, was digging the soil around a plant). However, in the control group, only 4 children (3 boys, 1 girl) out of 20 moved their drawing to the centre of the page, whereas 6 (4 boys, 2 girls) drew the human figure in more detail. No change concerning the size of the figure was observed for either control group. As can be concluded from these findings, the boys in the sample of the present study showed more changes than girls, in both experimental age groups.
Discussion The present study attempted to evaluate the impact of a career counselling intervention program on Greek elementary students' career interests.The statistical analysis of the data, revealed an important result.More specifically, children of the experimental groups enriched their career interests concerning the question "What do you want to be when you grow up?".After the intervention, they added investigative type of jobs along with realistic. However, children of both control groups did not change their answers after the intervention.The evaluation of the drawings revealed differences between the control and experimental groups of the study.More children of the experimental groups, in comparison to those of the control ones, appeared with more changes in their drawings after the intervention.Their post-drawings revealed more self-confidence, higher self-esteem, and extraversion. Consequently, children pointed out, through their drawings, that they were more self-assured after the intervention, concerning the job they would like to do.According to the literature, the question "what do you want to be when you grow up?" reveals one's career commitment, which means the person's decision on a career and his/her identification with it (Porfeli, 2008;Porfeli & Lee, 2012).The aforementioned findings, both the statistical and the evaluative ones, support scholars' assumptions (Porfeli & Lee, 2012;Skorikov & Vondracek, 2007) that career intervention programs can benefit children's career development, if they are carried out early enough, before children establish a clear and realistic sense of self and become committed to a career decision, based on narrow alternatives. The present study failed to depict any changes in children's career interests in regard to gender, after intervention.More specifically, no statistical significant differences were found between boys and girls of the experimental and control groups, after the intervention.This result is consistent with the findings of the evaluation of other career intervention programs carried out by other researchers (Gillies, McMahon, & Carroll, 1998;Killeen, Edwards, Barnes, & Watts, 1999). However, the analysis of drawings revealed some interesting findings concerning the gender of the participants. Boys' drawings had more changes than girls', after intervention, while girls were more consistent to the represen-tation of the job they preferred.According to previous research, girls present higher levels in career maturity (Greenberger, Campbell, Sorensen, & O'Connor, 1971) and it has been shown that girls decide about their careers earlier than boys and they are more stable in their career choice (McMahon & Patton, 1997). 
In regards to gender differences among children of the total sample, boys preferred to draw realistic types of jobs, while girls preferred the social ones.Moreover, significant gender differences were revealed concerning children's career commitment to their interests.Boys pointed out realistic occupations but no social, contrary to girls, who pointed out all of the career types.Similar results were found in Lippa's (2002) meta-analysis, which demonstrated that boys had chosen realistic occupations, much more than girls and that girls had preferred more social and artistic occupations, compared to boys.Theory and research in regard to gender issues, have supported that children, by the fifth grade, have already entered a process of circumscription based on gender (Auger, Blackhurst, & Wahl, 2005;Gottfredson, 2002).It seems that the career commitment to their interests in our sample, are in accordance to the sex-typing biases of what is appropriate for women and men.It seems that a career program especially designed for potential changes in very young students' gendered stereotypical perceptions (at least in students of 6-7 years old according to Gottfredson's developmental stages) would be more appropriate and efficient for generating changes in the specific field. As far as age is concerned, younger students of the sample were found to be more positively oriented to the jobs that they had drawn, therefore more confident in their career interests (they liked the job that they had drawn) than older ones.This finding seems to be in accordance to scholars (Hartung et al., 2005;Van der Wilk & Oppenheimer, 1991) who have justified that the structure of career interests appears to change over time and develop from more concrete to more abstract.Eleven year old students move from powerful and imaginary heroes to more realistic, vocational interests and they get less excited about them.Taking into consideration Gottfredson's (2002) theory about children's career developmental changes, it seems that the aforementioned finding is in accordance with the assumption that younger children are more oriented to the external surrounding and as they get older, they get more oriented to the internal self and get involved in more deep processes based on the comparison of self to various jobs and career interests.The inner processes might prevent them from being enthusiastic and positive oriented to jobs, at least not at the same degree as when they were younger. 
As far as the analysis of children's drawings is concerned, older children had more changes in their drawings after the intervention, compared to younger ones. It seems that the older children of the present study had concentrated on depicting a certain occupation, more complex and rendered with more details. Nevertheless, this finding is difficult to discuss in relation to career theory and research, since research (Kindler & Darras, 1997) related to the development of children's drawing ability has addressed the fact that, as children grow older, they develop a range of tactics in pictorial representation, which they use according to the needs and purpose of their drawings and the context in which their work is produced. Children of 8 years old belong to a different pictorial group than children of 11 years old. Children of 11 years old, unlike those of 8 years old, possess the technical ability to draw details in order to attribute anything they can see in the "real world" by matching conception to production.

Contrary to the findings of a study that took place in Mainland China, in which children's career aspirations were very much influenced by their parents' jobs (Liu, McMahon, & Watson, 2015), the majority of the children who participated in our study did not mention an occupation that was identical or similar to that of their parents. It is possible that the low socio-economic status of the area or, in other words, the low socio-economic status of the parents, acted as a discouragement in choosing one of their mother's or father's occupations. As Smith (2014) has stated, environmental features, such as socio-economic status, affect individual preferences, as well as any interventions used to influence them.

Limitations

Although the study revealed some important results concerning the importance of career counselling intervention programs, a longer intervention period would be more appropriate for a more effective assessment. The small size of the sample did not permit further statistical analyses in order to search for various relationships among the different variables. The measurement methods used in this study, namely a semi-structured interview and analysis of drawings, are susceptible to subjective interpretation. In addition, the career counselling intervention program, which was developed in the USA and used in this study, imposes further limitations due to cultural differences. Consequently, the challenge is not only to develop and modify instruments and programs so that they are compatible with each country's cultural contexts, but also to develop career and educational regulations and resources that could be beneficial to local users (Leung, 2004). Therefore, a future intervention study, having considered all the above-mentioned methodological issues, could provide the literature with more generalizable evidence and conclusions.

Conclusions and Implications for Practice

The career intervention program used in this study managed to enrich elementary students' career interests and depict a statistically significant difference between the control and the experimental groups. The analysis of the drawings revealed several changes for the experimental group, in terms of self-confidence and extraversion, leading to conclusions about the beneficial role of the career counselling intervention program in children's career development and sense of self. However, the fact that the career intervention program did not manage to make any changes concerning the gender-based career interests leads to the conclusion that career counselling intervention practices should be introduced early enough in one's life and be especially focused on gender and social stereotype issues.
The intervention was delivered to the experimental groups by the regular class teacher, who was educated and supervised by the first author throughout the implementation of the intervention. The intervention sessions were based on the Oregon Career Aware handouts, developed by Oregon's Partnership for Occupational and Career Information (2012), and on Holland's RIASEC personality types. They were translated and adapted in order to match the Greek vocational reality. For younger

Drawing figures with details implies more knowledge on the discussed issues, feelings of security and adequate emotional maturity, while the absence of details implies emotional immaturity and feelings of shame. The criteria selected for the purposes of the present study are the ones that assess the child's self-confidence and self-esteem, allowing to test for changes in a child's attitude towards self and his/her perceptions/views.

Table 1. Results of Chi-square Test and Descriptive Statistics for the Question "What Do You Want to Be When You Grow Up?"
Table 2. Results of Chi-square Test and Descriptive Statistics for the Question "What Job Does This Person Do?" and Sex
Table 3. Results of Chi-square Test and Descriptive Statistics for the Question "What Do You Want to Be When You Grow Up?" and Sex - Before Intervention
Table 4. Results of Chi-square Test and Descriptive Statistics for the Question "Do You Like That Job?" and Age - Before Intervention
Table 5. Results of Chi-square Test and Descriptive Statistics for the Question "Do You Like That Job?" and Age - After Intervention
2019-05-10T13:07:44.229Z
2016-12-23T00:00:00.000
{ "year": 2016, "sha1": "b403b638f0ed6213dc4da1e02fcef537c1467525", "oa_license": "CCBY", "oa_url": "https://doi.org/10.5964/ejcop.v5i1.83", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "7dbe76b3d7e52531beda09ed99223273456f373a", "s2fieldsofstudy": [ "Education", "Psychology" ], "extfieldsofstudy": [ "Psychology" ] }
228911260
pes2o/s2orc
v3-fos-license
Improving the Anaerobic Digestion of Wine-Industry Liquid Wastes: Treatment by Electro-Oxidation and Use of Biochar as an Additive : Wine lees have a great potential to obtain clean energy in the form of biogas through anaerobic digestion due to their high organic load. However, wine lees are a complex substrate and may likely give rise to instabilities leading to failure of the biological process. This work analysed the digestion of wine lees using two di ff erent approaches. First, electro-oxidation was applied as pre-treatment using boron-doped diamond-based electrodes. The voltage was 25 V and di ff erent treatment times were tested (ranging from 0.08 to 1.5 h) at 25 ◦ C. Anaerobic digestion of wine lees was evaluated in batch tests to investigate the e ff ect of electro-oxidation on biogas yield. Electro-oxidation exhibited a significant positive e ff ect on biogas production increasing its value up to 330 L kg − 1 of volatile solids after 1.5 h of treatment, compared to 180 L kg − 1 of volatile solids measured from raw wine lees. As a second approach, the addition of biochar to the anaerobic digestion of wine lees was investigated; in the experimental conditions considered in the present study, the addition of biochar did not show any positive e ff ect on anaerobic digestion performance. Introduction Nowadays, Italy, France and Spain are the three top wine producers worldwide, with 14.8 million m 3 of wine produced in 2018 [1]. The winemaking industry generates large quantities of solid waste and wastewater. Solid residues derived from this industry include stalks from destemming, grape marcs (or pomace) from pressing, and lees from the settling of leftover products from the fermentation and dead yeast cells on the bottom of the vessel. The transformation of 1000 kg of processed grape produces 0.75 m 3 of wine and also 130 kg of marc, 60 kg of lees, 30 kg of stalks, and 1.65 m 3 of wastewater [2]. Winery wastes may be used to extract valuable chemicals (e.g., phenols, antioxidants, tartaric acid, lignocellulose), as feedstock for the production of bioenergy by means of thermo-chemical and biological processes, and for agricultural and environmental applications (composting, animal feed, biosorbents) [3][4][5]. In Europe, wine wastes as pomace and lees were commonly used in distilleries according to EC 1493/1999. Nowadays, European Council Regulations 479/2008/EC and 555/2008/EC have permitted many of the aforementioned applications. Anaerobic digestion (AD) is an established technology able to convert biowaste into biogas, a renewable energy source, and digestate, a potential soil improver. However, AD in many circumstances is unable to be cost-competitive and still presents several challenges that need to be overcome [6]; such as limitations due to the hydrolysis of complex compounds, the presence of inhibitory substances and the accumulation of recalcitrant components in the fermentation medium [7]. Other problems also include proper management of digestate and high costs associated with biogas cleaning and upgrading. Different strategies have been reported in the literature to counteract these drawbacks, such as the optimization of working parameters, co-digestion, and the application of pre-treatments or additives. AD has been widely studied for the treatment of winery wastewater [8] and wine solid residues as seen in Table 1. 
Batch digestion tests under mesophilic conditions (35-40 • C) have been traditionally evaluated reporting a wide variety of methane yields, ranging from 0.1 to 0.5 m 3 kg −1 of volatile solids (VS). Most studies concern the digestion of grape pomace, and few refer to wine lees (WL). Jasko et al. [9] reported specific biogas production (SBP) values ranging from 0.254 to 0.856 m 3 kg −1 VS (55-60% methane) when co-digesting WL at a percentage of 10-20%. Da Ros et al. [10] observed a specific methane production (SMP) of 0.370 m 3 CH 4 kg −1 VS from WL in AD at 55 • C. WL, consisting of dead yeast cells, tartrates, proteins, polysaccharides, and other substances produced and settled during the fermentation [18], are without doubt a complex substrate for AD and may likely create process instability. The high COD and low pH values, associated with the high concentration of organic acids and polyphenols, may seriously affect methane production. For this reason, some studies investigated the co-digestion of WL with waste-activated sludge [11] or their co-digestion with wine wastewater sludge [12] and other substrates [19]. Although AD has been proved to be an effective treatment option for the valorisation of winery wastes, to the authors' knowledge only few studies have focused specifically on WL (see Table 1), therefore they can be regarded as an undervalued residue. There is still a need to find an efficient method to counteract the toxicity of some of the recalcitrant compounds present in WL to achieve complete valorisation of this waste. Electrochemical oxidation, also known as electro-oxidation (EO), treatments experienced an increasing interest in the research community because this technology is able to reduce the content of refractory substances present in biowastes. EO is an attractive technology due to its ability to treat (under moderate conditions, ambient temperature and pressure) toxic and/or complex organic pollutants present in industrial or domestic wastewaters. A crucial advantage of EO is attaining the oxidation of organic substances without the need for adding chemicals, leaving therefore no toxic residues in the effluent stream. Electro-oxidation of organic substances occurs in association with the transfer of oxygen from water to the reaction products. Water is the source of oxygen atoms for complete oxidation. The electrode produces hydroxyl radicals (Equation (1) and Figure 1), which are the intermediaries for the oxygen evolution reaction (Equation (2)) and are also responsible for the oxidation of the organic matter by means of the interaction between organic pollutants and hydroxyl radicals (Equation (3)). These interactions are strongly linked with the anode surface [20,21]. Organic compounds aq + Electrode(·OH) a 2 → Electrode + Oxidised products + a 2 H + + e − (3) Boron-doped diamond (BDD) electrodes have shown interesting properties due to their capability to non-specifically oxidize AD inhibitors [21]. BDD electrodes oxidize the organic matter through the hydroxyl radical (OH•), a powerful oxidizing agent. BDD electrodes have good chemical stability, a wide potential window and high electrical conductivity, making them suitable to work with high organic loads and complex substrates [22]. BDD electrodes are commonly used for electrochemical wastewater treatment because of their efficient ability in total organic carbon (TOC) removal [21]. 
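For reference, the hydroxyl-radical pathway invoked above (Equations (1)-(3)) is commonly written as follows for non-active anodes such as BDD, with M denoting the electrode surface, R a generic organic compound and a a stoichiometric coefficient; this is given here only as a cleaned-up rendering of the standard scheme the text appears to describe, not as a quotation of the original equations:

\begin{align}
\mathrm{M} + \mathrm{H_2O} &\rightarrow \mathrm{M({}^{\bullet}OH)} + \mathrm{H^+} + e^- \\
\mathrm{M({}^{\bullet}OH)} &\rightarrow \mathrm{M} + \tfrac{1}{2}\,\mathrm{O_2} + \mathrm{H^+} + e^- \\
\mathrm{R_{(aq)}} + \tfrac{a}{2}\,\mathrm{M({}^{\bullet}OH)} &\rightarrow \mathrm{M} + \text{oxidised products} + \tfrac{a}{2}\,\mathrm{H^+} + \tfrac{a}{2}\,e^-
\end{align}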
Recent studies did not show clear degradation pathways for macromolecules; generally, attention has been paid to organic matter reduction (total organic carbon, chemical oxygen demand) or to specific changes in the concentration of particular compounds, such as polyphenols, alcohols or melanoidins [23][24][25]. Nevertheless, several studies have focused mainly on the degradation pathway of a single molecule: Cañizares et al. [26] investigated the electrochemical oxidation of phenol, found that oxalic acid was the main intermediate in the oxidation of aromatic compounds, and suggested that the aromatic ring cleavage is faster than the oxidation of carboxylic acids. The studies of Nasr et al. [27] with hydroquinone detected no aromatic intermediates, even with small quantities of charge applied. EO has been studied as a pre-treatment of AD with the aim of accelerating the rate-limiting step of hydrolysis when recalcitrant substrates are to be treated [28] or of reducing the toxicity of some organic compounds that may be present in winery waste.

On the other hand, biochar (BC) is the solid residue derived from the thermo-chemical processing of biomass in the absence of air or with limited air. Recently, the suitability of using BC as an additive in AD has attracted growing attention in the scientific community [29][30][31]. Among the different additives available, BC is cost-effective, and in recent years the use of carbon conductive materials in AD processes has proven to be an interesting way of improving performance without greatly affecting the energy demand of the process [32]. To date, many studies have verified the role of BC in increasing methane production from different substrates, suggesting different mechanisms of interaction with the anaerobic microflora: serving as support media for the immobilisation of biomass, promoting syntrophic metabolisms between microbial populations, increasing the buffer capacity of the digestion system and mitigating potential inhibitors [33,34].

The main objective of this study was to investigate the improvement of AD of WL by two different approaches: one was the application of EO as pre-treatment and the other was the use of BC as an additive. EO performance was assessed through the measurement of colour removal and of the decrease in the concentration of organic substances. Although the anaerobic digestion of WL has already been studied, the novelty of this work concerns two main issues: (1) the comparison of the coupled processes (AD-EO and AD-BC) as alternatives, and (2) mild pre-treatment process conditions, such as low treatment time (from 0.08 to 1.5 h) and low current density (from 11 to 18 mA cm−2), in order to identify more efficient and more economic conditions for WL treatment compared to AD alone.
AD performance was evaluated by means of specific methane production. In detail, the present study aimed to address Energies 2020, 13, 5971 5 of 17 the following research questions (RQ): (RQ1) Is AD appropriate for the valorisation of WL from a technical perspective, considering potential inhibitors? (RQ2) Could EO through BDD anodes act as an effective pre-treatment to improve the performance of AD of WL? (RQ3) Can BC be a valuable additive to improve the performance of AD of WL? (RQ4) Is the effect of BC associated with its physical properties rather than with the electro-chemical ones? Inoculum and Substrate Characterisation Anaerobic digestion tests were performed at the University of Leon, Spain and at the Politecnico di Torino, Italy ( Table 2). The inoculum was obtained from mesophilic digesters at wastewater treatment plants (WWTPs) in León (Spain) and Biella (Italy). Inoculum from Spain had a content of 21.1 ± 0.05 g L −1 total solids (TS) and 67.3% of volatile solid (VS) (expressed as % TS). The inoculum used in Italy had a solid content of 31.1 g L −1 TS and 43.1% VS (expressed as % TS). Wine Lees (WL) were sampled in July 2019 from a winemaking company located in the Langhe region (Piedmont, Italy) dedicated to Barbaresco wine production. Electro-Oxidation (EO) Pre-Treatment EO of WL was carried out under batch conditions using a 70 mL cell containing two boron-doped diamond (BDD) electrodes (Pro Aqua Diamond Electrodes, Niklasdorf, Austria) used as anode and cathode. The BDD electrodes had 42 cm −2 effective surfaces and were placed at a 5 mm distance ( Figure 1). The temperature of pre-treatments was 25 ± 1 • C, and a voltage of 25 V was applied for durations between 0.08 and 1.5 h (equivalent to current densities from 11 to 18 mA cm −2 ). The experimental conditions of EO tests were chosen according to previous experiments (data not shown). The current efficiency (C E ), expressed as a percentage (%), was calculated from the measured total organic carbon (TOC) values and using the equation proposed by Bensalah et al. [35]: where TOC t and TOC t+∆t are the experimental values measured for the wine lees at times t and t + ∆t (mg L −1 ), ∆t is the electrooxidation time (s), I is the current (A), F is the Faraday constant (96,485 c mol −1 ) and V is the volume of the solution (L). Electro-oxidation experiments were labelled according to the sample (wine lees (WL) for control sample) and for treated samples, adding the time of electrooxidation applied. The treatment time was in the range from 0.08 to 1.5 h (5 to 90 min). Therefore, assay denoted as WL_0.08 h, refers to wine lees treated for 0.08 h. Anaerobic digestion test digesters were labelled according to this same nomenclature. Biochar and Pumice Stone as Additives Biochar (BC) derived from softwood pellets (SWP550) obtained in a pilot-scale rotary kiln pyrolysis unit at 550 • C in the UK Biochar Research Centre (Edinburgh, UK) was employed in this study (Table 3). Aside from physical properties, BC has also chemical properties. Therefore, aiming at investigating the net chemical contribution of BC, a chemically inert material (granular pumice stone, PS, purchased from Bonsai Shopping) was also used in the AD tests. Both materials were manually powdered in an agate mortar and added separately to the digesters in a concentration of 10 g L −1 as described by Martínez et al. [29] and Gómez et al. [36]. Powdered pumice stone had total surface area equal to 6.29 m 2 g −1 . 
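As a rough, back-of-the-envelope check of the electrical input involved in the EO pre-treatment described above (these figures are estimates, not measurements from the study), taking the 42 cm² electrodes, the 70 mL cell, 25 V and the upper current density of 18 mA cm−2 as constant over the whole treatment gives an upper bound for the specific charge and energy per litre of wine lees:

# upper-bound estimate of specific charge and electrical energy of the EO pre-treatment
area_cm2, j_mA_cm2, voltage_V, volume_L = 42.0, 18.0, 25.0, 0.070  # values quoted in the text
current_A = j_mA_cm2 * area_cm2 / 1000.0                           # ~0.76 A at the upper current density
for hours in (0.08, 0.5, 1.0, 1.5):
    charge_Ah_per_L = current_A * hours / volume_L                 # specific electrical charge
    energy_kWh_per_m3 = voltage_V * charge_Ah_per_L                # 1 Wh/L equals 1 kWh/m3
    print(f"{hours:4.2f} h: {charge_Ah_per_L:5.1f} Ah/L, {energy_kWh_per_m3:5.0f} kWh/m3")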
Digestion reactors were labelled as follows: WL for raw substrate, Bio_WL for wine lees supplemented with 10 g L −1 of biochar and PS_WL for wine lees supplemented with 10 g L −1 of pumice stone. Anaerobic Digestion (AD) Tests AD tests were carried out in Spain (August 2019) and in Italy (November 2019), adopting batch mode at 37 • C and Duran glass bottles or Erlenmeyer flasks (250 mL working volume) connected to 2 L inert-foil gas-bags (Supelco, Bellefonte PA, USA). Each reactor was filled with inoculum and substrate at a 1:1 ratio (expressed as volatile solids, VS) and 10 g L −1 BC. NaHCO 3 was added to adjust pH at 7.5. The temperature was controlled by a water bath set at 37 ± 1 • C and mixing was provided by means of magnetic stirring (RO15, IKA, Staufen, Germany). For every set of reactor, three bottles were destined for biogas measurement by water displacement with a Drechsel bottle containing a salt saturated, 5% sulfuric acid solution with methyl orange [37,38], and three bottles were destined for methane measurement. Biogas and methane monitoring happened differently in Spain and in Italy. In Spain, biogas was characterized through gas chromatography (GC-FID) (see Section 2.5); in Italy, each reactor for methane measurement was connected to a 100 mL glass bottle containing an alkaline washing solution (3N NaOH) and the outgoing gas flow was measured by water displacement. In all tests, biogas and methane production from the inoculum were evaluated along the tests. Biogas and methane volumes were measured every 2-7 days and corrected to standard temperature and pressure (STP, 0 • C and 100 kPa). Net values of biogas and methane production were calculated subtracting the contributes of the inoculum. The performance and kinetics of anaerobic digestion in batch mode can be examined by many models, based on the assumption that bacterial growth is proportional to the production of biogas. In this study, the cumulative biogas production curves were fitted by the modified Gompertz model according to the equation: where B(t) is the cumulative biogas production (Nm 3 kgVS −1 ) at time t (day); P is the biogas potential of the substrate (Nm 3 kg −1 VS); R max is the maximum biogas production rate (Nm 3 kg −1 VS d −1 ); λ is the lag phase (day); e is the base of the natural logarithm. The goodness of fit of the functions was estimated by two parameters, the coefficient of determination (R 2 ) and the standard error of the estimates (SEE) [39], defined respectively in Equations (6) and (7): where Y p and Y o are the predicted and experimental data, respectively; Y is the arithmetic mean of the experimental data; n and m are respectively the numbers of experimental values and parameters. The SEE is the standard deviation of the residual values, i.e., the difference between the experimental and predicted values. Analytical Techniques and Procedures Total solids (TS), volatile solids (VS), pH, conductivity, nitrogen, ammonium and organic matter contents were measured according to the American Public Health Association Standard Methods [40]. In this study, volatile solid values were used to estimate the cumulative biogas potential. Different authors reported that the method of oven drying gives inaccurate values of volatile solids when the sample contains high values of volatile fatty acids [41]. 
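The modified Gompertz fit and the goodness-of-fit statistics described above can also be reproduced with a non-linear least-squares routine outside a spreadsheet. The sketch below assumes the standard three-parameter form of the modified Gompertz equation, B(t) = P exp{-exp[R_max e/P (λ - t) + 1]}, which is presumably the form intended by Equation (5), and uses made-up cumulative biogas data purely for illustration.

import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, P, Rmax, lam):
    # modified Gompertz curve: cumulative biogas production (Nm3 kg-1 VS) versus time (days)
    return P * np.exp(-np.exp(Rmax * np.e / P * (lam - t) + 1.0))

# illustrative cumulative biogas data (not measurements from this study)
t = np.array([0, 2, 4, 7, 10, 14, 18, 22, 27, 32], dtype=float)            # days
B = np.array([0, 0.01, 0.04, 0.10, 0.17, 0.24, 0.28, 0.31, 0.32, 0.33])    # Nm3 kg-1 VS

popt, _ = curve_fit(gompertz, t, B, p0=[B.max(), 0.02, 1.0], bounds=(0, np.inf))
P, Rmax, lam = popt
Bp = gompertz(t, *popt)

# goodness of fit: coefficient of determination (R2) and standard error of the estimate (SEE)
n, m = len(B), len(popt)
R2 = 1.0 - np.sum((B - Bp) ** 2) / np.sum((B - np.mean(B)) ** 2)
SEE = np.sqrt(np.sum((B - Bp) ** 2) / (n - m))
print(f"P = {P:.3f} Nm3/kgVS, Rmax = {Rmax:.4f} Nm3/kgVS/d, lag = {lam:.2f} d, R2 = {R2:.3f}, SEE = {SEE:.4f}")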
Deviations in VS values caused by the volatilisation of fatty acids during these measurements were corrected by adding the total volatile fatty acid concentration to the volatile solids value measured by the oven-dry method; thus the methane potential considered the VS obtained from oven-dry measurements plus those associated with the presence of volatile fatty acids (VFAs) in the sample.

Sensitivity Analysis

All analyses were carried out in triplicate and average values were reported along with standard deviations. Statistical tests of the experimental data were carried out using the data analysis extension of Microsoft Excel 2016. The kinetic parameters were estimated using non-linear regression analysis by means of the SOLVER function of Microsoft Excel 2016. This function fits experimental data with the method of least squares.

Performance of EO Pre-Treatments

The results of the EO pre-treatment tests (Table 4 and Figure 2a-d) showed a rapid decrease in the content of the different parameters as the EO duration increased, with the exception of the IC parameter (which was unaltered) and the VFAs, particularly the acetic acid concentration, which increased. For all cases studied, TOC and COD decreased progressively under prolonged EO time, achieving 20% organic material removal (Figure 2a). Nevertheless, these results were lower compared with other studies where EO was evaluated as a pre-treatment in the degradation of WL [25,45].
The characteristic dark colour of WL (Figure 2b) is due to the presence of melanoidins, which are recalcitrant pigments produced by the Maillard reaction between amino and carbonyl groups present in the organic matter [44,46]. The EO pre-treatment caused a decolourisation of the sample, which was also associated with the decrease in organic carbon. The concentration of ethanol and polyphenols decreased when increasing the duration of EO treatment (see Figure 2c). The ethanol concentration experienced a linear decrease with the increment in the treatment time (39 − 10.1 × t; R2 = 0.814), while in the case of polyphenols the behaviour is better described by a logarithmic approximation (23.6 − 5.13 × Ln(t); R2 = 0.989), indicating a marked effect at the initial times. The WL analysed in this study has a high content of polyphenols that can hinder AD. The EO achieved a reduction between 25% and 60% in the elimination of these polyphenols (Figure 2d), which seems appropriate as a pre-treatment option when considering the subsequent AD of WL. The experiments of Candia-Onfray et al. [47], treating winery wastewater using BDD electrodes, reached almost complete mineralisation of organic matter when applying 20, 40 and 60 mA cm−2 to 0.1 L of aqueous solutions containing 3490 mg L−1 of COD. In the present study, the current density was lower (18.8 mA cm−2) and the WL sample was characterised by a higher content of organic matter and lower conductivity (see Table 2), which reduced the EO performance when using the BDD cell.

The aforementioned increase in VFA and acetic acid contents (Table 4 and Figure 3) could be explained by assuming that EO promotes the conversion of complex organic molecules into simpler species. Three types of short-chain VFAs were measured (acetic, propionic and butyric acids), with the highest concentration corresponding to acetic acid. This acid is a type of substrate that can be directly assimilated and transformed into methane through methanogenesis; thus the observed enhancement of its production could be directly associated with an expected increase in methane production due to EO. Nevertheless, the content of acetic acid must remain below the values capable of causing inhibition of the digestion process, because the activity of methanogens can be reduced by VFA accumulation [48]. In most cases, moderate inhibition was reported at acetic acid concentrations of 900-1800 mg L−1 for initial propionic acid concentrations in the range between 740-1850 mg L−1 [49]. The current efficiency (CE) values (Figure 3b) were approximately 30%.
The pronounced increase of the CE values with the working time indicated a better removal of organic matter, which was related to the application of a higher current density when testing values from 11 to 18 mA cm−2. The current density is directly associated with the number of hydroxyl radicals generated (capable of reacting with organic matter). These parameters, along with the applied treatment time, are well-known factors to improve EO.

Effect of EO Pre-Treatments on AD

The results of the AD tests (Figure 4) demonstrated a positive effect of EO on biogas, considering the final gas volumes and production rates. The final biogas volume measured for WL (around 180 L kg−1 VS) was not far from literature values [9]. EO improved the final biogas volumes after 1 h of EO, up to about 330 L kg−1 VS after 1.5 h of EO. The average methane content in all digesters was 58%.

The transformations involving the organic matter during the EO process allowed an effective assimilation of the substrate by the microorganisms. The results regarding the reduction or removal of the different parameters may not be considered adequate in terms of the individual performance of the EO process (Figure 2; Table 4). However, the elimination or decrease of complex substances such as melanoidins, polyphenols and alcohols is adequate to allow further degradation of WL in a subsequent stage of AD. Nevertheless, methane production was unstable in all reactors, particularly at the beginning of AD. A lag phase (Table 5) that varied from 0.033 to 7.38 days was observed in the pre-treated WL samples, which could be explained considering the high concentration of VFAs and polyphenols in WL (Table 2). The EO pre-treatment caused a partial oxidation of the organic material, and the complex molecules present in the sample (melanoidins, polyphenols and higher organic acids) were oxidized and transformed into simpler molecules such as acetic acid when the pre-treatment was prolonged. The EO treatment was more effective for the longest pre-treatment time applied (see Figures 2 and 3 and Table 4).
Nevertheless, acidic media (high concentration of VFA), can cause a slow start for the anaerobic digestion, a higher concentration of VFA (acetic, Figure 3) in samples corresponded to extended lag phases. VFAs are inhibitors of the AD process when their concentrations exceed 50-250 mg L −1 [50]. Furthermore, the form in which the acid is present in the solution (as dissociated or undissociated form, which in turn depends on the pH of the solution) is a key factor in causing the inhibition phenomena. Undissociated VFAs can freely enter the cytoplasm through the membrane and be metabolised, whereas the cell membrane remains impermeable to the dissociated ion [51]. Although in this study the AD tests started with a relevant concentration of VFAs (Tables 2 and 4), having an important impact over the lag phase in the pre-treated samples, pH was adjusted to 7.5 (Section 2.4) and no inhibition was observed in methane production after EO pre-treatments (Figure 4). Different studies found that the presence of phenolics substances in wine production liquid wastes from 60 to 667 mg L −1 may result in low biogas production and instability [52][53][54]. The concentrations of polyphenols in the digestion liqueur obtained in this study were 58 to 20 mg L −1 of gallic acid lower to the previously mentioned thresholds. The values of VFA obtained from the different digestion tests would cause acidification of the medium (expecting a pH < 5) and as consequence, the methanogens would have been inhibited. However, the initial addition of sodium bicarbonate neutralised the acid nature of the medium [55] and kept pH values between 6.8 to 7.5 allowing the process to continue. Effect of Biochar Addition on AD The results of AD tests performed adding biochar ( Figure 5 and Table 6) were as follows. The cumulative specific biogas production from WL was analogous to the results previously obtained ( Figure 4). No methane was measured during AD tests supplemented with BC or PS. A one-way analysis of variance (ANOVA) demonstrated that there was not any significant difference between the specific biogas productions of WL in the presence of BC and PS and WL alone (F (2,6) = 0.146, p = 0.87). It was observed that for recalcitrant materials and other complex organic wastes, as it could be the case of WL, an extra disintegration process becomes necessary before AD, either as a pre-treatment or during fermentation. Considering the mass balance of TS and VS (Figure 6a), the solids removal during AD seems scarce, and the amount of total VFAs (Figure 6c) far above the critical values cited above [50]. Energies 2020, 13, x FOR PEER REVIEW 14 of 19 Figure 5. Cumulative specific biogas potential measured for wine lees with biochar (Bio_WL) and pumice stone (PS_WL) and comparison with wine lees (WL). Modified Gompertz Wine Lees Biochar Pumice Stone Figure 5. Cumulative specific biogas potential measured for wine lees with biochar (Bio_WL) and pumice stone (PS_WL) and comparison with wine lees (WL). (hp1)The fermentation of a highly biodegradable substrate rich in inhibitors promoted the production of other inhibitors during the early stages of the AD process ( Figure 3 and Table 4), resulting in low biogas production. (hp2)A lower (than 1:1 value adopted in this study) substrate to inoculum ratio could eventually improve the stability of the system, specifically diluting inhibitory substances and adding a larger amount of active biomass. 
(hp3)Despite BC addition, which was verified to be effective in the adsorption of inhibitors, the specific biogas production obtained in the first 4 days was lower for WL in the presence of BC than either for WL with PS addition or WL as sole component (Table 5). This may indicate that the addition of BC resulted in an initial decrease in biogas production, possibly because of CO 2 adsorption, and an increase in the lag phase of one order of magnitude. (hp4)Comparing pH values before and after AD tests (Figure 6b), it was observed that any pH drop occurred during the preliminary phases of the AD process was associated with an initial increase of an efficient buffering effect due to the alkalinity added to the system before the tests. (hp5)Specific biogas production measured from WL and PS supplemented digesters seemed analogous to the trend observed for WL alone (Table 6). PS was used as inert support for microbial attachment and growth (see Section 2.3). In this work, the presence of PS did not seem to play any positive role at the dose evaluated. (hp6)The adsorption of BC was not selective and it is possible that nutrients and useful metabolites such as nitrogen source were also adsorbed. The addition of BC to the AD of WL presented an analogous trend compared to WL alone. However, the increment in VFAs (Figure 6c) may indicate that degradation was slightly enhanced, but this in turn led to higher concentration of VFA. VFAs, in addition to inhibitors already present in WL, may be responsible for the delay in biogas production observed in this work. It should be also mentioned that WL considered in this work was sampled in July, about 10 months after grapes pressing and the beginning of fermentation processes happening in wine-making (e.g., readily biodegradable organic compounds were already fermented, leaving behind less biodegradable and more acidic compounds). Conclusions On the grounds of the results obtained from this work, some conclusions could be made in an attempt to answer the RQs mentioned in Section 1. Firstly, AD seemed an appropriate treatment process for the valorization of WL if the application of EO as pre-treatment is considered. Taking into account the complex nature of WL, a strong oxidation pre-treatment appears crucial to improve biogas production and enhance methanogenesis. Further research is necessary to investigate in detail the operating conditions to assure stable methane production and to evaluate energy needs associated with the coupled configuration regarding pre-treatment and subsequent digestion. Secondly, in the experimental conditions explored, BC did not exhibit any significant benefit, as it is usually reported in the literature. In fact, the mere effect of BC as physical support for biomass growth was not observed. WL are biodegradable, but with a high content of inhibitors and also strongly acidic. It is necessary to explore operative conditions that could prevent any overload of AD and effectively improve methanogenesis.
2020-11-19T09:17:55.123Z
2020-11-16T00:00:00.000
{ "year": 2020, "sha1": "859ed5b754946ddc51def9d2049ff4ad30385e51", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1996-1073/13/22/5971/pdf", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "4caa83b1a707eefa94092ba6525ca4f0d20d4dca", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Chemistry" ] }
119695151
pes2o/s2orc
v3-fos-license
The Role of Surface Plasmons in The Casimir Effect In this paper we study the role of surface plasmon modes in the Casimir effect. The Casimir energy can be written as a sum over the modes of a real cavity and one may identify two sorts of modes, two evanescent surface plasmon modes and propagative modes. As one of the surface plasmon modes becomes propagative for some choice of parameters we adopt an adiabatic mode definition where we follow this mode into the propagative sector and count it together with the surface plasmon contribution, calling this contribution"plasmonic". We evaluate analytically the contribution of the plasmonic modes to the Casimir energy. Surprisingly we find that this becomes repulsive for intermediate and large mirror separations. The contribution of surface plasmons to the Casimir energy plays a fundamental role not only at short but also at large distances. This suggests possibilities to taylor the Casimir force via a manipulation of the surface plasmons properties. In this paper we study the role of surface plasmon modes in the Casimir effect. The Casimir energy can be written as a sum over the modes of a real cavity and one may identify two sorts of modes, two evanescent surface plasmon modes and propagative modes. As one of the surface plasmon modes becomes propagative for some choice of parameters we adopt an adiabatic mode definition where we follow this mode into the propagative sector and count it together with the surface plasmon contribution, calling this contribution "plasmonic". We evaluate analytically the contribution of the plasmonic modes to the Casimir energy. Surprisingly we find that this becomes repulsive for intermediate and large mirror separations. The contribution of surface plasmons to the Casimir energy plays a fundamental role not only at short but also at large distances. This suggests possibilities to taylor the Casimir force via a manipulation of the surface plasmons properties. I. INTRODUCTION The Casimir force is the archetypal mechanical consequence of vacuum fluctuations in the quantized electromagnetic field. In its simplest form, it gives rise to the attraction of two planar mirrors placed in empty space at zero temperature [1]. The corresponding interaction energy E takes a universal form for perfect reflectors, where L is the distance between the mirrors, A their area, and and c the reduced Planck constant and the speed of light. We abbreviate ℵ = 180/π 3 ≈ 5.8052762. As usual in thermodynamics, a negative energy corresponds to a binding energy. The Casimir force was soon observed in different experiments which confirmed its existence [2,3,4]. In recent years, technological improvement allowed to reach a precision in the percent range, which makes an accurate comparison to theoretical predictions possible and has prompted a series of refined calculations [5,6]. Casimir's 1948 derivation of Eq.(1) is based on summing the zero-point energies 1 2 ω of the cavity eigenmodes, taking the difference for finite and infinite separation, and removing the divergences by inserting a high-energy cutoff. He considered an ideal setting with perfectly reflecting mirrors in vacuum. Experiments are however performed with real reflectors, typically metallic mirrors which are good reflectors only at frequencies below the plasma frequency (ω p /2π) or alternatively at wavelengths much larger than λ p = 2πc/ω p . 
It has been known since a long time that this has a significant effect on the force, in particular at mirror distances of the order of λ p or smaller [7,8,9], and precise investigations have been developed recently [5,10,11,12,13,14,15,16,17,18]. A system made from real material mirrors sustains electromagnetic modes which strongly differ with respect to the ideal case, in particular plasma oscillations and surface plasmons (sometimes called surface plasmon polaritons). These are collective electron density waves with energies ω p around ten electron volts (at typical metallic densities). These waves can be quantized and since ω p is larger than any experimentally relevant thermal energy, one can safely consider that bulk plasma modes are in the ground state [19]. This is not quite true for the surface plasmon modes that are confined to the surface of a metallic mirror. Their electronic excitation is accompanied by an electromagnetic field mode that is evanescent inside the cavity [20]. Surface plasmons play an important role in many fields of physics. Let us only mention the plasmon-assisted light transmission through metallic structures [21,22,23], or dispersion forces between electronic Wigner crystals that are relevant for biomolecular physics [24]. More generally, evanescent electromagnetic waves have a strong impact on the Casimir-Polder interaction between an atom and a surface as well as on the interaction between two surfaces at differences temperatures [25,26]. It is well known, indeed, that the Casimir effect, at short distances, is dominated by the coupling between the surface plasmons that propagate on two metallic mirrors. This has been pointed out in 1968 by Van Kampen and co-workers [27] who computed the Casimir energy for L λ p in terms of quasi-electrostatic (or non-retarded) field modes. In this limit the Casimir energy becomes [7,12,28] which is smaller than Eq.(1). Observe the different power law and the non-universal behavior as the result depends on the material parameter λ p . For metals used in modern experiments, λ p lies in the sub-micron range (107nm for Al and 137nm for Cu and Au). This short-distance regime has been studied in much detail since Van Kampen's paper, investigating, for example, materials with a nonlocal response [29,30]. As the mirror separation increases, retardation has to be taken into account, and Van Kampen's result calls for a generalization. This has been done by Schram in 1973 [31], improving on a previous paper by Gerlach [32]. Schram considered mirrors described by a non-dissipative dielectric function, found the electromagnetic modes vibrating between these mirrors, and got the Casimir energy by summing their zero point energies. Among these modes, we find the retarded version of van Kampen's surface plasmon modes. Schram did not analyze separately their contribution and focused on the total energy, using a calculation based on the argument principle. Summerside and Mahanty investigated the joint effect of retardation and nonlocality on the surface plasmon modes at short distances [30]. In this paper we investigate more closely the influence of surface plasmon modes on the Casimir energy, covering both the non-retarded and retarded domains. This permits to explore the experimentally relevant distance range around one micron where current precision experiments are performed. The plasmon modes are identified in a natural way in the sum over electromagnetic modes of the real cavity. 
We have shown previously that they have peculiar properties [33]: one of them is purely evanescent while the dispersion relation of the other one changes its character from evanescent to propagating inside the cavity (it crosses the light cone). In addition, the combined plasmonic contribution to the Casimir energy has the peculiarity to change sign as a function of distance L. Here, we derive and expand on these results in more detail and exhibit closed-form expressions valid at all distances. The main idea is to perform a re-parametrization of the dispersion relations that permit to evaluate analytically the relevant inte-grals. We recover van Kampen's result at short distances and discuss explicitly the asymptotic behaviour in the long distance domain where retardation plays an important role. This regime was not covered in a previous paper by one of us [34] that performs an analysis of surface plasmons in the shortdistance (non-retarded) regime. The analysis of the "photonic modes" (corresponding to waves that propagate in the cavity) will be the object of a following paper. For simplicity, we restrict here to zero temperature, the generalization to finite temperature being straightforward. The material is organized as follows. The basic method and the cavity modes are introduced in Sec. II. The dispersion relation of the plasmonic modes is analyzed in Sec. III and Appendix A, and their contribution to the Casimir energy given in Sec. III A. The Secs. III B and C discuss the short and large distance regimes. Our analysis concludes with a discussion of the sign of the Casimir interaction (Sec. III D) and of alternative splittings of the plasmonic dispersion relations that appeared recently in the literature (Sec. III E). II. CASIMIR INTERACTION AND REAL CAVITY MODES In 1973 Schram proved the following mathematical identity [31], exploiting the argument principle [5] The left-hand side has the same structure as Casimir's sum over zero point energies, but in this case the relevant modes are those of the real cavity. The notation [· · · ] L L→∞ signifies the difference of the expression in brackets for finite and infinite mirror distance L. The right-hand side is nothing but the Lifshitz formula for the Casimir energy. Let us recall that Lifshitz adopted in 1955 [7] a fairly different viewpoint and computed the force as the average of the Maxwell stress tensor inside the cavity. He considered the electromagnetic fields as being radiated by fluctuating sources in the medium composing the mirrors, similar to London's derivation of the Van der Waals force between atoms and molecules. The main point of Ref. [31] was to show that the Lifshitz approach yields the same result as the Casimir sum over zero-point energies, provided the mirrors are non-dissipative. This is the case we focus on here. The modes in Eq.(3) are labelled by their polarization µ = TE, TM and the wavevector k ≡ (k x , k y ) parallel to the mirrors; the perpendicular wavevector k z is defined in Eq.(6) below. The r µ k are the reflection amplitudes that we take the same for both mirrors. The mode frequencies ω µ n (k) are related to the zeros and the branch cuts of [31] We adopt here the Fresnel formulas for the reflection amplitudes that for the case of thick mirrors read [36] r where We choose signs for the square roots such that Re [κ i ] > 0 and Im [κ i ] < 0 in Im [ω] > 0. This analytical continuation entails that Eq.(4) has no solutions in the upper half plane. 
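The reflection amplitudes referred to above are not reproduced in the text; for thick, non-magnetic mirrors the standard Fresnel forms (presumably the ones intended by Eqs. (5) and (6), up to sign conventions) are

\begin{equation}
r^{TE}_{\mathbf{k}}[\omega] = \frac{\kappa - \kappa_m}{\kappa + \kappa_m},
\qquad
r^{TM}_{\mathbf{k}}[\omega] = \frac{\varepsilon[\omega]\,\kappa - \kappa_m}{\varepsilon[\omega]\,\kappa + \kappa_m},
\qquad
\kappa = \sqrt{\mathbf{k}^2 - \frac{\omega^2}{c^2}},
\quad
\kappa_m = \sqrt{\mathbf{k}^2 - \varepsilon[\omega]\,\frac{\omega^2}{c^2}},
\end{equation}

with the branch choices for the square roots as stated above.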
Finally, [ω] is the dielectric function; in the case of a metal the simplest description is given by the plasma model where ω p is the plasma frequency, a constant which can be related to the specific physical properties of the metal. Up to ω ∼ ω p the dielectric constant differs from unity so that the metal behaves different than the surrounding vacuum. For ω ω p the dielectric constant approaches unity and the metal becomes transparent. This is the way the plasma model implements the high-frequency cutoff for the mirror reflectivity. In this model we neglect all the dissipation phenomena and we impose a local response to the electromagnetic field [36]. From a physical point of view, it is a poor approximation to real metals at low frequencies (dissipation and non-locality, i.e., the anomalous skin effect are predominant) and high frequencies (absorption from intraband transitions). But at any rate, its mathematical simplicity allows explicit calculations to be pushed very far and to understand important physical behaviors. We are going to see that our principal result correspond to a frequency range high enough for the plasma model to be a good description of the metal. Let us stress, however, that the choice of the plasma model is the strongest approximation we make and, within this model, all results we discuss are exact. The introduction of the dielectric properties of the mirrors leads to a series of important modifications for the field modes. First of all, even in the simplest case, the plasma model, the dispersion relations ω µ n (k) cannot be written in terms of elementary functions. The results of a numerical calculation are shown in figs. 1 and 2 (see details below). As we can see, imperfect reflection modifies the dispersion relation (solid lines) compared to a perfect reflector (dashed lines). We can distinguish three regions starting from above: Bulk modes occur for ω > ω B (k) = (ω 2 p + c 2 |k| 2 ) 1/2 (shaded above the thick line); they propagate both in the cavity and inside the mirrors. These modes form a continuum that is mathematically represented by a branch cut of Eq.(4) in the complex ω-plane. This has to be taken into account carefully when applying the argument theorem [37]. The associated difficulties have led Schram to work instead with a mirror of finite thickness d where the continuum discretizes [31]. For simplicity, we take here the limit of thick mirrors. Propagating (ordinary) cavity modes: they occur in the region above the light cone and below the bulk continuum, c |k| < ω < ω B (k). These modes are guided between the mirrors (note that the latter behave like a medium optically thinner than vacuum, 0 < [ω] < 1), leading to a discrete set of mode frequencies for a given k. In this region, the reflection coefficients (5) have unit modulus and a frequency-dependent phase. This leads to a shift of the cavity modes relative to perfectly reflecting mirrors, as is visible in Figs.1,2. Evanescent modes lie below the light cone, ω < c |k| (shaded below the diagonal), and are the main focus of this paper. Their electromagnetic field exponentially decreases when going away from the vacuum-mirror interface, while it is allowed to propagate along the interface. Evanescent fields are of great interest in near field optics because they provide the link to sub-wavelength topographic features of a surface. In the context of the Casimir interaction, they are often underes-timated due to their damped nature. 
We show here, however, that their contribution is all but a small correction, even at large distances [33]. From a mathematical point of view, the optical properties of evanescent modes (reflection and transmission amplitudes) can be obtained from ordinary modes by a well-defined analytical continuation procedure [35]. Solving the dispersion equation (4) in the evanescent sector, one finds two nondegenerate mode frequencies in only one polarization, at least for non-magnetic media. These modes are called "surface plasmons" (or "surface plasmon polaritons") [20,38,39]. Their field amplitude decays exponentially away from the interface, and is associated with oscillating surface charge and surface current densities, as required by the equation of con-tinuity (see Fig.3). On an isolated interface, surface plasmons correspond to the pole of r T M k [ω]; they occur when ε[ω] < −1 (i.e., ω < ω p / √ 2). For two interfaces, two surface plasmons exist and are coupled via their evanescent tails in the cavity. The resulting modes are given by the zeros of Eq.(4) for real κ and κ m , and will be analyzed in detail in the following. Summarizing, we can see that for the TE-polarization, all modes lie above the light cone, while for TM-polarization, two modes enter the evanescent region in at least some range of wavevectors. We refer to these modes as "plasmonic"; they are the retarded generalization of van Kampen's coupled surface plasmon modes. Finally, we can re-write the Casimir energy as These contributions have no physical meaning on their own, i.e. one cannot measure them separately. The only observable is the total Casimir energy, which is the sum of all terms. However, evaluating them separately reveals striking features which suggest new possibilities to taylor the strength and the sign of the Casimir force. In the rest of this paper, we are going to focus our attention on the plasmonic contribution E pl and shall discuss the remaining contributions to the Casimir energy in another paper. III. PLASMONIC MODES We plot again in Fig.4 the dispersion relation of the two modes in the first sum of Eq.(8). They end up for large |k| below the light cone, i.e., the associated field is evanescent both in vacuum and in the mirrors. One branch that we call ω − (k) lies entirely below the light cone. The second one, ω + (k) moves continuously into the cavity mode sector as k is decreased. The inset illustrates the smooth change in the spatial mode function. This mixed character justifies the name "plasmonic" that we use for both modes in the following. We discuss in Appendix A some general features of their dispersion relations that can be obtained explicitly despite the fact that we have to deal with implicit functions. A. Contribution to the Casimir energy The plasmonic contribution is defined as the first sum on the r.h.s. of eq.(8), namely Both modes tend to ω 0 (K) for L → ∞ so that we subtract the zero-point energy for two isolated surface plasmons. We are thus measuring the interaction energy arising from the coupling between the surface plasmons. Replacing the k-summation by an integral and using the scaled variables introduced in (A1), we get To check the convergence at large K, we use the parametrization of Eqs.(A3) and find the estimate provided K max(1, Ω p ). For further details, see the discussion around Eq. (18). The difficulty in Eq.(10) is that the dispersion relations Ω ± (K) are only known implicitly in the general case. 
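Since the plasmonic dispersion relations are only known implicitly, a direct way to obtain them is numerical root finding. The sketch below assumes that Eq.(4) has the usual Fabry-Perot form D_TM = 1 − r_TM(ω, k)² e^{−2κL} (the explicit equation is not reproduced above) and brackets its zeros in the evanescent sector on a frequency grid; the segment of ω_+(k) that lies inside the light cone is not captured by this evanescent-sector search. All names and numerical values are ours.

```python
import numpy as np
from scipy.optimize import brentq

c = 2.998e8

def plasmon_branches(k, L, omega_p, n_grid=4000):
    """Coupled surface-plasmon frequencies omega_-(k) < omega_+(k) for two thick
    plasma-model mirrors a distance L apart, searched below the light line omega < c k.
    Assumes the dispersion function D = 1 - r_TM(omega,k)^2 exp(-2 kappa L)."""
    def D(omega):
        eps = 1.0 - (omega_p / omega) ** 2
        kappa = np.sqrt(k**2 - (omega / c) ** 2)           # real in the evanescent sector
        kappa_m = np.sqrt(k**2 - eps * (omega / c) ** 2)   # always real for the plasma model
        r_tm = (eps * kappa - kappa_m) / (eps * kappa + kappa_m)
        return 1.0 - r_tm**2 * np.exp(-2.0 * kappa * L)

    grid = np.linspace(1e-6 * c * k, 0.999999 * c * k, n_grid)
    vals = np.array([D(w) for w in grid])
    roots = []
    for i in range(len(grid) - 1):
        if np.isfinite(vals[i]) and np.isfinite(vals[i + 1]) and vals[i] * vals[i + 1] < 0:
            roots.append(brentq(D, grid[i], grid[i + 1]))
    return sorted(roots)

omega_p = 1.37e16          # rad/s, corresponding to lambda_p ~ 137 nm
L = 100e-9
for k in (2e7, 5e7, 1e8):  # wavevectors on either side of omega_sp / c
    ratios = [w / (omega_p / np.sqrt(2)) for w in plasmon_branches(k, L, omega_p)]
    print(k, ratios)        # frequencies in units of omega_sp; only one root while omega_+ is propagating
```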
We now show that using the parametrization of Appendix A, the integrand can be brought into an explicit and elementary form. For a single body the associated electric field is evanescent and, for a plane interface, the plasmon can be excited only by approaching from the vacuum side a medium with higher index of refraction and illuminating the latter in total internal reflection. Approaching two surfaces, the two respective surface plasmons couple through their evanescent field tails. A frequency splitting occurs giving rise to two new modes, the plasmonic modes. The antisymmetric (ω+) and the symmetric (ω−) mode have higher resp. lower energy than the isolated (non-coupled) mode ω0. The Casimir force associated with ω+ is then an antibinding force (repulsive) while the ω− modes contribute an attractive force. The plasmonic Casimir force arises from the (distancedependent) balance of the two contributions. with c + = c − = 1, c 0 = −2. We call η pl the correction factor for the plasmonic Casimir energy; note that it depends on the distance only via the dimensionless parameter Ω p . For each of the branches Ω a (K), we now change to the integration variable z = (κL) 2 . The Jacobian (the prime denotes the derivative) dK 2 = 2KdK = dz + g a (z)dz (14) with g a (z) defined in Appendix A, leads to The integration paths are now Γ + = −z + . . . ∞, and Γ −,0 = 0 . . . ∞ where z + is defined in Eq.(A8). One of the two terms under the integral can be integrated immediately, leading to Putting the propagating sector of the mode Ω + (K) into a separate integral, the correction factor for the plasmonic Casimir energy can be rewritten as In the first integral, the functions g a (z) are real. For z → ∞, the functions g ± (z) approach g 0 (z) exponentially fast. An expansion in e − √ z leads to a c a g a (z) ≈ −Ω p e −2 where the function f (z/Ω 2 p ) is bounded and tends to 1/(4 √ 2) for z Ω 2 p . This secures the convergence at large z of the first integral in Eq. (17). The second integral is finite because g + (z) is bounded and the integration domain is finite [see Eq.(A9)]. Both the second integral and the third term in Eq.(17) are related to the propagating segment of the plasmonic mode Ω + (K). The great advantage of Eq.(17) compared to Eq.(10) is that now the integrands are expressed in terms of simple analytic functions and there is no need to integrate implicit functions whose evaluation is only possible numerically. We also gain for analytical calculations since the discussion of the distance dependence (via the parameter Ω p ∝ L/λ p ) can be done in a transparent way. We show in the following that one gets asymptotic expressions for small and large values of Ω p , the only variable on which the correction factor η pl depends after the integration. Fig.5 shows a plot of η pl as function of L/λ p = Ω p /(2π). Note the increase linear in L for small distances and a sign change at large L, with a power law ∝ L 1/2 . In the next two sections we analyze these limits analytically. At short distance η pl reproduces exactly the correction factor known for the total Casimir energy. B. Short distance asymptotics The distance enters the correction factor η pl [Eq. (17)] via the dimensionless parameter Ω p , and we get the short-distance asymptotics in the limit Ω p 1. This has been discussed in previous papers [28,32,34,40], but the asymptotics turns out to be tricky at next-to-leading order. 
The first order expansion in Ω p of the functions g a (z) yields [28] where the numerical constant α ≈ 1.790 arises from The separate contributions of the modes Ω + (K) and Ω − (K) are α + ≈ −12.225 (repulsive) and α − ≈ 14.015 (attractive). The plasmonic Casimir energy in this regime thus scales like A ω p /L 2 and is reduced compared to the perfect mirror case (η pl 1) [41]. The contribution of the propagating part of ω + is of the third order in Ω p [see Eq.(A9)] and can therefore be neglected. The same argument holds for the term (z + ) 3/2 ≈ Ω 3 p . In other words, the plasmonic contribution comes essentially from the evanescent sector (z > 0). We note that the result (19a) yields exactly the short-distance behavior of the full Casimir energy which is thus dominated at short distance by the interactions between surface plasmons [28,32,34,40]. Note that Eq.(19a) follows from an expansion of the g a (z) to first order in Ω p . It is worth stressing that this expansion scheme does not work at higher orders, the series being an asymptotic one and not uniformly convergent. Each integral obtained by this method at higher orders is divergent, except the first one given in Eq.(19a). To avoid this problem, we use an alternative method and write the functions g ± (z) as follows with Now for z > 0, |ρ| is bounded by unity and decays rapidly to zero as z → ∞ [28]. To compute the integral of [g ± (z)] 1/2 , we expand in powers of ρ and get the series is the gamma function. Taking the n = 0 term, the integration over z leads to (19a). Higher order terms can be calculated explicitly, but the resulting expressions are cumbersome and will not be reported here. Including the next-toleading order terms, we find where a ≈ 0.63 and b = ℵ/4 √ 2 ≈ 1.026. This is plotted as gray line(s) in Fig.5, the inset providing a zoom on the cubic and logarithmic terms (see caption). Note that the term aΩ 3 p gives a distance-independent correction to the Casimir energy and cancels when the force is computed. The presence of the logarithmic correction is due to the non-uniform convergence of the asymptotic series. We find from (24) that the Casimir force does not feature a logarithmic correction at short distance. C. Large distance asymptotics The curve η pl (L) in Fig.5 shows that the plasmonic mode contribution is negative (repulsive) at distances L 0.08 λ p . Mathematically, this can easily be seen from the large Ω p asymptotics of η pl . One can check that the integrand of the first integral in Eq.(17) is significantly different from zero only for z ∼ 1. This suggests the following expansion of the g a (z) for Ω p 1 Moreover, the expansion to leading order in Ω 2 p |z| can also be performed in the integral over the propagating sector in (17) because the integration domain is limited to −π 2 ≈ z + ≤ z ≤ 0. Finally, we find that the integrated term in Eq. (17) gives a negligible contribution so that to leading order, This expression can be evaluated numerically, giving as result Γ = 29.75 (i.e. the sum of 8.90 (+ mode, evanescent sector), −7.23 (− mode, evanescent sector), and 28.09 (+ mode, propagating sector). Note the large contribution of the propagating segment and the near cancellation of the two evanescent branches. Since η pl is negative at large distances, the plasmonic contribution provides a repulsive contribution to the Casimir interaction that scales like +A √ ω p c/L 5/2 . 
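This short-distance behaviour can be cross-checked with an elementary non-retarded mode sum. The sketch below assumes van Kampen's electrostatic coupled-plasmon frequencies ω_±(k) = ω_sp√(1 ± e^{−kL}) with ω_sp = ω_p/√2 (not written out explicitly above), integrates the zero-point shifts, and recovers the A ħω_p/L² scaling stated above together with a reduction factor, relative to the perfect-mirror energy of Eq.(1), that grows linearly with L/λ_p.

```python
import numpy as np
from scipy.integrate import quad

hbar = 1.054571817e-34   # J s
c = 2.998e8              # m / s

def nonretarded_plasmon_energy(L, lambda_p=137e-9):
    """Casimir energy per unit area (J/m^2) from coupled surface plasmons in the
    electrostatic (van Kampen) limit, assuming w_pm(k) = w_sp*sqrt(1 +/- exp(-k L))."""
    omega_sp = 2 * np.pi * c / lambda_p / np.sqrt(2)
    # E/A = (hbar/2) Int d^2k/(2pi)^2 [w+ + w- - 2 w_sp]; substitute u = k L
    integrand = lambda u: u * (np.sqrt(1 + np.exp(-u)) + np.sqrt(1 - np.exp(-u)) - 2.0)
    I, _ = quad(integrand, 0.0, 60.0)
    return hbar * omega_sp * I / (4 * np.pi * L**2)

# deep in the non-retarded regime (L << lambda_p) the energy scales as 1/L^2
for L in (5e-9, 10e-9, 20e-9):
    E = nonretarded_plasmon_energy(L)
    E_ideal = -np.pi**2 * hbar * c / (720 * L**3)       # Eq.(1), perfect mirrors
    print(f"L = {L*1e9:4.0f} nm   E*L^2 = {E*L**2: .3e} J   E/E_ideal = {E/E_ideal:.3f}")
```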
This is balanced in the total Casimir energy by the contributions of photonic modes (cavity and bulk modes), recovering the attractive large-distance power law E Cas ∝ −A c/L 3 . D. Cancellations and signs We conclude our analysis by suggesting an interpretation of the signs of the plasmonic contributions to the Casimir energy. It is clear that E pl is due to the shift in the plasmon mode frequency relative to the isolated interface [see Eq. (10)]. This can be also interpreted as a reshuffling of the density of modes due to the coupling by the interface, the total number of modes remaining constant. To make this more quantitative, we re-write the plasmonic Casimir energy as where the mode densities are defined as usual by (a = ±, 0) that depend on the distance L for a = ±. Note that the ω integral in (27) does not converge if taken over the ρ a (ω) alone. This is due to the flat large-k asymptote of the plasmonic dispersion relations. More explicitly, the density of modes can be calculated as where, k a (ω) is the inverse function to ω a (k), and the derivative is just the inverse group velocity at a given frequency ω. We find a behaviour ρ a (ω) ∝ (ω 2 sp − ω 2 ) −2 when ω approaches the asymptotic value ω sp ≡ ω p / √ 2 of the dispersion relation (the surface plasmon resonance in the quasistatic limit). This peak is exactly cancelled in the difference δρ ± (ω) ≡ ρ ± (ω) − ρ 0 (ω) that we plot in Fig.6 for a given distance L. The precise behaviour of the curves changes with the distance (at smaller L, for example, ρ + (ω) is nonzero for ω > ω sp ), but the following qualitative features are stable. (i) The mode ω + (k) shows a gap between 0 and ω + (0), and the difference δρ + (ω) is only due, for ω < ω + (0), to the subtracted isolated surface plasmon (dashed line). Just at this frequency, the mode density ρ + (ω) jumps to a positive value. This behaviour is due to the quadratic shape of the lower band edge in ω + (k). As ω → ω sp , δρ + (ω) > 0 because ω + (k) is shifted upwards relative to ω 0 (k) (the group velocity is smaller). This mode is hence an 'anti-binding one' [34]. (ii) The mode ω − (k) has a linear dispersion for small k, and the difference in mode density can be worked out as the positive quantity δρ − (ω) ∝ ω/Ω p ∝ ωL (dashed line). This mode is hence anti-binding in this region as well. (iii) Near the frequency ω sp , the mode ω + (k) [ω − (k)] gives a repulsive [attractive] contribution to the Casimir energy, respectively. Summing over both modes yields a repulsive or attractive result depending on L, because the relative weight of the binding and anti-binding regions changes. Coming back to frequency shifts, it is easy to see from the large K expansion of Eqs.(A3) that the following inequality holds At short distance, the plasmonic Casimir energy (which is actually the total Casimir energy) is thus attractive, as is well known (see also Eq.18). Let us finally note that as one moves away from the largek regime, retardation becomes increasingly important. The change in sign of the plasmonic Casimir energy can thus be seen as well as a consequence of the finite speed of light. E. Cutting the mode branch Recently, there has been some discussion on the way to split the field modes into photon-like and plasmon-like parts [40,46]. We comment in this section on the numbers one can obtain when the plasmonic mode ω + (k) is segmented in a different way. (The mode ω − (k) is subject to no controversy.) 
The main conclusion we draw from this discussion is that the large distance behavior is dominated by mode branches near the light line. In addition, the sign is sensitive to the chosen subtraction (renormalization), and it may happen that under this procedure, a pure evanescent branch ends up being counted among photonic modes. We also suggest that the branch of the plasmonic mode ω + (k) that enters the propagating sector is perhaps one of the best examples of Casimir repulsion due to a standing wave mode. Consider the corresponding pressure: it is repulsive due to photons bouncing on the mirrors. The attractive force for a perfect cavity arises, all things told, from the subtraction of a similarly repulsive pressure from a standing wave mode continuum (reflected from the mirrors' backfaces). Now, the counterpart for the plasmonic mode is a single-interface evanescent mode with zero pressure so that the repulsive force survives the subtraction. Bordag [40] is calling 'plasmon mode' only the evanescent branch of ω + (k) that exists for k ≡ |k| > k c ≡ ω p /(c 1 + Ω p /2) (see Fig.7, top). The segment within the light cone actually does not appear explicitly in Eq.(24) of Ref. [40], but is implicitly contained in the total Casimir energy (the photonic contribution is computed by subtracting the plasmonic one). The evanescent segment of ω + (k) is renormalized by subtracting the isolated surface plasmon, ω 0 (k), over the same range k c < k < ∞, as shown in Fig.7 (top). The range 0 < k < k c is left out (although it depends on L via k c ). This subtraction is sufficient to get a vanishing energy as L → ∞ because ω + (k) → ω 0 (k) exponentially fast for k > k c . (In addition, k c → 0.) The integration over the branches chosen in Ref. [40] corresponds to the following cor-FIG. 7: Illustration of different segmentation of the plasmonic modes and the chosen renormalization. Thick lines mark the segments that are taken into account in the different approaches. We write ωpr and ωev for those parts of the mode ω+(k) where the field between the mirrors is propagating or evanescent, respectively. Top: Bordag [40], the modes ω+,0(k) (red, blue) start at the wavevector kc where ω+(k) reaches the light cone. Middle: one possibility suggested by the comment of Lenac [46]. The mode ωev(k) is continued, for 0 ≤ k ≤ kc, by the light line ω = ck (red) and renormalized by the entire branch of ω0(k) (blue). A particular splitting of the Lifshitz formula into propagating and evanescent modes turns out to yield the same result. Bottom: another possibility compatible with Lenac's paper. Only the evanescent branch ωev(k) (k ≥ kc, red) is taken into account and renormalized by ω0(k) (k ≥ 0, blue). rection factor to the Casimir energy: with Here, K c = k c L and z c solves the equation K 2 c = g 0 (z c ) (at this parameter value, the dispersion relation ω 0 (k) reaches k = k c ). We have checked that at short distance, this correction is negligible compared to the leading order η pl ∝ ω p L. At large distance, however, the integrals in (31) are both of order Ω 1/2 p (see Section III C) and the difference Ω 3 0 (K c ) − K 3 c , too. Their contributions come with different signs, leading in the end to a correction factor that is attractive and scales like η B ≈ 1.6240 Ω 1/2 p at large distance. A similar analysis can be done for the mode definition sketched in Fig.7 (middle): the plasmonic mode is continued along the light line for k < k c and renormalized by the entire dispersion branch ω 0 (k). 
For L → ∞, as k c → 0, the renormalized energy vanishes. The corresponding correction factor is given by η L [Eq. (32)] which does not contain any integrated term. The short-distance behaviour is the same as in the present paper, and at large distance, we have η L (L) ≈ −1.6600Ω 1/2 p . This corresponds to repulsion as with our convention, but with a smaller numerical coefficient. We argue below that this result can also be obtained by a splitting of the Lifshitz formula for the Casimir energy. Let us mention that if the segment 0 ≤ k ≤ k c of the light line is not taken into account (Fig.7, bottom)) then the large distance behaviour shows an attractive term η ∝ Ω 3/2 p . Both results do not fit with the curves presented by Lenac [46], although our calculation tries to follow the spirit of his description. It is not clear to us from his sparse description which renormalization scheme was used in the end. The Lifshitz approach to the Casimir energy leads to the correction factor η L as follows. We write the right-hand side of Eq.(3) in the equivalent form where the dispersion function D µ [ω, k] is defined in (4). This expression has a structure very similar to the so-called "argument principle" where the zeros (and poles) of the argument of the logarithm define the eigenfrequencies of the system (of the reference system), respectively [5], and each mode contributes its zero point energy. In other words, the imaginary part of the logarithmic derivative can be read as a density of modes (suitably renormalized). We isolate the contribution of evanescent modes by restricting the ω-integration domain to 0 ≤ ω ≤ c|k| (so that κ = |k| 2 − ω 2 /c 2 is real as it should for evanescent waves). As discussed in Sec.II, zeros and poles of D µ [ω, k] occur for the plasma model only for TM-polarized evanescent waves. A simple calculation leads to where the functions g ± (z) defined in (A3b, A3c) appear. The factor involving r T E k shows no singularities for evanescent waves. With the change of variable k → z = (κL) 2 , we see that the two factors in the second line of (34) have simple zeros (poles) at the mode frequencies ω ± (ω 0 ), respectively [see Eq.(A3a)]. A calculation using the argument principle and the symmetry property (−ω) = (ω) for lossless response functions, then leads straightforwardly to Eq.(32). IV. CONCLUSION AND DISCUSSION In this paper we evaluate the contribution of plasmonic modes to the Casimir force using the plasma model to describe the optical response of the medium. Simple analytical expressions are found, in particular for the small and large distance asymptotics. We introduced a correction factor η pl (L) that gives the plasmonic contribution to the Casimir energy, E pl (L), in units of the Casimir energy E Cas (L) ∝ −1/L 3 [Eq. (1)]. It turns out that η pl (L) is small but positive at short distance, correctly reproducing van Kampen's result [27]. Quite surprisingly, the plasmonic contribution changes sign at the fairly short distance L/λ p ∼ 0.08. For larger cavity lengths, η pl (L) becomes negative and leads to the unusual scaling E pl ∝ +L −5/2 as L → ∞. This behaviour clearly shows that the plasmonic modes are much more important for the Casimir effect than usually anticipated. They do not only dominate in the short distances limit, but they also give a large repulsive contribution at large distances. 
We have also calculated (see also [40]) the photonic mode contribution, which turns out to be a monotonic function of the distance L (Fig.8); it actually approaches a constant as L → 0. Its large-distance behaviour contains at leading order a negative L −5/2 term that exactly cancels the plasmonic contribution. The Casimir energy is thus the balance of two contributions of equal magnitude which nearly cancel each other. It would be interesting to investigate whether a change in the field-mirror coupling could somehow influence this detailed balance and therefore the value or even the sign of the Casimir force. This could be the case for nanostructured surfaces, since the plasmonic modes are associated with the electron charge density oscillations at the vacuum/metal interface. This route has already been explored in a different context, that of metamaterials in the visible frequency range. It has been shown that arrays of metallic dots or rods [42,43,44] exhibit a strong magnetic response in the visible, including a band with negative magnetic permeability. This behaviour again arises from plasmon modes: here they are concentrated on the metallic particles, and their characteristics can be tuned with the particle shape. In the array, the plasmons delocalize and lead to a resonant electric and magnetic response. A significant modification of the Casimir force, even a change in sign, could be realistic with these materials [45]. The limits of validity of our results are set by the applicability of the plasma model. The approach is not intended to make quantitative predictions, because the response of intraband transitions in real metals would require a more complicated dielectric function. We are also restricted to fairly short distances, where the relevant mode frequencies are sufficiently large compared to the dissipation rate. From Fig.8, one can see, however, that the most striking effects due to plasmonic modes (the change in sign and the near cancellation of plasmonic and photonic contributions) indeed appear at short distances, L ≤ λ p , where the lossless plasma response is a suitable approximation. We therefore believe that, at least for this range of distances, our results are quite generally valid. In our description, the mode mainly responsible for a repulsive Casimir interaction is the plasmonic mode ω + . This mode crosses the border between the evanescent and the propagating sector, and we have considered it to be entirely part of the 'plasmonic' set of modes [33]. This is an 'adiabatic' definition that is strongly suggested by the continuous change in the mode function plotted in Fig.4. Other splittings into evanescent and photonic modes have been applied in the literature [40,46], and we have given a brief review in Section III E. The total Casimir energy is of course unaffected by this choice of splitting. However, if one considers a structured surface, the mode branch ω + will change as a whole, and by analyzing this change one could readily predict the corresponding modification of the Casimir energy.
Training diversity promotes absolute-value-guided choice Many decision-making studies have demonstrated that humans learn either expected values or relative preferences among choice options, yet little is known about what environmental conditions promote one strategy over the other. Here, we test the novel hypothesis that humans adapt the degree to which they form absolute values to the diversity of the learning environment. Since absolute values generalize better to new sets of options, we predicted that the more options a person learns about the more likely they would be to form absolute values. To test this, we designed a multi-day learning experiment comprising twenty learning sessions in which subjects chose among pairs of images each associated with a different probability of reward. We assessed the degree to which subjects formed absolute values and relative preferences by asking them to choose between images they learned about in separate sessions. We found that concurrently learning about more images within a session enhanced absolute-value, and suppressed relative-preference, learning. Conversely, cumulatively pitting each image against a larger number of other images across multiple sessions did not impact the form of learning. These results show that the way humans encode preferences is adapted to the diversity of experiences offered by the immediate learning context. Introduction A large body of decision-making research suggests that humans learn from experience the expected value of different available options (hitherto referred to as these options' absolute values; [1][2][3][4][5]). That is, as people try out different options and observe their reward outcomes, it is thought that they track the average reward associated with each option, and this allows them to choose options with higher expected value. The main benefit of this so-called value learning is that it makes it easy to choose from any set of options, including options that have not been previously considered in relation to one another, simply by comparing their absolute values. In this sense, absolute values act as a "common currency" that serves to generalize preferences across contexts that offer different combinations of options [6][7][8]. Evidence that value learning is implemented by the brain emerged from early foundational work on primates indicating that brainstem dopaminergic neurons instantiate prediction errors-differences between actual and expected reward-that are well suited for algorithmic implementations of value learning [9]. Since then, many human brain imaging studies have shown that activation in the orbitofrontal cortex (OFC) and other regions is correlated with expected value during reward learning and other types of economic decision-making tasks [5,10,11]. New evidence, however, has cast into question whether humans indeed learn absolute expected values or may be instead learning relative preferences among limited sets of options. Two recent studies showed that people's choices reflect relative preferences because when they are rewarded for choosing one out of two options, they do not only form a preference in favor of the option they chose, but also a preference against the option they did not choose [12,13]. Neural data reveal a similar picture. 
Neural firings in areas considered to encode value such as the OFC and the striatum have been found to encode normalized values that, in fact, have no absolute meaning and can only be interpreted as relative preferences compared to other options sampled in the same context [14,15]. Such relative preference encoding is evident even when each individual option is encountered separately [16,17]. These studies among others [18][19][20][21][22][23][24][25], have led researchers to propose alternative models of learning, according to which humans learn preferences between options without encoding the absolute value of each option [7,8,[26][27][28][29]. Here, we test a novel hypothesis that humans flexibly adapt the degree to which they form absolute expected values and relative preferences based on the opportunities and incentives afforded by the environment. It is well established, across a wide range of machine learning applications, that learning environments that provide a more diverse set of learning exemplars aid generalization of learned information to new input patterns and unfamiliar contexts [30,31]. In the case of learning to maximize reward, the set of exemplars corresponds with the set of possible options, and learning about a broader set of options could make it more clearly evident that the value of an option does not depend on the other options it is pitted againstthat is, that each option has an absolute value. Additionally, the broader the set of options, the greater is the space of possible choice sets (i.e., a choice set is a set of simultaneously available options from which one chooses), some of which have yet to be encountered. The prospect of having to choose among novel sets of previously encountered options makes it worthwhile to form preferences that can be used to choose among such sets, which is precisely what absolute values are best suited for. By contrast, relative preferences produce suboptimal choices among unfamiliar choice sets, since they only encode how valuable options were relative to the other options they were previously pitted against [12,27]. These considerations suggest at least two types of training diversity may support and incentivize value learning. The first type of diversity relates to how many options a person learns about concurrently within a given learning session (henceforth, concurrent diversity), whereas the second type of diversity is the number of alternative options a given option is cumulatively pitted against, across all learning sessions (henceforth, cumulative diversity). These two types of diversity are dissociable since an option can be learned concurrently with fewer other options, yet across multiple learning sessions it may be cumulatively pitted against more other options. To test the impact of concurrent and cumulative diversity on the formation of absolute values, we designed a novel multi-day reward learning experiment comprising twenty learning sessions. In each session, subjects' goal was to maximize their reward by choosing among pairs of images, each of which was associated with a fixed probability of reward. The probabilities were not told to subjects and could only be learned through trial and error, by observing the reward outcomes of chosen images. To manipulate concurrent diversity, we varied how many images subjects learned about concurrently in each learning session. To manipulate cumulative diversity, we varied the total number of other images each image was pitted against over two separate learning sessions. 
Critically, the multi-session design allowed us to assess the formation of absolute values by asking subjects to choose between images that were never directly paired together during learning. To enhance the distinction between absolute values and relative preferences, we had images with the same reward probability learned against other images with either mostly lower or mostly higher reward probabilities. An absolute value learner would have no preference among these images, whereas a relative preference learner would prefer the option that ranked higher in its original learning context. Results 27 subjects (ages 20 to 30; Mean = 24 ±.5) completed two learning sessions a day of a reward learning task over a period of ten days (Fig 1). In low-concurrent-diversity sessions, subjects learned about three images at a time, whereas in high-concurrent-diversity sessions, subjects learned about six images at a time. Every image appeared in two learning sessions, but lowcumulative-diversity images were pitted against the same images in both sessions whereas high-cumulative-diversity images were pitted against different images. Subjects formed preferences in favor of more rewarding images To validate our task, we first determined whether subjects successfully learned to choose images associated with higher reward probabilities. Subjects indeed tended to choose the more rewarding image out of each pair, doing so with 86% (SEM ±1%) accuracy during learning (Fig 2A; chance performance = 50%). As can be expected, subjects' performance was lower in the conditions that required learning about more images (i.e., high concurrent diversity; Fig 2B, left panel) or about more pairs of images (i.e., high cumulative diversity; Fig 2B, right panel). However, subjects performed considerably above chance in all conditions, and showed gradual improvement with each new image as they tried out choosing it and observed its outcomes ( Fig 2C). These results confirm that the task was effective in getting subjects to form preferences among images based on how often each image was rewarded. Concurrent diversity increased generalization A hallmark of value learning is the ability to generalize learned preferences to novel settings. To test generalization, we had subjects choose between images that had not been previously pitted against each other ('novel pair' testing trials). We compared subjects' accuracy on these trials to accuracy in choosing between images that subjects had encountered during learning ('learned pair' testing trials). Novel and learned pairs involved the same images and presented subjects with similarly difficult choices, in the sense that the time elapsed since learning was roughly the same (novel pairs = 1.88 days ±.08, learned pair = 1.75 ±.08 following learning), as was the average difference in reward rate between the two images that made up a pair (reward rate difference: Δ novel−pair = 48.4% ±2%, Δ learned−pair = 49.1%±3%). However, only novel pairs presented subjects with choices between images learned in different games with different sets of other images. This presents no challenge to an absolute value learner, but a relative preference learner might end up choosing an image with a lower expected value simply because it was learned against worse images (and thus acquired a higher relative value). 
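The divergence between the two learner types can be made concrete with a toy simulation. In the sketch below, the choice policy, trial counts and scoring rules are our own simplifications rather than the task code: an image with reward probability 2/3 is learned against mostly worse or mostly better alternatives, and its own reward rate (an absolute quantity) is compared with a relative-preference score that also credits trials on which the alternative went unrewarded. Only the relative score inherits a bias toward the image that ranked higher in its learning context.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_context(p_target, p_others, n_trials=64, explore=16):
    """Learn about one 'target' image pitted against alternatives drawn from p_others.

    Returns (value, preference):
      value      - the target's own empirical reward rate (an absolute quantity)
      preference - a relative score that also credits trials on which the chosen
                   alternative went unrewarded (O = 1 if the target was chosen and
                   rewarded, or the alternative was chosen and not rewarded)
    """
    rewards, n_chosen, wins = 0, 0, 0
    for t in range(n_trials):
        p_alt = rng.choice(p_others)
        est = rewards / n_chosen if n_chosen else 0.5
        # deliberately crude policy: random exploration first, then pick the better option
        take_target = (rng.random() < 0.5) if t < explore else (est >= p_alt)
        if take_target:
            r = rng.random() < p_target
            rewards += r
            n_chosen += 1
            wins += r
        else:
            wins += rng.random() >= p_alt   # alternative unrewarded counts in favour of the target
    return rewards / max(n_chosen, 1), wins / n_trials

def average_learner(p_target, p_others, n_reps=500):
    runs = np.array([simulate_context(p_target, p_others) for _ in range(n_reps)])
    return runs.mean(axis=0)

# two images with the same reward probability (2/3) learned in different contexts
vA, wA = average_learner(2/3, [0.0, 1/3])   # ranked high among its alternatives
vB, wB = average_learner(2/3, [1.0, 1.0])   # ranked low among its alternatives
print(f"absolute values:      {vA:.2f} vs {vB:.2f}   (roughly equal)")
print(f"relative preferences: {wA:.2f} vs {wB:.2f}   (biased toward the higher-ranked image)")
```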
For this reason, a pure absolute value learner can be expected to perform equally well in choosing between novel and learned pairs, whereas a relative preference learner should perform worse in choosing between novel pairs. We found that subjects successfully chose the image with the higher reward probability in 83% of novel-pair trials (SEM ±2%; chance performance = 50%). This level of accuracy, however, was significantly lower than the accuracy subjects demonstrated on learned-pair trials (Mean = 87% SEM ±3%; bootstrap p = .03). Subjects thus generalized their preferences well, but did not do so perfectly. We therefore asked whether success in generalization was affected by the diversity of learning experiences. To quantify generalization, we computed the drop-off in accuracy from learned-pair to novel-pair testing trials. Since we found no interaction between the effects of concurrent and cumulative diversity (p = .86 bootstrap test), we separately examined each while marginalizing over the other. Strikingly, we found that accuracy did not significantly differ between novel-pair and learned-pair trials for images learned in conditions of high concurrent diversity (Fig 3; Mean = -1% ±2%; p = .68, bootstrap test). Conversely, for lowconcurrent-diversity images, subjects performed substantially worse in choosing between On each trial, subjects were asked to choose one of two circular images. Following their choice, subjects either received or did not receive a reward of 1 coin based on a fixed reward probability associated with the chosen image (0, 1/3, 2/3, or 1). Each game consisted of 48 such 'learning' trials, interleaved with 24 'testing' trials wherein subjects chose between images about which they learned in prior sessions. Outcomes were not revealed in testing trials to prevent further learning. Every image first appeared in 64 learning trials over two sessions before subjects were tested on it. ITI: inter-trial-interval. (B) Daily schedule. Each day, subjects performed two experimental sessions on a specially designed mobile phone app [32] one in the morning (on average, at 8:56 am, and no earlier than 6:00 am) and one in the evening (on average, at 6:12 pm, and no earlier than 4:00 pm). In each session, subjects played two games in which they learned about a total of six images. (C) Experimental conditions. Four experimental conditions were implemented along 10 days of learning, each lasting two to three days. Each condition is illustrated via a representative selection of six pairs of images subjects chose between in two different sessions. Within the low concurrent diversity condition (left two columns), three images were learned over the span of each game, whereas in the high concurrent diversity (right two columns) six images were learned over the span of two games. Thus, in both conditions, each image appeared in 32 learning trials per session. In the low-cumulative-diversity conditions, each image was pitted against the same two images in two consecutive learning sessions, whereas in the high-cumulative-diversity conditions, images were pitted against different images (sometimes in two nonconsecutive sessions; see Methods for details). Three days of training, as opposed to two, were required for high-cumulativediversity conditions so that each image could be pitted against different images in its two learning sessions. Conditions were randomly ordered, and equal in terms of average reward probability and number of images learned per session. 
Testing trials involved choosing between two images the subject already chose between during learning (Learned-pair trials, 25% of testing trials) or novel pairings of images learned separately (Novel-pair trials, 75% of testing trials). On average, pairs were tested 27 ±.8 times. https://doi.org/10.1371/journal.pcbi.1010664.g001 . The plot shows total (vertical lines) and interquartile (boxes) ranges and medians (horizontal lines). Also shown are mean accuracies predicted by a computational model that was fitted to subjects' choices (circles; see details under Computational Formalization below). B) Effect of training diversity on accuracy during learning. Accuracy was higher in sessions with low (91% SEM ±1%) compared to high (87% SEM ±1%) concurrent diversity (p corrected = .048, bootstrap test), and trended higher in sessions with low (90% SEM ±1%) compared to high (88% SEM ±1%) cumulative diversity (p corrected = .078, novel pairs (Mean = -7% ±2%; p corrected = .004 bootstrap test). This difference between low and high concurrent diversity was neither due to a difference in learned-pair trials nor in novelpair trials (S1 Table), but specifically reflected the drop-off in accuracy between them (p corrected = .015 bootstrap test). In contrast, cumulative diversity did not impact the performance drop-off from learnedpair to novel-pair trials (-3% ±2% vs -4.8% ±2% p corrected = .29 bootstrap test). This was despite the fact that pairs of images were encountered during learning half as many times in bootstrap test). The plot shows individual subject accuracy (circles), group distributions of accuracy levels (violin), group means (thick lines) and standard errors (gray shading). C) Learning curves for each experimental condition. Accuracy in trials involving a given image as a function of how many trials the image previously appeared in. A dropoff in accuracy can be observed for high-cumulative-diversity images (dark) at the beginning of the second session, because these images were then pitted against new images. The plot shows group means (circles), standard errors (vertical lines), local polynomial regression lines ( [33]; curves) and confidence intervals (shading). Given that performance was always above chance (50%), y-axes in panels B and C focus on this range. https://doi.org/10.1371/journal.pcbi.1010664.g002 conditions of high cumulative diversity. For this reason, we expected that learned-pair performance would be compromised by cumulative diversity, and this was indeed the case (as evident by comparing accuracy on learned-pair trials to the level of accuracy achieved at the last 5 trials of learning; Mean low = -2.3% SEM ±1%, Mean high = -4.2% ±1%, p = .03 bootstrap test). However, accuracy on novel-pair trials was similarly compromised by cumulative diversity (Mean low = -5.6% ±2%, Mean high = -7.9% ±2%, p = .026 bootstrap test), and thus no benefit to generalization was observed. These results show that increasing the number of options about which a person concurrently learns improves their ability to generalize their learned preferences to novel choice sets. Concurrent diversity reduces influence of other options' outcomes The observed improved generalization suggested that concurrent diversity enhanced absolute value learning. To further investigate this possibility, we examined another key consequence of absolute value learning, namely, that the preferences it forms depend only on the available images' prior outcomes. 
By contrast, relative preferences also account for the outcomes of other images against which the presently available images were pitted during learning. These latter outcomes determined how each of the available images ranked compared to other images during learning. Due to accounting for these outcomes, a relative preferences learner should show no preference between images with similar rankings during learning even if their absolute values differ, but favor similarly rewarded images that ranked higher in their original learning context (i.e. a rank-bias). In examining choices between similarly ranked images with different absolute values, we found that concurrent diversity improved subjects' accuracy (S1 Fig; Mean_high = .83±.02; Mean_low = .78±.02; p corrected = .035 bootstrap test), as consistent with a shift towards value learning. By contrast, cumulative diversity impaired performance on such trials (Mean_low = .82±.02 Mean_high = .77±.02 p corrected = .02 bootstrap test), as consistent with its general detrimental effect on overall performance. Visualizing choices between similarly rewarded images (i.e. less than 10% difference in reward rate) showed that subjects preferred images that ranked higher during learning (i.e., that were pitted against images with lower reward probabilities) under low concurrent diversity (Mean low = 63% SEM ±6%, with 50% representing no preference between images) but not under high concurrent diversity (Mean high = 49% SEM ±5%; Fig 4A). This difference between conditions trended towards significance (p = .06 permutation test) and was not evident as a function of cumulative diversity (Mean low = 57% ±7%, Mean high = 48% ±8%). The above measure of rank bias, however, is limited in both sensitivity and validity. First, it ignores the random differences that inevitably exist in the actual outcomes of similarly ranked images. Second, it does not utilize the information that exists in subjects' choices between differently ranked images. Third, it is confounded by the fact that higher ranking images were chosen more during learning, since they were pitted against less rewarding images. This latter confound is important because it means that more outcomes were observed for higher-ranking images, which may have allowed subjects to develop greater confidence regarding their value. To address these challenges, we used a Bayesian logistic mixed model that predicted subjects' choices based on differences between the currently available images' own reward history (β Own ; i.e., proportion rewarded), the number of times each image was chosen (β Times Chosen i.e., sampling bias), and the reward history from trials in which the currently available images were rejected in favor of other images ( Fig 4B). The latter was separated into two separate regressors, for prior outcomes of rejecting a presently available image in favor of the other presently available image (β Current alternative ) and of rejecting it in favor of any other image PLOS COMPUTATIONAL BIOLOGY Training diversity promotes absolute-value-guided choice (β Other ). Importantly, only the latter regressor unequivocally captures the effect of other images' outcomes that are irrelevant for maximizing absolute expected value. To determine whether the effects of different types of outcomes were modulated by training diversity, we included regressors for concurrent and cumulative diversity and interactions between both types of diversity and each type of reward history. 
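A simplified, non-hierarchical version of this regression can be written as follows. The column names, the coding of each regressor as a left-minus-right difference, and the use of plain maximum-likelihood logistic regression in statsmodels are our own assumptions; the analysis reported here was a Bayesian logistic mixed model with subject-level effects, and the placeholder data below are random numbers included only to make the sketch runnable.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

np.random.seed(0)
n = 4000

# Hypothetical per-trial table: each regressor is the left-minus-right difference of the
# corresponding image-level quantity described in the text; choice = 1 if the left image was chosen.
df = pd.DataFrame({
    "choice":       np.random.binomial(1, 0.5, n),
    "own":          np.random.normal(size=n),            # Δ proportion rewarded when chosen
    "times_chosen": np.random.normal(size=n),            # Δ number of times chosen (sampling bias)
    "current_alt":  np.random.normal(size=n),            # Δ reward history of the current alternative
    "other":        np.random.normal(size=n),            # Δ reward history of other, absent alternatives
    "concurrent":   np.random.choice([-0.5, 0.5], n),    # low / high concurrent diversity
    "cumulative":   np.random.choice([-0.5, 0.5], n),    # low / high cumulative diversity
})

# Main effects of each reward-history regressor plus their interactions with both diversity
# manipulations; the paper's model additionally includes subject-level random effects.
model = smf.logit(
    "choice ~ own + times_chosen + current_alt + other"
    " + concurrent + cumulative"
    " + (own + times_chosen + current_alt + other):(concurrent + cumulative)",
    data=df,
).fit(disp=False)
print(model.params)
```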
The results confirmed that in addition to the strong impact of an image's own reward history (β Own = 1.97 [1.63, 2.31]), preference for an image was inversely influenced by the outcomes of both the current alternative (β Current alternative = -.43 [-.64, -.21]) and of the other images it had previously been pitted against (β Other = -.53 [-.75, -.32]). Thus, the more subjects were rewarded when not choosing an image, the less likely they were to prefer it on subsequent trials. Most importantly, the influence of other images' reward history was reduced by high concurrent diversity (β Other×Concurrent = .39 CI = [.30, .49]). No additional interactions were found (S2 Table), except for an interaction of cumulative diversity with the impact of an image's own outcomes, as consistent with cumulative diversity's general detrimental effect on performance. Thus, concurrent, but not cumulative, diversity reduced the influence of other options' rewards that are irrelevant for inferring absolute value. Finally, to inquire whether the effect of cumulative diversity was indeed specific to trials that exclusively probed absolute value, we examined subjects' accuracy in novel-pair testing trials wherein one of the images had both higher absolute and higher relative value than the other image. Choosing correctly on such trials does not require absolute values. As expected, we found no significant effect of concurrent diversity in these trials (p corrected = .22). Here, too, we found a trend for subjects to perform worse on images learned in high, as compared to low, cumulative diversity (p corrected = .052 bootstrap test). Taken together, the results indicate that concurrent diversity specifically improved accuracy on trials that required absolute values to choose correctly. Computational formalization of value and preference learning Our results evidenced signs of both absolute value and relative preference learning. On one hand, subjects successfully learned reward maximizing choices and generalized well to novel choice sets, as consistent with absolute value learning. On the other hand, subjects performed still better at choosing among familiar choice sets, and they preferred images that were relatively more valuable in their original learning context, as consistent with relative preference learning. Critically, learning about more images concurrently diminished or even eliminated the signs of relative preference learning. We next tested whether this set of results can be coherently explained as reflecting the operation of two learning processes-absolute value and relative preference learning-the balance between which changes as a function of concurrent diversity. To do this, we fitted subjects' choices during both learning and testing with a computational model that combines value and preference learning (as proposed by [26]). We then examined the best-fitting values of the model's parameters to determine the degree to which value and preference learning were each employed in each experimental condition. To formalize absolute value learning, the model represents subject beliefs about the absolute values of images as beta distributions, defined by two parameters a i and b i . This beta distribution represents the reward probability that is believed to be associated with each image given the outcomes obtained for choosing it. Thus, a i and b i accumulate the number of times that choice of image i was rewarded and not rewarded: where O = 1 if the choice was rewarded and O = 0 if it was not. 
Here, γ serves as a leak parameter allowing for the possibility that more recent outcomes have a greater impact on subjects' beliefs (γ = 1 entails that outcomes are equally integrated, whereas γ<1 entails overweighting of recent outcomes). The decision variable provided by this form of learning is the absolute value (V) of image i, which is estimated as the expected value of the image's beta distribution: To isolate the key computation distinguishing relative preference learning from value learning, we use the same learning rules defined above to generate a relative preference W(i,j), for image i over image i, that accounts for all outcomes observed for choosing between the two images. When applying Eqs 1 and 2 for preference learning, O = 1 if image i was chosen and rewarded or if image j was chosen and not rewarded, and O = 0 if image j was chosen and rewarded or if image i was chosen and not rewarded. To enable preference learning to exhibit preferences among previously unencountered pairs of images, a general relative preference W(i) was computed for each image i by accumulating in the same fashion the outcomes of choosing image i or the other image, across all learning trials involving image i. Accounting for the outcomes of the other images distinguishes the relative preference W(i) from the absolute value V(i). When facing a choice between image i and image j, the probability that the model will choose either image is computed based on a weighted sum of the images' absolute value and relative preference: where W 0 (i) is itself a weighted sum of W(i) and W(i,j), with a free parameter controlling their relative weights. Importantly, β value and β preference are distinct inverse temperature parameters, the respective magnitudes of which determine the degree to which absolute values and relative preferences influence choice. Thus, the values of these parameters that best fit subjects' choices can be used to quantify the degree to which value and preference learning manifested in each experimental condition [34,35]. To this end, we allowed the two inverse temperature parameters to vary as a function of concurrent and cumulative diversity, for either learning or testing trials. Subjects combine value and preference learning To determine whether a combination of value and preference learning was needed to explain subjects' choices, we compared the full model to two sub-models, one that only learns absolute values (β preference = 0) and one that only learns relative preferences (β value = 0), as well as to a number of additional alternative learning models (see Methods). We found that the full model accounted for subjects' choices across both learning and testing trials significantly better than the alternative models ( Fig 5 and S5 Table). Moreover, only the full model was able to recreate in simulation all the behavioral findings, including generalization performance, effect on choice of outcomes for other images, and rank bias (see Concurrent diversity enhances value learning Examining the values of the parameters that best-fitted subjects' choices across all trials showed that value learning generally predominated over relative preference learning (β value = 4 ±.28 vs β preference = 1.88 ±.22), as befitting a task that involves frequent choices between options from different learning contexts. 
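The learning and choice rules described above can be gathered into a compact reference implementation. Because the displayed equations are incomplete in this extraction, the leaky-count update (a ← γa + O, b ← γb + 1 − O), the Beta(1,1) prior, and the logistic choice rule over β_value·V + β_preference·W′ used below are our reconstruction of Eqs 1-3 rather than the authors' code; the default parameter values are set near the reported group means.

```python
import numpy as np
from collections import defaultdict

class HybridLearner:
    """Sketch of the combined absolute-value / relative-preference learner."""

    def __init__(self, beta_value=4.0, beta_pref=1.9, gamma=1.0, w_pair=0.5):
        self.bv, self.bp, self.gamma, self.w_pair = beta_value, beta_pref, gamma, w_pair
        self.ab = defaultdict(lambda: [1.0, 1.0])     # value counts (a_i, b_i), Beta(1,1) prior
        self.pair = defaultdict(lambda: [1.0, 1.0])   # pairwise preference counts for (i, j)
        self.gen = defaultdict(lambda: [1.0, 1.0])    # general preference counts for i

    @staticmethod
    def _update(counts, outcome, gamma):
        counts[0] = gamma * counts[0] + outcome            # a <- gamma*a + O
        counts[1] = gamma * counts[1] + (1 - outcome)      # b <- gamma*b + (1 - O)

    def _mean(self, counts):
        return counts[0] / (counts[0] + counts[1])         # expected value of Beta(a, b)

    def decision_value(self, i, j):
        w_prime = (self.w_pair * self._mean(self.pair[(i, j)])
                   + (1 - self.w_pair) * self._mean(self.gen[i]))
        return self.bv * self._mean(self.ab[i]) + self.bp * w_prime

    def choose(self, i, j, rng):
        d = self.decision_value(i, j) - self.decision_value(j, i)
        return i if rng.random() < 1.0 / (1.0 + np.exp(-d)) else j

    def learn(self, i, j, chosen, rewarded):
        # value learning: only the chosen image's own outcome is counted
        self._update(self.ab[chosen], int(rewarded), self.gamma)
        # preference learning: O favours i if (i chosen and rewarded) or (j chosen and unrewarded)
        o_i = int((chosen == i) == bool(rewarded))
        self._update(self.pair[(i, j)], o_i, self.gamma)
        self._update(self.pair[(j, i)], 1 - o_i, self.gamma)
        self._update(self.gen[i], o_i, self.gamma)
        self._update(self.gen[j], 1 - o_i, self.gamma)

# minimal usage example on one hypothetical pair of images
rng = np.random.default_rng(1)
agent = HybridLearner()
true_p = {"A": 2/3, "B": 1/3}
for _ in range(64):
    pick = agent.choose("A", "B", rng)
    agent.learn("A", "B", pick, rng.random() < true_p[pick])
print({img: round(agent._mean(agent.ab[img]), 2) for img in "AB"})   # learned absolute values
```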
Importantly, however, preference learning manifested to a greater extent in conditions of low concurrent diversity (low concurrent: β preference = 2.6 ±.29; high concurrent: β preference = 1.44 ±.21; p<.001 permutation test), whereas value learning manifested to a greater extent in conditions of high concurrent diversity (low concurrent: β value = 3.84±.31; high concurrent: β value = 4.33±.2 p<.001 permutation test). By contrast to concurrent diversity, cumulative diversity inhibited value learning (low cumulative: = 4.7 ±.5; high cumulative: β value = 3.5 ±.4; p < .001 permutation test) and had no significant impact on preference learning (low cumulative: β preference = 1.89 ±.13; high cumulative: β preference = 1.92 ±.1; p = .32 permutation test). These results indicate that concurrently learning about a broader set of options enhances the use of absolute values for making choices. Testing alternative interpretations A necessary consequence of varying concurrent diversity is an extension of the duration of learning, since twice as many trials are required to learn about twice as many images. This raises the possibility that it was simply the duration of learning, and not the diversity of learning exemplars, that shifted learning from relative preferences to absolute values. To test this interpretation, we implemented a variant of our model where a shift towards absolute-value learning progresses gradually during learning irrespective of concurrent diversity. We compared this model to a matched implementation of a concurrent diversity effect, that is, where a gradually developing shift progresses towards value learning under high concurrent diversity but towards preference learning under low concurrent diversity. Model comparison favored the latter model over all other models (ΔBIC +741). This result indicates that shifts between preference and value learning indeed developed gradually. Most importantly, though, this result confirms again that the direction of the shift depended on concurrent diversity. A second necessary consequence of concurrently learning about more images is that consecutive presentations of an image will be separated by more intervening trials. Larger separation might itself affect the predominant form of learning, either because values and preferences decay during the intervening trials at different rates, or because a larger separation between outcomes affects the degree to which subjects overweight recent outcomes in forming values and preferences. To test the first possibility, we modified our model so as to allow values and preferences to decay during the intervening trials (via the leak parameters γ value , γ preference ). This model fitted the data worse (ΔBIC +220), ruling out decay during intervening trials. To test the second possibility, we modified our model so that the overweighting of recent outcomes (also controlled by the leak parameters) could vary as a function of diversity conditions. This model too fitted the data substantially worse (ΔBIC + 350). Finally, we examined an alternative hypothesis that participants make use of the transitivity of relative preference, thereby inferring a global rank of items without learning the expected value of each item. However, even among pairs of images with no transitive relation between them, subjects were significantly above chance in selecting the higher value image (Mean = .82±.1). 
Moreover, the effects of concurrent diversity on generalization were significant within this subset of trials as well (p corrected = .003 bootstrap test). Thus, neither the duration of learning, nor the presumed effects of interleaving trials, nor transitive inferences offer a successful alternative explanation for the enhancement of absolute value learning by high concurrent diversity of learning exemplars. Discussion We found that increasing the number of options a person concurrently learns about shapes reward learning in several ways. It first reduces performance during learning, but then leads to more successful generalization, removes a bias in favor of options that ranked higher during learning, and generally decreases the degree to which preference for an option is influenced by presently irrelevant options' outcomes. Computational modeling shows that all of these effects are coherently explained by a shift away from relative preference and towards absolute value learning. These findings offer a meaningful extension of previous demonstrations of absolute value [1,2,5] and relative preference [12,13,27] learning in humans, by identifying key conditions under which the former is diminished in favor of the latter, namely, conditions of high concurrent training diversity. The enhancement of absolute values and inhibition of relative preferences that we observed can best be understood in light of past suggestions that encoding context-specific information aids performance as long as the agent remains within the learning context, but is ill suited for generalizing policies to other learning contexts [37,38]. Relative preference learning is inherently specific to the learning context and impairs generalization to novel choice sets. Our findings show that such context-specific learning is promoted by a learning experience that limits the possibility of encountering novel choice sets, specifically, by reducing the number of options. In this sense, the shift between preference and value learning in our experiment can be thought of as a rational adaptation. This perspective is supported by a recent finding that value learning is enhanced by expectations of having to choose between options from different learning contexts [39]. Here, though, we demonstrate that absolute value learning can be enhanced even absent a direct manipulation of the need to choose between options from different learning contexts. Increasing the number of options is sufficient for this purpose. Conversely, with a low number of options, relative-preference learning remains clearly evident despite subjects being aware of the need to choose across contexts. Our findings agree with prior work showing that emphasizing comparisons between a limited number of specific images, for instance by repeatedly presenting subjects with a choice between the same two options and providing reward information about the foregone option, promotes learning of relative values [27]. However, the process by which the formation of relative values in the latter experiments has so far been explained-namely, normalization to the range of outcomes experienced during learning-cannot explain relative preference learning in our experiment. This is because the range of outcomes in our experiments was the same in all learning sessions. By contrast, the model we proposed here for relative preference learning may coherently account for both our findings and the findings that had previously been attributed to normalization. 
Though both concurrent and cumulative diversity increased task difficulty, as evident by poorer performance during learning, cumulative diversity did not have the effect of improving generalization. This result has two key implications. First, it contradicts previous suggestions that it is task difficulty per-se that promotes absolute value learning [27]. Second, it suggests that the formation of absolute values is not promoted by the global diversity of learning exemplars encountered during the entire course of learning, but rather, by the local diversity that characterizes the immediate learning context. We successfully ruled out several alternative explanations for the finding that concurrent diversity promotes absolute value learning, including some possible effects of increased duration of learning and greater separation between consecutive choices of an image, both of which are direct consequences of concurrently learning about more images. Another important consequence of such learning, which we have not addressed here, is increased working memory load [40]. Future experiments could disentangle the effects of number of images and working memory load by introducing unrelated tasks during learning, so as to increase working memory load without changing the number of images about which subjects concurrently learn. Another open question remains as to a full functional description of the relationship between concurrent diversity and absolute value learning. Our model, which was tested on three (low diversity) or six (high diversity) concurrently learned images does not allow us to extrapolate to learning with other set sizes. Clarifying the full functional relationship between diversity and value learning can be aided by extending the current experimental approach to testing additional levels of diversity, as well as by further developing a mechanistic understanding of how diversity promotes value learning. Several studies have investigated the neural basis of value [5,9,10,11] and preference [12,14,15] learning in isolation, and the potential instantiation of relative preferences via sampling from memory during choice [41][42][43][44][45]. However, it is yet unknown how the brain arbitrates between preference and value learning. One relevant line of work comprises studies on how concurrent diversity influences the brain regions recruited for learning [40]. Though this work has not examined absolute values and relative preferences, it has shown that increasing the number of items people concurrently learn about strengthens activation in a striatal-frontoparietal network implicated in value learning. Future studies could investigate the involvement of this network and other regions in arbitrating between value and preference learning as environmental conditions change. Conclusion Our findings contribute to the ongoing debate concerning the extent to which people learn absolute values versus relative preferences. We show that absolute-value learning depends on a characteristic of the immediate learning context, namely, the diversity of learning experiences it offers. We find that increased diversity, despite impairing performance in the short term, has the effect of enhancing learning of absolute values which generalize well to novel contexts. Such generalization is essential for making decisions in real life where our experiences are inevitably fragmented across many different contexts. 
Contact for resource sharing Further information and requests for resources or raw data should be directed to and will be fulfilled by the Lead Contact, Levi Solomyak (levi.solomyak@mail.huji.ac.il). Ethics statement The experimental protocol was approved by the Hebrew University local research ethics committee, and written informed consent was obtained from all subjects. Subjects 27 human subjects (14 male,13 female), aged 20 to 30 (Mean = 24 SEM ±.5), completed the experiment which consisted of 3556 trials [46]. Given the size of the dataset obtained for each subject (an order of magnitude greater than in typical learning experiments) and the effect size found in similar prior literature [27], we expected that a meaningful finding would manifest as at least a large effect (Cohen's D = .8; [47]). We thus selected a sample size that would provide at least 80% power of detecting such an effect (i.e., n> = 26). The experiment was discontinued midway for 3 additional subjects due to failure to complete learning sessions or evidence of random choosing. Subjects were recruited from a subject pool at Hebrew University of Jerusalem as well as from the Jerusalem area. Before being accepted to the study, each subject was queried regarding each of the study's inclusion or exclusion criteria. Inclusion criteria included fluent Hebrew or English and possession of an Android smartphone that could connect to wearable sensors via Bluetooth Low Energy. Exclusion criteria included age (younger than 18 or older than 40), impaired color discrimination, use of psychoactive substances (e.g., psychiatric medications), and current neurological or psychiatric illness. Subjects were paid 40 Israeli Shekels (ILS) per day for participation and 0.25 ILS for each coin they collected in the experimental task, which together added up to an average sum of 964 ±42 ILS over the entire duration of the study. Subjects who missed two sessions of the experiment or who displayed patterns of making random choices were automatically excluded from the study. Random choosing was indicated by chance-level performance or reaction times below 1000 ms, which our previous experience [32] suggested is consistent with inattentive performance. Experimental design To test for value and preference learning, we had subjects perform a trial-and-error learning task over a period of 10 days. On each trial, subjects chose from one of two available images, and then collected a coin reward with a probability associated with the chosen image. Each game included 48 such learning trials involving a set of 3 images with reward probabilities of either {0, .33,.66} or {.33, .66 and 1}. These probabilities were never revealed to the subjects. Subjects were only instructed that each image was associated with a fixed probability of reward. Subjects played four games a day, two in a morning session and another two in an evening session. Over a total of 20 sessions, subjects learned about 60 unique images, each appearing in 64 learning trials over two sessions. To assess whether concurrent or cumulative diversity promotes absolute value learning, we tested subjects on four experimental conditions involving either low or high levels of each type of diversity. Task conditions were randomly ordered across days in order to avoid confounds related to fatigue or gradual improvement in learning strategy. Images learned in low cumulative diversity conditions were learned over the span of two consecutive days. 
To satisfy the constraints of high cumulative diversity concerning which images are pitted against which, high cumulative diversity conditions spanned three days (see below). All four conditions yielded the same expected payout, since the average reward probability associated with images within each condition was .5. To enhance the distinction between absolute values and relative preferences, we had images with the same absolute value (i.e., equal reward probability) learned against other images with mostly lower reward probabilities (i.e., in games where the probabilities were {0, .33,.66}; low reward context) or mostly higher reward probabilities (i.e., in games where the probabilities were {.33, .66 and 1}; high reward context). Within each day, an equal number of images were learned in the high and low reward contexts. Concurrent diversity Low: In these conditions, the two games within each learning session were independent of one another, each involving a distinct set of three images, with each trial randomly pairing two of the three images. Thus, the number of images subjects had to concurrently track in these conditions was limited to three. High: Each session involved a set of six images, all of which were encountered in both games. Consequently, in these conditions subjects had to concurrently track six images. To equalize low and high concurrent diversity conditions in terms of the number of images each image was pitted against within a game (two), as well as in terms of the total number of pairs of images between which subjects chose within each session, the six images formed only six different pairs. To enhance the impact of high concurrent diversity, unlike in the low concurrent diversity condition, images that were pitted against each other never had a common image that they were both pitted against (Table 1). Cumulative diversity Low: Every image was pitted against the same two other images in two consecutive learning sessions. Thus, in total, subjects chose between each pair of images 32 times. In these conditions, subjects learned about twelve images in the span of two days. High: Every image was pitted against two different pairs of images in two different learning sessions (i.e., against a total of four other images). Thus, over the same number of trials involving the same number of images, subjects encountered twice as many image pairs compared to the low cumulative diversity condition. Correspondingly, subjects chose between each pair of images 16 times, half as much as under low cumulative diversity. To ensure that the opportunity to learn relative preference was not hindered by a change in reward context midway through learning, reward context was always the same (i.e., either high or low) in both learning sessions of a given image. Implementing these criteria made it impossible to have subjects learn about twelve images in the span of two days, and thus, we had subjects learn about 18 images across the span of 3 days (Table 2). Furthermore, pairing each image with different images in two sessions meant that, for some stimuli, its two sessions could not be consecutive. Thus, 58.33% of high cumulative diversity were learned over the course of two days. This extension of the learning period conferred a small benefit to accuracy in testing trials, as shown by a logistic regression on the number of days learning spanned (log odds accuracy improvement = .12 CI = [.05 .19]). 
Reassuringly, this incidental effect ran counter to the overall effect of high cumulative diversity, which was to impair testing performance. Thus, it did not change the interpretation of the main results. To assess the formation of absolute values, we had subjects choose between images about which they had learned in two previous sessions ('testing' trials). Through the entire course of learning, such testing trials were interleaved with the learning trials (every 3rd trial, 24 testing trials total). Reward feedback was not shown on testing trials, but subjects were informed in advance that these trials would be rewarded with the same probabilities with which they were rewarded during learning. This reward was factored into the final bonus subjects received for their performance. Testing trials always presented a choice between images learned within the same condition. However, some of these images subjects had already chosen between during learning ('Learned Pair Trials'; 25% of testing trials), whereas other trials presented a choice between images about which subjects learned in separate games ('Novel Pair trials'; 75% of testing trials). Half of novel-pair trials were designed to assess how well subjects performed in general. These trials thus presented a choice between two images one which of was preferable to the other both in terms of reward probability and in terms of how it ranked in reward probability compared to the other images it was learned with. The other half of novel-pair trials were designed to distinguish between value and preference learning. Thus, half of these (25% of all novel-pair trials) presented a choice between images with the same expected value but that ranked differently in their original learning context, whereas the other half presented a choice between images with the same relative rank but different values. Pairs that satisfied these criteria were selected in random. Within every session, we tested only the latest condition for which at least two learning sessions were completed. This meant that in most days only one condition was tested. However, equalizing the total number of testing trials across conditions required that there be on average 2 days (range 1-3) in which the morning and evening sessions tested different learning conditions. The variation between subjects emerged because of Sabbath observance, which resulted in some subjects completing only the morning session on Fridays (since sundown prevented the completion of the second session) and either subsequently continuing the following evening (Saturday evening) or the following day (Sunday morning). In the morning session of the final day of the experiment (day 11), to ensure that there were sufficient testing trials of images learned on days 9 and 10, subjects were presented with testing trials from the last learned condition. In the afternoon session, subjects were presented with testing trials that spanned across conditions. However, not all subjects performed these afternoon trials, and some performed them incompletely. Therefore, the data from the afternoon trials were not included in the main analyses. Mobile platform To test learning across multiple well-separated sessions, we modified an app developed by Eldar et al [32] for Android smartphones using the Android Studio programming environment (Google, Mountain View, CA). The app asks users to perform experimental tasks according to a predetermined schedule. 
Additional features of the app not relevant for the present work include probing of changes in subjects' mental state, including regular mood self-report questionnaires and logging of life events and activities, and recording of electroencephalographic (EEG) and heart rate signals derived from wearable sensors connected using Bluetooth. All behavioral and physiological data are saved locally on the phone as SQLite databases (The SQLite Consortium), which are regularly uploaded via the phone's data connection to a dedicated cloud.

Table 2. Example arrangement of images across days in conditions of high cumulative diversity. Each number corresponds to an image subjects learned about; the arrangement ensured that every image would be pitted against four distinct images across two learning sessions. Brackets group images that were pitted against each other. A) Low concurrent: each game consisted of three images, each pitted against the two other images. B) High concurrent: each game consisted of six images, with each image pitted against two other images.

Daily schedule

Subjects first visited the lab to receive instructions, test the app on their phones, and try out the experimental task (see Initial lab visit section below). Starting from the next day, subjects performed two experimental sessions a day, one in the morning and one in the evening, over a period of 10 consecutive days followed by a rest day and a final day of testing. Each session began with a 5-minute heart rate measurement during which subjects were asked to remain seated. Following this, subjects put on the EEG sensor and played two games of the experimental task. The app allowed subjects to perform the morning session from either 6 AM, 7 AM, 8 AM, or 9 AM, as best fitted the subject's daily schedule, and the evening session from 8 hours following this time. Subjects were allowed to adjust the timing of the sessions according to their daily schedule but were required to ensure a gap of at least 6 hours between successive sessions. On average, subjects performed the morning session at 8:56 AM (SD ± 40 min) and the evening session at 6:12 PM (SD ± 32 min). Subjects who were religiously observant were allowed to suspend the experiment due to holiday observance as long as they resumed it the following day. Twenty-five out of twenty-seven subjects took a holiday break, but only six of those subjects took the break during learning about specific images (they had only completed one of two learning sessions with those images). We evaluated whether these breaks resulted in an accuracy drop-off for these images relative to other images within the same condition for which learning was uninterrupted, but found no significant drop-off (mean with break = .87 ± .04, mean without break = .84 ± .03; p = .196 bootstrap test). As part of a larger data collection effort, subjects were also asked to report their mood prior to playing each game as well as twice more throughout the day.

Materials

The experiment involved 60 images, which were abstract patterns collected from various internet sources. To ensure that images were sufficiently distinguishable from one another, we ran a structural similarity analysis that assesses the visual impact of three characteristics of images: luminance, contrast, and structure [48]. We considered as sufficiently distinguishable images with a similarity index of at most .6, and this was verified by visual inspection.
Statistical analyses

All non-modeling statistical analyses were performed in R using RStudio. Statistical tests were carried out using the bootstrap method with the "simpleboot" package. Correction for multiple comparisons across the two types of diversity was carried out using the Benjamini-Hochberg procedure.

Regression analyses

Regression analyses were performed using the "brms" package, which performs approximate Bayesian inference using Hamiltonian Monte Carlo sampling. We used default priors and sampled two chains of 10000 samples each. 1000 samples per chain were used as warm-up. To ensure convergence, we required an effective sample size of at least 10000 and an R-hat statistic of at most 1.01 for all regression coefficients. To evaluate an effect of interest, we report the median of the posterior samples of the relevant regression coefficient and their 95% high density interval (HDI). A reliable relationship is said to exist between a predictor and an outcome if the 95% HDI excludes zero.

Examining rank bias. To assess whether subjects preferred images ranked higher in the original learning context, we calculated each subject's ranking of each image based on how many times they chose the image relative to the other images it was pitted against (best, second-best, or worst). If the difference in choice frequency between two images that were pitted against each other was below 10%, indicating no established ranking between them, then the pair was excluded from the analysis (8% of trials). These rankings were then averaged across the two learning sessions in which an image was learned about to generate an overall rank for the image. Finally, we tested for a ranking bias by examining subjects' choices between differently ranked but similarly rewarded images (defined as less than a 10% difference in percent rewarded outcomes). Bias was defined as a tendency to choose the image that had been ranked higher during learning.

Examining the influence of other options' outcomes. To determine whether training diversity modulated the influence on choice of the past outcomes of non-presently relevant images, we used a Bayesian logistic mixed model predicting subjects' choices. The predictors are described below for a choice between image A and image B:

1. Own — the difference between the two images in the proportion of rewarded outcomes obtained when each image was chosen.
2. Current alternative — the difference between the two images in the proportion of rewarded outcomes obtained when each image was rejected in favor of the other image in the current pair.
3. Other — the difference between the two images in the proportion of rewarded outcomes obtained when each image was rejected in favor of any other image.
4. Cumulative — cumulative diversity condition.
5. Concurrent — concurrent diversity condition.

All two-way interactions between each type of reward history and each type of diversity condition were also included: Concurrent×Own, Concurrent×Current Alternative, Concurrent×Other, Cumulative×Own, Cumulative×Current Alternative, Cumulative×Other.

To provide a concrete example, consider the following five-trial segment, for which we calculate the corresponding value of each regressor:

Trial 1: A vs B, A is selected and rewarded
Trial 2: B vs C, B is selected and is not rewarded
Trial 3: B vs C, C is selected and rewarded
Trial 4: A vs C, A is selected and is not rewarded
Current trial, Trial 5: A vs C

"Own" is calculated as the proportion of times A was rewarded when A was chosen (Trials 1 and 4, for a total of A_own = 1/2) minus the proportion of times image C was rewarded when chosen (Trial 3; C_own = 1/1). Thus, Own = 1/2 − 1 = −1/2. "Current alternative" is calculated as the proportion of times the subject was rewarded when they rejected image A in favor of image C (no such trial exists, so A_current alternative is set to the null value of 1/2), minus the proportion of times the subject was rewarded when they rejected image C in favor of image A (Trial 4; C_current alternative = 0). Thus, Current alternative = 1/2 − 0 = 1/2. "Other" is calculated as the proportion of times the subject was rewarded when they rejected image A in favor of any image other than C (no such trials exist, so A_other = 1/2) minus the proportion of times the subject was rewarded when they rejected image C in favor of any image other than A. In our case, image C was rejected in favor of image B (Trial 2) and image B was not rewarded (0/1), so Other = 1/2 − 0 = 1/2. The probability of choosing image A over image B was thus modelled as a logistic function of the weighted sum of these predictors, P(choose A) = σ(Σ_k b_k x_k), where σ represents the logistic function, x_k are the predictors listed above, and b_k are their regression weights. To account for between-subject variation, we included random intercepts as well as random slopes for all predictors.
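As a cross-check on the worked example above, the following short sketch computes the three reward-history regressors from a list of trials; the function and variable names are illustrative rather than taken from the source, and the 1/2 null value for undefined proportions follows the text.

```python
def history_regressors(history, a, c):
    """history: list of (option_i, option_j, chosen, rewarded) tuples; a, c: the current pair."""
    def proportion(outcomes):
        return sum(outcomes) / len(outcomes) if outcomes else 0.5   # null value when undefined

    def own(img):
        return proportion([r for (_, _, ch, r) in history if ch == img])

    def rejected_for(img, favored):
        # outcomes of trials where `img` was offered but `favored` was chosen instead
        return [r for (i, j, ch, r) in history
                if img in (i, j) and ch == favored and ch != img]

    def rejected_for_any_other(img, excluded):
        return [r for (i, j, ch, r) in history
                if img in (i, j) and ch != img and ch != excluded]

    return {
        "own": own(a) - own(c),
        "current_alternative": proportion(rejected_for(a, c)) - proportion(rejected_for(c, a)),
        "other": proportion(rejected_for_any_other(a, c)) - proportion(rejected_for_any_other(c, a)),
    }

history = [("A", "B", "A", 1),   # Trial 1
           ("B", "C", "B", 0),   # Trial 2
           ("B", "C", "C", 1),   # Trial 3
           ("A", "C", "A", 0)]   # Trial 4
print(history_regressors(history, "A", "C"))
# {'own': -0.5, 'current_alternative': 0.5, 'other': 0.5}
```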
Computational formalization

Whereas the main components of the computational model are described in the main text, here we detail precisely how preference and value learning were influenced by diversity conditions. On each trial, a set of per-subject β preference-baseline and β value-baseline parameters were modulated by the following main-effect and interaction parameters. The diversity parameters are ratios that represent the impact of training diversity on the relative influence of prior outcomes on choices in learning compared to testing trials; the more either ratio diverges from 1, the greater the impact diversity has on the relative influence of prior outcomes on choices in learning compared to testing trials. β value/preference × learning/testing is a ratio that represents how the relative influence of value and preference learning on choice differs between learning and testing trials. The more this ratio diverges from one, the more preference and value learning are differentiated, in the sense that one algorithm influences choices more during learning trials and the other algorithm influences choices more during testing trials. Using these main-effect and interaction parameters, β preference and β value can be computed for each trial type; for example, in a learning trial of low concurrent but high cumulative diversity, the inverse temperatures for the value and preference algorithms are obtained by multiplying the corresponding baseline parameters by the relevant diversity and trial-type ratios.

Alternative models

To identify the computations that guided subjects' choices, we compared the model presented in the main text to several variations of this model, in terms of how well each fitted subjects' choices. These included a model that only learns absolute values (β preference = 0), a model that only learns relative preferences (β value = 0), a model with leak parameters (γ preference and γ value) that vary across conditions (BIC +240), and a non-Bayesian learning model that, instead of beta distributions, learns expected values and relative preferences based on a Rescorla-Wagner update rule with fixed learning rates. This latter model includes all main-effect and interaction parameters included in our winning model, except that learning rates α value and α preference replaced the leak parameters γ value and γ preference ([49]; BIC +2370 relative to the winning model). To test for a gradual shift towards absolute-value encoding as a function of time (see Testing alternative interpretations), we implemented a model that scales the inverse temperature parameter for value learning, β value, by e^(τ·trial(i,t)), and the inverse temperature parameter for preference learning, β preference, by e^(−τ·trial(i,t)). Here τ is a free parameter controlling the degree of the shift, as a function of the number of trials that elapsed since image i was chosen at trial t. These inverse temperatures apply specifically to the outcome obtained on that trial.
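A one-line illustration of the gradual-shift scaling just described; the parameter values are arbitrary, and the bookkeeping of how many trials have elapsed since image i was last chosen is simplified.

```python
import math

def shifted_betas(beta_value, beta_pref, tau, n_trials_elapsed):
    """Scale the value inverse temperature up and the preference one down as a
    function of the number of elapsed trials (a sketch of the gradual-shift model)."""
    scale = math.exp(tau * n_trials_elapsed)
    return beta_value * scale, beta_pref / scale

print(shifted_betas(4.0, 2.0, tau=0.01, n_trials_elapsed=30))   # (~5.40, ~1.48)
```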
To additionally determine whether our data might be better accounted for by prior work suggesting that humans gradually shift towards value learning in similar RL experiments [39], we implemented an alternative model which assumes that the shift towards absolute-value encoding grows with the number of trials that elapsed since the beginning of the experiment. This model, too, did not fit the data as well as the original model, which did not assume a continuous gradual shift (ΔBIC +353), likely because subjects were made aware from the beginning of the experiment that they would need to generalize learned values. Finally, we tested a beta-binomial model which allowed for asymmetries in the learning process between rewarded and non-rewarded outcomes. This model indeed improved the fit to the data but did not alter any of the main findings (S3 Fig).

Model fitting

We fit model parameters to subjects' choices using an iterative hierarchical importance sampling approach [32] implemented in MATLAB. We first used 2.5 × 10^5 random settings of the parameters from predefined group-level distributions to compute the likelihood of observing subjects' choices given each setting. We approximated posterior estimates of the group-level prior distributions for each of our parameters by resampling the parameter values with likelihoods as weights, and then re-fit the data based on the updated priors. These steps were repeated iteratively until model evidence ceased to increase. To derive the best-fitting parameters for each individual subject, we computed a weighted mean of the final batch of parameter settings, in which each setting was weighted by the likelihood it assigned to the individual subject's decisions.

Parameter initialization

Across both models, β preference-baseline and β value-baseline were initialized by sampling from a gamma distribution (k = 1, θ = 1), leak parameters (γ preference and γ value) were initialized by sampling from a beta distribution (α = 9, β = 1), and all other parameters were initialized by sampling from a lognormal distribution (μ = 0, σ = 1).

Model comparison

For each model we estimated the optimal parameters by likelihood maximization. We then applied the Bayesian Information Criterion (BIC) to compare the goodness of fit and parsimony of each model. A so-called 'integrative BIC' [50] can be computed as BIC = −2 ln L + k ln n, where L is the evidence in favor of each model, estimated as the mean likelihood of the model given random parameter settings drawn from the fitted group-level priors, k is the number of fitted group-level parameters, and n is the number of subject choices used to compute the likelihood. This method has shown high reliability and efficacy in detecting differences within and between subjects [50][51][52][53]. We validated the model comparison procedure by simulating data using each model and using the model comparison procedure to recover the correct model (S3 Table). To validate the BIC model comparison results, we also performed model comparison using the Akaike information criterion (AIC) (S5 Table).
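A minimal sketch of the integrative BIC comparison described above; the likelihood aggregation is simplified, and the function and variable names are illustrative rather than taken from the source.

```python
import numpy as np

def integrative_bic(loglik_per_draw, n_group_params, n_choices):
    """BIC = -2*ln(L) + k*ln(n), with L estimated as the mean likelihood of the model
    over random parameter settings drawn from the fitted group-level priors.
    `loglik_per_draw` holds the total log-likelihood of all choices for each draw."""
    # average in likelihood space, computed stably via log-sum-exp
    log_mean_lik = np.logaddexp.reduce(loglik_per_draw) - np.log(len(loglik_per_draw))
    return -2.0 * log_mean_lik + n_group_params * np.log(n_choices)

# Hypothetical comparison of two fitted models (numbers are placeholders):
# bic_full = integrative_bic(draws_full, k_full, n_choices)
# bic_value_only = integrative_bic(draws_value_only, k_value_only, n_choices)
# delta_bic = bic_value_only - bic_full   # positive values favor the full model
```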
Statistical tests of parameter fits

Statistical significance of each interaction parameter was measured using a two-tailed permutation test. First, we calculated the mean of the log fit across the 27 subjects to generate a summary statistic of how much the parameter deviates from 1. A mean of zero indicates that the parameter of interest does not significantly scale the inverse temperature in either direction. A mean significantly different from zero indicates the condition modulates the inverse temperature parameters in favor of either absolute values or relative preferences. Thus, for each parameter of interest, we generated a null distribution composed of 1000 random permutations of the data, randomly shuffling the condition of interest (e.g., whether the subject is in low or high concurrent diversity, which corresponds to inverting the impact of the parameter). We then applied the full model-fitting procedure to each permuted data set and computed the p value by comparing the actual parameter fit to the distribution of parameter fits for the permuted data. We validated our parameter fits by simulating data using the best-fitting parameters for each subject and then recovering those parameters (S4 Table).

Falsification of alternative models

In addition to model comparison, we examined whether alternative models generate predictions that are falsifiable by the data. Simulations from a preference-only model could not account for subjects' ability to perform generalization at a high level across conditions (posterior model prediction: mean = .66 ± .1 vs real data: mean = .83 ± .2), whereas a value-only model could not account for the difference in accuracy between learned-pair and novel-pair trials (MeanΔ concurrent low = 0 ± 1, MeanΔ concurrent high = 0 ± .1; p = .55 bootstrap). Furthermore, such a model was unable to account for the effects of other images' outcomes on image choices. Specifically, the effect of other images' outcomes was not significant (CI = [−5.31, 4.84]), nor was it modulated by concurrent diversity (CI = [−4.45, 5.76]). Thus, neither value learning nor preference learning alone could account for subjects' behavior.

Dryad DOI: https://doi.org/10.5061/dryad.1rn8pk0xr [46]

Supporting information

S1 Table. Validation of Parameter Recovery. We validated our parameter fits by simulating data using the best-fitting parameters for each subject and then recovering those parameters. The correlation between simulated and recovered parameters was at least .74 for all parameters of interest that capture the effects of the experimental conditions, and at least .51 for all other parameters. To determine whether the model successfully captured individual differences in our experiment, we examined how parameter fits correlated with model-agnostic measures of behavior. As expected, we found that β value was significantly correlated with generalization performance (r = .7), while β preference correlated with our measure of rank bias (r = .5). We then validated the best-fitting model thoroughly by simulating, for each subject, 1000 data sets using their best-fitting parameters and analyzing the simulated data in the same fashion in which we analyzed the real data. This procedure showed the model uniquely accounted for all of our behavioral findings (Fig 1). To account for previous findings of asymmetries in learning from positive versus negative reward prediction errors [54], we implemented a beta-binomial model with asymmetric update rates. This modification did not alter any of the main findings.
Namely, preference learning manifested to a greater extent in conditions of low concurrent diversity (low concurrent: β preference = 3.07 ± .21; high concurrent: β preference = 1.76 ± .12; p < .001 permutation test), whereas value learning manifested to a greater extent in conditions of high concurrent diversity (low concurrent: β value = 4.04 ± .18; high concurrent: β value = 4.54 ± .14; p < .001 permutation test). Furthermore, as in our winning model, cumulative diversity inhibited value learning (low cumulative: β value = 4.99 ± .3; high cumulative: β value = 3.78 ± .3; p < .001 permutation test) and had no significant impact on preference learning (low cumulative: β preference = 1.60 ± .12; high cumulative: β preference = 1.87 ± .14; p = .11 permutation test).
The Comparative Performance of Phytochemicals, Green Synthesised Silver Nanoparticles, and Green Synthesised Copper Nanoparticles-Loaded Textiles to Avoid Nosocomial Infections In the current study, a sustainable approach was adopted for the green synthesis of silver nanoparticles, green synthesis of copper nanoparticles, and the investigation of the phytochemical and biological screening of bark, leaves, and fruits of Ehretia acuminata (belongs to the family Boraginaceae). Subsequently, the prepared nanoparticles and extracted phytochemicals were loaded on cotton fibres. Surface morphology, size, and the presence of antimicrobial agents (phytochemicals and particles) were analysed by scanning electron microscopy, dynamic light scattering, and energy-dispersive X-ray spectroscopy. The functional groups and the presence of particles (copper and silver) were found by FTIR and XRD analyses. The coated cotton fibres were further investigated for antibacterial (qualitative and quantitative), antiviral, and antifungal analysis. The study revealed that the herb-encapsulated nanoparticles can be used in numerous applications in the field of medical textiles. Furthermore, the utility of hygienic and pathogenic developed cotton bandages was analysed for the comfort properties regarding air permeability and water vapour permeability. Finally, the durability of the coating was confirmed by measuring the antibacterial properties after severe washing. Introduction Healthcare-associated infections (HAIs), also called hospital-acquired infections (HAIs) or nosocomial infections (NI), develop in patients during hospitalisation and are a continuing concern within hospitals. The risk of these infections is on the rise despite the efforts to control them. These infections may cause disability or preventable death in humans [1,2]. Contaminated textiles (containing bacteria and viruses) might be a major source of crossinfection. The most common textile contributions in hospitals (wards, operation theatres, rooms, ICUs, and surgical areas) include bed sheets, pillow covers, outlets, surgical drapes, covering, panels, curtains, patient gowns, doctors' gowns and socks, etc. [3]. The textiles are composed of natural fibres (which contain voids in structure, porosity, moisture, and natural contents) and are an excellent place for the growth, replication, and survival of pathogens [4]. Typically, pathogenic microorganisms such as Escherichia coli, Staphylococcus Aureus, vancomycin-resistant Enterococci, and Clostridium have been found on hospital textile surfaces [5,6]. The presence of these microbes on hospital textiles contributes to the spread of infections in the hospitalised patients and the staff. Additionally, bacterial strains (after Ethicillin, now Methicillin-resistant Staphylococcus Aureus) and virulent strains (the new SARS-CoV-2 strain) have developed resistance to last-resort drugs [7]. The rate of The cotton fabric used in hospital areas (bleached, plane weave, areal density of 150 g per meter square) was obtained from Arif Textile Mills, private limited, Faisalabad, Pakistan. The fruit, leaves, and bark of E. acuminata were obtained from Bagh-e-Jinnah, Lahore, and then submitted to the Botany Department, Government College of Science, Lahore, Pakistan. Tannic acid, sodium hydroxide, binder (LATEX), and silver nitrate were supplied via Sigma Aldrich. DI water was collected from the reagent water system (Milli Q SP, Millipore, Milford, MA, USA) and was utilised during the process of synthesis. 
All the received chemicals were used without any further purification. Copper (II) chloride (CuCl2·2H2O) with 98% purity was obtained from Riedel-de Haen, Germany, and L-ascorbic acid with 99% purity was purchased from Merck, Germany. The chemicals were of analytical grade and used with no further purification or chemical treatment. L-ascorbic acid was used both as a capping and a reducing agent. LB (Luria-Bertani) agar-based broth and Lennox broth were provided by Merck for antibacterial testing. All the chemical solutions were prepared freshly for use during all the chemical reactions. The selection of the reducing agent depends upon the ions which are going to be reduced. For the silver nanoparticles, we used tannic acid, which acts as both reducing and capping agent. A strong stabilizing agent is required to limit the formation of complex ions of silver during the synthesis process. Secondly, for the reduction of copper ions (due to the transition ionic states of copper, +1 and +2), a strong reducing agent is required; L-ascorbic acid is a stronger reducing agent compared to tannic acid. Both silver and copper have different surface potentials, so we used different organic materials during their synthesis processes [9].

Silver Green Synthesis

The green synthesis of silver nanoparticles was executed by one-pot mixing of 0.5 M silver salt (AgNO3), 1 mM tannic acid (TA), and 0.3 M sodium hydroxide (NaOH) in the ambient environment. The whole assembly was continuously stirred with a magnetic stirrer to homogenize the solution. The speed of stirring was kept constant (150 rpm), and the temperature was maintained at 45 °C for 25 min. Subsequently, the solution was cooled down to room temperature and the prepared nanoparticles were centrifuged for purification. The parameters were set at 900 rpm for 20 min, and the collected material was washed three times using deionised water. The schematic for the preparation of silver nanoparticles is shown in Figure 1.

Copper Green Synthesis

Solution one was prepared by dissolving the copper salt (CuCl2·2H2O) (0.03 M) in 200 mL of DI water. Solution two was prepared from L-ascorbic acid (1.0 M solution) in DI water, separately.
Then, 100 mL of the copper chloride solution was added into airtight flasks and heated at 100 °C continuously using a water bath shaker (mechanically/electrically heated), followed by drop-wise addition of 2 molar L-ascorbic acid solution into each flask. The solution was mixed and heated continuously until the colour changed to yellowish, orange, light brown, and finally, chocolate brown, as depicted in Figure 2. The completion time of the whole process was 20 h. The final product was then stored for 12 weeks, with no occurrence of dispersion or sedimentation, as checked without any magnification.

Extraction of Phytochemicals

All parts of the plant were washed properly with fresh water to get rid of undesirable substances and kept at room temperature for drying. All dried parts were blended in an electric grinder to convert the long parts into staple fibres. Subsequently, these ground flakes were further refined to micro/nano-scale by using the ball-milling process. High-energy ball-milling (planetary, Fritsch Pulverisette 7, Weimar, Germany) was used to carry out dry pulverisation. Zirconium balls (10 mm) and a sintered corundum container (80 mL) were used for dry milling up to 60 min, while the ball-to-material ratio (BMR) was kept at 10:1 with 700 rpm speed. Then, 400 g of powder of each part, such as leaves, bark, and fruit, was soaked with 1200 mL of dichloromethane, separately, while 250 g of powder of leaves, bark, and fruit was macerated with 750 mL of methanol for 14 days, with frequent shaking at ordinary temperature. After maceration, it was filtered using Whatman No. 1 filter paper, and the obtained filtrate was subjected to a rotary evaporator to obtain crude extract of the plant material. The resulting crude extracts were stored in vials and placed in a refrigerator at 4 °C. The schematic for the extraction and milling process is shown in Figure 3.
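For concreteness, the maceration recipe above corresponds to a fixed solvent-to-solid ratio, and the extractive yields reported later can be expressed with the conventional mass-ratio definition; the formula and the example figures below are illustrative assumptions rather than values stated explicitly in the source.

```python
# Illustrative arithmetic for the maceration recipe and extractive yield.
# The yield definition (crude extract mass / dry powder mass x 100) is the
# conventional one; it is assumed here rather than quoted from the source.
dcm_ratio = 1200 / 400      # mL of dichloromethane per g of dry powder -> 3.0
meoh_ratio = 750 / 250      # mL of methanol per g of dry powder -> 3.0

def extractive_yield_percent(crude_extract_g, dry_powder_g):
    return 100.0 * crude_extract_g / dry_powder_g

# Example: a ~6.5% methanolic yield (as reported later for fruit) would correspond
# to roughly 16 g of crude extract recovered from the 250 g of macerated powder.
print(extractive_yield_percent(16.25, 250))   # 6.5
```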
Pre-Treatment of Cotton Fabric

The cotton fabric was pre-treated before the deposition of the prepared silver, copper, and phytochemical particles on the cotton fibres. Pre-treatment was carried out with citric acid. A solution of 20 g/L of citric acid was made, and the cotton fabric was dipped in it at 90 °C for 2 h, then washed and dried at 90 °C for 50 min. The purpose of the pre-treatment of cotton fabric with citric acid is to enhance the number of carboxyl groups (-COOH) on the surface of the cotton fabric. Moreover, the reaction of citric acid with cotton can produce ester bonds, the amount of which initially increases and then tends to stabilize. In a similar study, a shift in the amount of free carboxyl groups was noticed on the surface of the cotton fabric. The concentration of free carboxyl content in the pristine cotton fabric was around 17 mmol/kg, which is mainly due to the by-products of the oxidative bleaching of the cotton fabric. The amount of free carboxyl groups in the cotton increased (from 16.67 to 727 mmol/kg) with the addition of CA. When the concentration of CA reached a particular point, the fabric surface became saturated, and the amount of free carboxyl groups reached its peak at around 1070 mmol/kg. The variations in the hydroxyl level on the surface of the cotton indicate the saturation of the samples [20].
Application of Prepared Particles and Phytochemicals on Cotton Fabric

Three different concentrations (0.25 g, 0.5 g, 1 g) of silver particles, three different concentrations (0.25 g, 0.5 g, 1 g) of copper particles, and three different concentrations (0.25 g, 0.5 g, 1 g) of phytochemicals were dissolved in 200 mL of water. Then, 0.5 g of binder was dissolved in each solution. The pH was maintained at 5-6 with the help of citric acid. A 100% wet pick-up was maintained for the treatments. In each solution, the cotton fabric was soaked for 30 min, followed by padding and drying at 90 °C for 20 min. The schematic illustration for the coating of the different types of antimicrobial agents on cotton fabric is shown in Figure 4. Moreover, the application process of the prepared particles and phytochemicals on the cotton fabric is shown in Figure 5. The design of experiments for the developed samples is presented in Table 1.

Scanning electron microscopy (SEM, FEI Quanta 50, Hertfordshire, UK) was used for the morphological investigation of the silver nanoparticles, copper nanoparticles, and phytochemicals deposited on the surface of the cotton fabric. XRD analysis was performed using PANalytical X'Pert PRO equipment (Malvern, UK). A dilute dispersion of each particle type was prepared using DI water in a beaker, followed by ultrasonication with a Bandelin ultrasonic probe, before characterisation of the particle distribution using the Malvern Zetasizer.

FTIR Analysis of Cotton Fabrics

Infrared spectra for both untreated and treated cotton fabrics were recorded using a Nexus Nicolet 470 spectrometer equipped with an ATR (attenuated total reflection) Pike-Miracle accessory.

Antimicrobial Testing

Qualitative and quantitative measurements were performed for testing the antibacterial activity of the cotton fabrics treated with cuprous oxide particles.
Qualitative Test (Zone of Inhibition Measurement)

Bacterial Strain Preparation

Gram-positive and Gram-negative bacterial strains, viz. Staphylococcus aureus (CCM-3953) and Escherichia coli (CCM-3954), were obtained for this study from the Microorganism Czech Collection Lab (Masaryk University, Brno, Czech Republic). Fresh bacterial suspensions were prepared by growing an overnight single colony in nutrient broth at 37 °C. Before antibacterial testing, the turbidity of the sample was adjusted to an optical density of 0.1 at 600 nm (OD600). The agar plates were prepared freshly before starting the antibacterial tests. A sterilised cotton swab was dipped into the culture suspension and the cells were then spread homogeneously on the agar plates. After preparation, these plates were utilised for antibacterial testing [22].

Determining the Zone of Inhibition

The cotton fabrics coated with CuO particles (6 × 6 mm²) were placed directly on the inoculated agar plates. The detailed procedures are described in [23]. The untreated cotton fabric used as a control was tested in parallel. The inoculated agar plates and samples were then placed at 37 °C for 24 h. The zone of inhibition was measured as the total diameter (in mm) of the fabric coated with CuO particles together with the surrounding zone in which bacterial growth was inhibited. All experiments were conducted in triplicate and the mean value was calculated.

Quantitative Test (Reduction Factor)

Quantitative measurements were performed using the standard AATCC test method (100-2004). It is a quantitative method based on the reduction factor, i.e., the percentage reduction of the inoculated bacterial concentration due to the sample. It yields the number of surviving bacterial colonies (CFU), from which the degree of inhibition (%) is calculated; it is therefore necessary to compare treated and untreated (standardised) samples. First, a sample cut to dimensions of 18 × 18 mm² was placed for 30 min in a sterile container. Then, the respective bacterial strain (100 µL) at a concentration of 10^5 CFU/mL was applied for each test. Incubation was performed in a thermostat for 24 h at 37 °C, followed by the addition of physiological solution (10 mL). After vortexing, 1 mL was taken with a pipette and inoculated on a Petri dish using blood agar (each sample was inoculated in triplicate). The result is calculated as the sum of the colony counts of all three dishes [24,25].

Antifungal Activity Assessments

The AATCC 100-2004 standard testing protocol was followed to evaluate the antifungal activity of the treated fabric samples. For this purpose, the fungal species Candida albicans was used. Equation (1) was employed to calculate the antifungal activity in terms of the percentage reduction:

Reduction (%) = ((A − B)/A) × 100, (1)

where A and B represent the number of spores on the control (untreated) and treated cotton samples, respectively.

Antiviral Activity

Behrens and Karber's method was used for the calculation of the reduction in virus titres from the starting viral titre of infectivity (10^7). Vero-E6 cultures were grown in Dulbecco's Modified Eagle Medium (DMEM), which included 9% foetal bovine serum (FBS) and 2% penicillin-streptomycin-amphotericin (PSA). The coronavirus was used to infect Vero-E6 cultures at a ratio of 1:3 in polyethylene pots to develop virus stocks after one day. The virucidal effect of the produced viral stocks was studied under a microscope. The cell line was combined with 10% FBS and frozen at −90 °C.
The supernatant was filtered by moderate centrifugation at 5 to 7 • C at 3700 rpm for 30 min. In the experiment, the supernatant was employed as viral stocks, and all macroscopic residue was removed. To evaluate virus titre, Vero-E6 cell lines were cells that were plated at a density of 2 × 105 in 96-well plates and incubated at standard conditions (24 h at 37 • C) in 6% CO 2 . From 10 1 through 10 8 , each specimen was diluted 10 times. All dilutions were injected into cell lines and incubated at 6% CO 2 for 3 days. Behrens and Karber's protocol was followed for the evaluation of coronavirus titre in cultured cell lines. After that, the control and treated fabric samples of dimensions 20 × 20 mm 2 were taken and placed in the vials. Viral loads (100 µL) were run through the control and treated fabrics, and viral loads recovered in containers were disinfected using the filter. The coronavirus was diluted from 10 1 to 10 8 . All dilutions were seeded into Vero-E6 cell lines and incubated at 37 • C in 6% CO 2 for 3 days. Behrens and Karber's approach was used to calculate the titres of the coronavirus in the cultured cell lines. Evaluation of Comfort Properties Air permeability can be defined as the perpendicular airflow rate to a specific known area. The airflow is maintained under the prescribed differential of air pressure between two material surfaces. The SDL air permeability tester (Per ISO 9237 standard) was used for test performance. The air pressure difference of 100 Pa was set between two substrate surfaces. Moreover, a TH4 bending rigidity tester was used for the stiffness measurement of both untreated and treated cotton fabric. Durability Durability is measured to check the stability of fabrics in service. For this reason, washing of the coated fabrics was performed according to ISO standard (105-C01). All the samples of fabric were stirred using standard detergent having a 50:1 liquid ratio. Then, samples were stirred for 35 min (600 rpm speed) at 40 • C. Afterward, the fabrics were conditioned and dried in a standard atmosphere for 24 h. The durability was further confirmed with antimicrobial results. Phytochemicals' Screening Analysis The preliminary screening of phytochemicals extracted from each part of the plant (leaves, bark, and fruit) was carried out. The extraction indicated the presence of all active agents, and the results are shown in Table 2. The leaf extract contained phytochemicals such as flavonoids, saponins, sugars, phenols, steroids, glycosides, and tannins. In seed extract, only one phytochemical, alkaloids, was absent. Several substances have already been found and reported, including alkaloids, saponins, tannins, sterols, flavonoids, and triterpenoids. Flavonoids have a wide range of antifungal, antiviral, and antibacterial actions. Flavonoids also have antioxidative, cytotoxic, chemo-preventive, and antiproliferative properties. Flavonoids have a fundamental structure derived from the C15 body of flavones, and iso flavonoids have been discovered to be harmful to fungi. Thus, the leaf extract of M. champaca exhibited antimicrobial activity due to the presence of flavonoids and other phytochemicals. Extractive Yield of Leaves, Bark, and Fruit of E. acuminata The percentage extractions from each part of the plant (leaves, bark, and fruit) against two different solvents is shown below. It is clear that methanolic extraction of phytochemicals was higher as compared to the DCM solvent. 
It is also noticeable that the amount of phytochemicals was higher in the fruit (3.63% against DCM, and 6.5% methanol). FTIR Analysis Cotton fabric was pre-treated using citric acid to enhance its carboxyl moieties. Based on the assumption, more sites will be provided for silver and copper particles' attachment with a greater number of added carboxyl groups. Infrared spectra were then recorded using an FTIR spectrometer (Nicolet Nexus 470) equipped with an ATR Pike-Miracle accessory. Figure 6 shows the FTIR spectra obtained for untreated and treated cotton fabric. The broad peak observed at 3300 cm −1 corresponds to the stretching of the O-H bond. In contrast, the broad peak observed in the 2800-3000 cm −1 region is attributed to C-H stretching. Moreover, for the cotton grafted with citric acid, the absorption band for the carboxylic group appears clearly at 1732 cm −1 , which is related to the carboxylic acid form [12]. XRD Analysis The phase composition of deposited copper and silver nanoparticles was evaluated using XRD analysis (2θ range: 20-80° with a step degree of 0.02°). The phase purity of synthesised Ag nanoparticles can be obvious from the perfect indexing of all the diffraction peaks to the structure of silver, as shown in Figure 7a. As compared to the untreated fabric of cotton, four peaks appeared for silver particles at 2θ values of 77.5, 64.5, 44.3, and 38.1, thus attributed to diffraction planes (3 1 1), (2 2 0), (2 0 0), and (1 1 1), respectively, having a cubic structure as reported in the International Diffraction Centre Data (data number JCPDS 04-0783 card) [13,28]. Moreover, no distinctive peaks were seen for other impurities, such as AgO. XRD patterns for copper nanoparticles are presented in Figure 7b. The phase purity of copper particles can be clearly seen from the perfect indexing of all the diffraction peaks to the structure of copper. The appearance of copper diffraction peaks (2θ) at 74.2°, 59.5°, and 43.3° represents copper diffraction planes (2 2 0), (2 0 0), and (1 1 1), respectively [29]. The copper particles were examined for their crystalline structure from the appearance of sharp peaks. In contrast, the broadening of the peaks indicated the copper particles' formation at the nanoscale since no specific impurity peaks were detected, except the appearance of the Cu2O peak (2θ) at 38°, respectively [30]. Different approaches have been used to investigate and document the phytochemistry of E. acuminata (Family: Boraginaceae) [26]. The most important biologically active components include phenolic acids, polyphenolic compounds, and flavonoids (i.e., methyl rosmarinate, free caffeic acid, rosmarinic acid, luteolin-3-glucuronide, luteolin-7glucuronide, flavones, 6-hydroxyluteolin-7-glucoside, apigenin-7-glucoside, and apigenin-7-glucuronide) [27]. Both alcoholic and aqueous extracts of Ehretia acuminata richly contain flavonoids, especially luteolin-7-glucoside and rosmarinic acid [28]. FTIR analysis is a fundamental technique for the detection of functional groups. The appearance of a distinctive peak at 3377 cm −1 is attributed to the stretching vibrations of the OH group in polyphenols and the OH group present in sugar rings, whereas the peak observed at 1678 cm −1 might be due to the stretching vibrations of the C=O bond inside aromatic rings of various phenolic compounds, which include flavonoids and polyphenols present in the aqueous extract of Ehretia acuminata. 
Meanwhile, the band at 1471 cm−1 corresponds to the stretching mode or vibrations of the C=C group. The occurrence of absorption peaks at 2873 and 2946 cm−1 is related to the stretching vibration of C-H methyl groups [29]. Similarly, the peaks observed at 1290 and 1386 cm−1 relate to the strong stretching (C-O) of ester groups and medium bending vibrations of the C-H group, respectively [29]. Furthermore, the absorption band that appeared at 1253 cm−1 is related to the C-O vibration of hydroxy flavonoids [30], while the band at 1162 cm−1 corresponds to the strong stretching of the C-O group of tertiary alcohols and the band seen at 1058 cm−1 is related to the stretching (C-O) of primary alcohols. In contrast, the absorption band at 815 cm−1 is related to the C-H bending vibration. The major FTIR peaks and associated functional groups are shown in Table 3 [30]. Surface Characterisations SEM analysis was performed at each step. Figure 8A shows the SEM analysis of simple cotton fibres without any treatment or coating. Figure 8B shows the microstructures after treating with citric acid. The treatment was performed to enhance the carbonyl groups on the surface of the cotton cellulosic structures. The treated fibre surface showed minor roughness as compared to the simple untreated cotton fibres.
Figure 8. SEM analysis of (A) untreated cotton, (B) cotton treated with citric acid, (C1) phytochemical-coated, (D1) copper particle-coated, and (E1) silver particle-coated cotton fibres, with EDX spectra of (C2) phytochemical-coated, (D2) copper particle-coated, and (E2) silver particle-coated cotton fibres. To check the successful deposition of the phytochemicals, copper (Cu), and silver (Ag) nanoparticles on the surface of the fabric, SEM analysis was performed. The SEM images in Figure 8D1,E1 show the deposition of Cu-NPs and Ag-NPs in the nanometric range. The nanoparticle deposition was found to be dense and uniform, whereas the size of the extracted phytochemical particles showed that the prepared particles lie in the micro- to nano-range. The morphological aspects of the particles showed rough surfaces with irregular clusters and quasi-spherical shapes, along with small agglomerations. Moreover, the microparticles can be seen to be evenly distributed on the surface of the fabric. It is emphasised that agglomeration was not seen on the larger portion of the prepared materials, as evident from Figure 8C1. Table 4 shows the elemental composition of the fabrics coated with phytochemicals, silver, and copper, respectively. From the table, it can be observed that the copper and silver contents increased with the increase in concentration. To gather more information regarding the composition and the elements present within the phytochemicals, EDX spectral analysis was conducted. The oxygen (O) and carbon (C) peaks are attributed to the presence of phytochemicals on the cotton fibre surface. Among these elements, oxygen and carbon are present in high concentrations, while calcium, magnesium, and silicon are found in the moderate range. Trace elements such as Ca, K, Cl, Si, and Mg are determined in trace quantities through their percentage (%) abundance within the sample.
Particle Size Distribution Based on the Brownian movement of particles, the DLS technique was used for particle size measurement. Figure 9a,b reveal the particle size distributions of the copper and silver particles. The particles were found to exhibit a multi-modal distribution, with sizes ranging from the micrometre down to the nanometre scale. The average particle sizes found for silver and copper were 600 and 500 nm, respectively. The particle size of the extracted phytochemicals was found to range from the nano- to the micro-scale. The average particle size and zeta potential of the extracted phytochemicals were about 312 nm and −41.6 mV, as shown in Figure 9. The particles were found to be in the nano-micro regime with a strongly negative potential, which indicates the even distribution and stability of the nanoparticles in the suspension.
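DLS instruments obtain these sizes from the Brownian motion of the particles via the Stokes-Einstein relation; the sketch below illustrates that conversion for an assumed diffusion coefficient (the value shown is a hypothetical placeholder, not a measured one).

```python
# Stokes-Einstein relation used by DLS to convert a translational diffusion
# coefficient into a hydrodynamic diameter.
K_B = 1.380649e-23   # Boltzmann constant, J/K
T = 298.15           # temperature, K (25 degrees C)
ETA = 0.89e-3        # viscosity of water at 25 degrees C, Pa.s
PI = 3.141592653589793

def hydrodynamic_diameter_nm(diffusion_coeff_m2_per_s):
    """Return the hydrodynamic diameter in nanometres."""
    d_m = K_B * T / (3.0 * PI * ETA * diffusion_coeff_m2_per_s)
    return d_m * 1e9

# A hypothetical diffusion coefficient of 8e-13 m^2/s corresponds to a
# sub-micron particle (~610 nm), i.e. the size range reported above.
print(hydrodynamic_diameter_nm(8.0e-13))
```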
Antibacterial Activity of the Coated Fabrics The antibacterial activity of all coated fabrics was tested using both qualitative and quantitative measurements. Zone of Inhibition Test (Qualitative Measurements) The zone of inhibition test is considered a qualitative assessment. The test was conducted against both bacterial types, Gram-positive (S. aureus) and Gram-negative (E. coli). Clear inhibition zones around all types of fabric samples incubated for 24 h at 37 °C in the dark are represented in Figure 10. It is obvious from the figures that the fabrics coated with phytochemical particles show a smaller zone of inhibition, whereas the copper (green synthesis)-coated fabrics revealed the most significant antibacterial zones against the S. aureus and E. coli bacterial strains. Three replicate tests were conducted for each sample and the average value was then calculated for each agent, as shown in Figures 11-13. The results confirmed that the deposited copper and silver particles induce strong sterilisation towards S. aureus and E. coli owing to the free-standing nature of the particles. However, the highest sensitivity was shown by S. aureus in comparison to E. coli. The zone of inhibition for S. aureus increased from 4 to 5 mm, whereas it increased from 3 to 4.5 mm for E. coli with an increase in the concentration of copper particles. It is worth mentioning here that the annulus of the zone of inhibition increases with an increase in the loaded particle concentration, which indicates that a higher content of nanoparticles enhances the antibacterial activity, similar to the previous study [8]. The antibacterial activity of the coated fabrics can be credited to the combined physical and chemical interaction of the bacteria with the particles. Nanoparticles can enter the cell through endocytotic mechanisms. The cellular uptake of ions then increases as ionic species are subsequently released inside the cells through nanoparticle dissolution [31]. This results in a higher intracellular concentration and, in turn, substantial oxidative stress. Figure 12. The average value of the zone of inhibition against silver particle-coated cotton fabric. Figure 13. The average value of the zone of inhibition against copper particle-coated cotton fabric.
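Since the zone of inhibition is recorded as the total diameter of the sample plus the cleared region, the width of the inhibition annulus itself can be recovered as in the short sketch below; the diameters used are hypothetical placeholders rather than the values plotted in Figures 11-13.

```python
def annulus_width_mm(total_diameter_mm, sample_width_mm=6.0):
    """Approximate clear-zone width: half the difference between the measured
    total diameter and the width of the square fabric sample."""
    return max((total_diameter_mm - sample_width_mm) / 2.0, 0.0)

replicates = [10.5, 11.0, 10.0]                    # hypothetical triplicate total diameters
widths = [annulus_width_mm(d) for d in replicates]
print(sum(widths) / len(widths))                   # mean annulus width over the triplicate
```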
Reduction Factor (Quantitative Test) For quantitative measurements, the AATCC-100 method was adopted, and the samples were tested against both the S. aureus and E. coli bacterial strains. The antibacterial action of all samples, expressed as the reduction in log CFU/mL and the percentage reduction calculated from the log CFU/mL values, is shown in Figure 14. The figure on the left side depicts the reduction in the inoculated bacterial concentration (log CFU/mL). It was observed that the log values were significantly reduced for all treated samples as compared to the log value of the untreated fabric, and the reduction in the log value increased with the increase in the concentrations of the nanoparticles (copper and silver) on the fabric. In the case of the samples loaded with the highest concentrations of copper and silver nanoparticles, the log value decreased from 5.43 to 0, suggesting a 99.99% reduction in the inoculated bacterial colonies (figure on the right side, Figure 14) for both S. aureus and E. coli. The log CFU/mL value for the untreated cotton fabric increased from 5.43 to 5.44, revealing the ineffectiveness of the control sample against both tested microbes. The reduction percentages of the untreated and treated cotton fibres are shown in Table 5. Figure 14. Antibacterial activity in terms of log CFU/mL (left) and percentage reduction (right) of fabrics treated with phytochemicals, silver, and copper nanoparticles and untreated cotton fabric. Figure 15 shows selected pictures of the bacterial growth concentration for the untreated cotton fabric and the treated (PH3, C3, and S3) samples, which support the above trend. The untreated sample was shown to be ineffective against bacterial growth, whereas the textiles coated with copper particles were found to be highly effective. With a higher concentration of copper particles, there was a significant improvement in colony reduction, with an effectiveness of greater than 99% for both types of bacteria.
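To make the quantitative metric explicit, the following Python sketch computes the percentage reduction and the log reduction from surviving colony counts in the AATCC 100 style. The CFU/mL values are hypothetical placeholders, not the measured data behind Figure 14.

```python
import math

def reduction_metrics(cfu_control, cfu_treated):
    """Return (percentage reduction, log10 reduction) for one sample."""
    pct = 100.0 * (cfu_control - cfu_treated) / cfu_control
    # Guard against log10(0) when no survivors are recovered from the treated sample.
    log_red = math.log10(cfu_control) - math.log10(max(cfu_treated, 1))
    return pct, log_red

control = 2.7e5                               # ~log CFU/mL of 5.43, as for untreated cotton
treated = {"PH3": 2.7e4, "S3": 270, "C3": 27} # hypothetical survivor counts
for name, cfu in treated.items():
    pct, logr = reduction_metrics(control, cfu)
    print(f"{name}: {pct:.2f}% reduction, {logr:.2f} log reduction")
```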
Antifungal Activity of Treated Samples The antifungal activity was evaluated according to the standard test method of qualitative measurement. The fungus C. albicans was selected for this purpose. The activity was checked for all phytochemical-, silver particle-, and copper particle-coated cotton fibres. The percentage reduction of the fungus (C. albicans) with raw cotton and with the samples loaded with copper, silver, and phytochemicals is presented in Table 6. The raw cotton fabrics without any antifungal agent can provide a suitable environment for microorganisms to grow. The copper particle-coated cotton fibres showed the maximum antifungal activity (the copper particle-coated fibres also showed the best antibacterial properties). The percentage reduction for the silver nanoparticles was also higher as compared to the phytochemicals. Antiviral Effectiveness The graph in Figure 16 shows the virus infectivity titre (log) against contact time (0 h and 60 min). Behrens and Karber's method was used to calculate the reduction in virus titre from the starting infectivity titre (10⁷). Figure 16a shows the change in the infectivity titre of the coronavirus (0 h and 60 min) at 25 °C for the untreated cotton fabric and the fabrics treated with phytochemicals, copper, and silver nanoparticles. It was observed that there was a significant decrease in the infectivity titre for the fabrics coated with copper and silver particles after 60 min as compared to 0 h, whereas no reduction in the virus activity titre was observed for the phytochemical-treated fabric and the untreated cotton fabric. Similarly, Figure 16b depicts the percentage virus reduction for the untreated and treated fabrics. The fabrics treated with copper and silver showed 84% and 55% reductions in virus titre, respectively, whereas the phytochemical-treated fabric and the untreated fabric remained ineffective against the virus. The antiviral action shown by the fabrics treated with copper and silver nanoparticles could be due to the binding of the metallic NPs to glycoproteins at the viral surface, acting as an inhibitory mechanism for viruses.
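For illustration, a Spearman-Karber style 50% endpoint calculation, the kind of estimate behind the Behrens and Karber titres quoted above, is sketched below. The well scores are hypothetical placeholders, not the measurements underlying Figure 16.

```python
def log10_endpoint_dilution(p_positive, first_dilution_log10=-1.0, step_log10=1.0):
    """Spearman-Karber 50% endpoint (as a log10 dilution) from the fraction of
    infected wells at each tenfold serial dilution, ordered from the most
    concentrated (10^-1) to the most dilute. Assumes all wells are infected at
    dilutions below the first one tested."""
    s = sum(p_positive)
    return first_dilution_log10 - step_log10 * (s - 0.5)

# Hypothetical well scores: the untreated eluate infects down to ~10^-7,
# the eluate recovered from a coated fabric only down to ~10^-6.
before = [1, 1, 1, 1, 1, 1, 0.5, 0]
after = [1, 1, 1, 1, 1, 0.5, 0, 0]
titre_before = -log10_endpoint_dilution(before)   # log10 TCID50 per inoculated volume
titre_after = -log10_endpoint_dilution(after)
log_drop = titre_before - titre_after
print(log_drop, (1 - 10 ** (-log_drop)) * 100)    # 1.0 log10 drop, i.e. a 90% reduction
```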
Air Permeability Air permeability is the exchange of air when heat and perspiration are generated from a wound [32]. The results of the fabric air permeability tests are shown in Table 7. Air permeability results were obtained for all types of fabrics (treated and untreated). From the results, it is clear that the application of very fine nanoparticles to the fabrics had very little effect on air permeability. The air permeability of the untreated fabric was about 142 mm/s, while the air permeability of all the other nanoparticle-coated fabrics was in the range of 129 to 136 mm/s, showing that there was only a minor decrease in air permeability even after depositing the nanoparticles. There are two factors responsible for this phenomenon. Firstly, there may be relaxation shrinkage in the fabric structure due to dipping in the nanoparticle solution, causing the yarns to come closer together and hinder the flow of air. Secondly, the nanoparticles deposited on the yarn structure and in the interstices reduce the fabric air gap spacing (pore size). The stiffness of a fabric describes its ability to resist deformation and keep standing without support. This property is important regarding comfort and desirable draping. Stiffness can be calculated from the bending length and flexural rigidity. The stiffness of the coated and uncoated fabric samples was measured, and the average values are presented in Table 7. Fabric stiffness was found to increase with the increase in the concentration of particles. The reason is that the coating increased the inter-fibre friction and abrasion at fibre crossover points [33]. However, the overall effect of the increase in rigidity was insignificant.
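As an illustration of how bending-length readings relate to stiffness, the sketch below applies the commonly used cantilever relation G = m·c³ (fabric mass per unit area times the cube of the bending length). The areal weights and bending lengths are hypothetical placeholders, not the values of Table 7.

```python
def flexural_rigidity_mg_cm(areal_weight_g_m2, bending_length_cm):
    """Flexural rigidity G = m * c^3, with mass per unit area m in mg/cm^2."""
    m_mg_cm2 = areal_weight_g_m2 * 0.1      # 1 g/m^2 = 0.1 mg/cm^2
    return m_mg_cm2 * bending_length_cm ** 3

untreated = flexural_rigidity_mg_cm(140, 2.1)   # hypothetical untreated fabric
coated = flexural_rigidity_mg_cm(145, 2.3)      # hypothetical particle-coated fabric
print(f"untreated: {untreated:.0f} mg.cm, coated: {coated:.0f} mg.cm")
```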
Durability against Washing The durability of conductive fabrics under washing and rubbing has been a critical challenge. To investigate these properties, samples PH3, S3, and A3 were selected, because they provided satisfactory results regarding antibacterial properties. A standard washing of these three samples was performed. The antibacterial activity was checked again for the washed samples and is reported in Figure 17. There was an insignificant decrease in antibacterial properties. The reason is that the coating of antibacterial agents is condensed and compact. The coating and particles were retained on the fibrous surface even after severe washing (20 washing cycles). Conclusions Copper- and silver-coated anti-pathogenic textiles have already been extensively studied due to their potential applications in hospitals. However, to overcome the hygienic issues (mostly occurring due to synthetic antibacterial agents), the present study focused on a sustainable approach comprising the green synthesis of silver nanoparticles, the green synthesis of copper nanoparticles, and the phytochemical and biological screening of the bark, leaves, and fruits of Ehretia acuminata (belonging to the family Boraginaceae). The surface morphology, size, and presence of the antimicrobial agents (phytochemicals and particles) were analysed by scanning electron microscopy, dynamic light scattering, and energy-dispersive X-ray spectroscopy. The functional groups and the presence of the particles (copper and silver) were confirmed by FTIR and XRD analysis. The antibacterial activity of the coated fabrics was tested using qualitative and quantitative measurements. All the particles showed excellent reduction percentages, but in the case of the qualitative measurements the phytochemicals showed no activity. The overall strongest antibacterial effect was found for the fabrics coated with green-synthesised copper particles (a zone of inhibition of about 5 mm and a quantitative reduction percentage of 99.99%). The coated cotton fibres were further investigated by antiviral and antifungal analysis. There was a significant decrease in the infectivity titre for the fabrics coated with copper and silver particles after 60 min as compared to 0 h, whereas no reduction in the virus activity titre was observed for the phytochemical-treated fabric or the untreated cotton fabric. The fabrics treated with copper and silver showed 84% and 55% reductions in virus titre, respectively. Furthermore, the copper particle-coated cotton fibres showed the maximum antifungal activity (the copper particle-coated fibres also showed the best antibacterial properties), and the percentage reduction for the silver nanoparticles was also higher than for the phytochemicals. Moreover, the developed fabrics were analysed for comfort properties regarding air permeability and stiffness.
The particles were so fine that they did not block the fabric pores, so air permeability through the structure was preserved, while the increase in stiffness remained minor. The discharge of toxic metals from treated textiles through wastewater has a negative environmental impact due to their biocidal effect. If the concentration of metals in wastewater is higher than their MIC, they will kill the microbes necessary for sewage biodegradation; at concentrations lower than their MIC, these metals will induce resistance in those microbes, which is also highly undesirable. The fabrics developed in this study were highly durable to washing (20 industrial washing cycles) and did not release toxic metals into the environment during washing. The present study therefore also focused on the development of durable antibacterial fabrics in which the particles (copper, silver, phytochemicals) are firmly attached to the fabric surface (less leaching into the environment), giving a zone of inhibition in the range of 3-5.5 mm. The antibacterial activity of the treated fabrics after washing was studied to establish the washing durability of the deposition; the finding that the particles adhere strongly to the fibres and interspaces was further supported by the fact that the particles remained attached to the fabric surface during washing. Furthermore, comparing copper and silver, the antibacterial effect of copper was higher than that of silver due to the transitions in its ionic states. The proposed approach is facile and cost-effective and offers odourless workwear. The developed antibacterial fabrics could be effectively applied in the field of hospital textiles for the fabrication of antibacterial surgical gowns, panel covers, bed sheets, coveralls, curtains, table and chair covers, etc.
2022-10-19T16:06:54.426Z
2022-10-01T00:00:00.000
{ "year": 2022, "sha1": "bb24912cc0605c7df88a7a15e0007ae79b6ac146", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2079-4991/12/20/3629/pdf?version=1665912355", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "65e5a976b01bcd472219fbd278b67b446c257863", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [ "Medicine" ] }
88512919
pes2o/s2orc
v3-fos-license
Techniques for clustering interaction data as a collection of graphs A natural approach to analyzing interaction data of the form "what-connects-to-what-when" is to create a time series (or rather a sequence) of graphs through temporal discretization (bandwidth selection) and spatial discretization (vertex contraction). Such discretization, together with non-negative factorization techniques, can be useful for obtaining a clustering of graphs. Motivating applications of clustering graphs (as opposed to vertex clustering) can be found in neuroscience and in social network analysis, and it can also be used to enhance community detection (i.e., vertex clustering) by way of conditioning on the cluster labels. In this paper, we formulate the problem of clustering graphs as a model selection problem. Our approach involves information criteria, non-negative matrix factorization and singular value thresholding, and we illustrate our techniques using real and simulated data. Introduction Non-negative factorization of a matrix whose columns are sub-probability vectors We consider an n × p non-negative matrix X such that the rank of X is r and X = W H, (1) where W, H ≥ 0, 1^T W = 1^T ∈ R^r, and 1^T H ≤ 1^T ∈ R^p. Note also that 1^T X ≤ 1^T. We call the integer r the inner dimension of the decomposition W H. In general, a non-negative factorizable matrix can be transformed to the form stated in (1) by way of the so-called "pull-back" map (c.f. [1]). Given such an X, generally speaking, finding the pair (W, H) is known to be an NP-hard problem (c.f. [2] and [3]), and even validating the uniqueness of the factorization remains a challenging task (c.f. [4] and [5]). As such, one is often interested in an algorithm for obtaining such a factorization approximately by numerically solving the optimization problem min_{W ≥ 0, H ≥ 0} D(X, W H), (2) where D is some measure of discrepancy between X and the product W H. For example, one can iterate until ||X − W H||_F is sufficiently small. For data encountered in practice, the error made by the estimates Ŵ and Ĥ cannot be known because the true W and H are unknown. For a more detailed discussion of non-negative factorization beyond the brief summary in this section, we refer the reader to [1]. We now introduce a rank estimation criterion expressed in terms of the deviations ε(W) and ε(H) of W and H from their respective fixed points, introduced in (3) and (4), together with the reconstruction error ε(W, H) := ||X − W H||_F. We denote by r̂ the inner dimension used to obtain an approximate non-negative factorization (Ŵ, Ĥ). In general, the bigger the value of r̂ is, the smaller ε(Ŵ, Ĥ) is. On the other hand, it can be argued that for r̂ > r, the r̂-th singular value approximates zero, and as such, ε(Ŵ) and ε(Ĥ) are expected to be large if not infinite. This motivates the formulation behind our FIC formula. In Lemmas 2.1 and 2.2, we motivate the functions in (3) and (4). Our analysis to come will depend on the following assumption (Condition 1), which reduces our problem to estimating the inner dimension of the non-negative matrix. Under Condition 1, if the matrix is observed without noise, then one can see that for r̂ > r, ε(Ŵ) = ε(Ĥ) = ∞ and ε(Ŵ, Ĥ) = 0. Hence, for such simple cases, while the exact analysis of r̂ ≤ r with respect to its FIC value remains elusive, our FIC will always suggest r̂ ≤ r. Condition 1. A non-negative matrix X is a rank r matrix, and there exists a unique non-negative factorization W H with inner dimension r. Related works The choice of the inner dimension is often left to the practitioner's discretion, with few guiding principles for an informed choice.
Previous related works of the model selection aspect of using NMF include [6], [7] and [8]. In [6], so-called core consistency metric is introduced for a PARAFAC tensor model. In [7] and [8], the core consistency is used to estimate the model dimension. It has been shown in [9] that there is a close relationship between non-negative matrix factorization and K-means clustering. Since the rank of a matrix is the number of non-zero singular values, a reasonable and popular approach, especially for noisy data, is to find the "effective" number of non-zero singular values. For examlpe, in [10], an automatic "elbow" finding algorithm has been proposed, and their theory is developed based on the theory of multivariate normal random vectors. Fixed Point Error Criterion Note that whenever the inverses of HH and W W exist, we have that where X = U ΣV is a singular value decomposition. Note that the indexes of the extreme points of the rows of U match the indexes of the extreme points of the rows of W , and similarly, the indexes of the extreme points of the rows of V match the indexes of the extreme points of the columns of H. This interplay between singular value decomposition and non-negative factorization has been exploited, for example, in [11] and [4] as we do in this paper as well. Lemma 2.1. Under Condition 1, Proof. Since the rank of W must be r, U W and W U are full rank square matrices, whence their inverse exist. Next, we observe that and by rearranging non-singular matrices, it follows that HH = (U W ) −1 Σ 2 (W U ) −1 . Hence, we have (HH ) −1 = (U W ) Σ −2 U W . Next, we note that W HH = U ΣV H , and therefore, By a similar argument, we can see that Lemma 2.2. Under Condition 1, any pair (W, H) of n × r and r × p nonnegative matrices that satisfies (8) and (9) replacing the pair (W , H) is such that their product yields X. Proof. Consider the problem of finding a pair (W, H) such that where Z := W H. First, we write Z = U Σ (V ) . Then, note the following equations: where (13) is obtained from (12) by left and right multiplying X and Z to (12). Then, simplifying (13) with singular value decomposition, we have Therefore, we have I = ΣV Z U Σ −2 , or equivalently, Σ = V Z U . Hence, On other hand, since both X and Z have rank r, it follows, from Z = U U Z + (I − U U )Z, that the rank of (I − U U )Z must be zero, or equivalently, Z = U U Z = X as desired. Combining Lemma 2.1 and 2.2, we have an if-and-only-if statement, and we list Theorem 2.1 for future reference: Our next result in Theorem 2.2 gives an upper bound on the fixed point error ε(W ) when W may be different from W , which will be used in the proof of Theorem 3.1. Note that we do not explicitly state the error bound associated with H, but by symmetry, i.e., by transposing the matrix X, one can obtain a similar bound. Theorem 2.2. There exist γ 1 , γ 2 , γ 3 and γ 4 , depending only on X, such that for any rank r matrix X = U ΣV = W H with inner dimension r decomposition, Proof. Let M := X(X) (U Σ −2 U ), and note that using the singular value decomposition, where the fact that W , H, W and H are of rank r is used for the identity. Now, Then, we let E = U − M U , and note that Also, we have Hence, using the triangular inequality, Application to Random Dot Product Graphs In this section, we consider our fixed point error ε(W ) with respect to a case where instead of observing X exactly, a noisy version of X is observed. To this end, we consider a random dot product graph model, which was originally introduced in [12]. 
Definition 3.1. Let E be a subset of R r such that, for all ω, ω ∈ E, 0 ≤ ω, ω ≤ 1. For any given n ≥ 1, let Y be a n × d matrix whose rows are elements of E. The adjacency matrix A of a random dot product graph (RDPG) with latent positions Y is a random n × n symmetric non-negative matrix such that its entries take values in {0, 1} and Given an n × d matrix of latent positions Y , the random dot product model generates a symmetric (adjacency) matrix A whose edges {A ij } i<j are independent Bernoulli random variables with parameters {P ij } i<j where P = Y Y . Random dot product graphs are a specific example of latent position graphs [13]. Our analysis in this section is an asymptotic one in which the number n of rows is taken to infinity. As such, to indicate this explicitly, we may write, for example, ε n (W ) instead of ε(W ). But, for the most parts, to simplify our notation, we suppress our notation's dependence on n. Let A be an n × n random matrix such that P = E[A|Y ] = Y Y , where the rows {Y i } n i=1 of Y form a sequence of independent and identically distributed random probability vectors, i.e., Y 1 = 1, and conditioning on Y , each A ij is an independent Bernoulli random variable. Let Y be the adjacency spectral embedding of A at rank r. That is, given that A = U Σ V is the singular value decomposition with its singular values in the decreasing order, Y = U + Σ 1/2 + , where U + is the first r columns of U and Σ + is the upper r × r diagonal submatrix of Σ. The matrix Y is known as the adjacency spectral embedding, which is shown to have good properties. For more detailed analysis of Y , we refer the interested reader to [14] and to the references therein. The following condition is a key assumption in [14] to which we make references for several inequalities used in the proof of Theorem 3.1. Our discussion on estimation of the rank for non-negative factorization connects to the random dot product model by a simple observation that even if n × r matrix Y is not non-negative, if P = Y Y is a non-negative matrix with rank r, then there exists an n × r non-negative matrix W such that P = W W (c.f. [4]). Our next condition is a stronger version of this observation. Condition 4. P = U ΣU is the eigenvalue decomposition, where Σ is a nonsingular r × r diagonal matrix whose diagonal values are in decreasing order. There exists a constant ξ 0 < ∞ such that almost surely, for all n, τ (P ) := Σ 11 /Σ rr ≤ ξ 0 . Our next result characterizes the asymptotic error rate for our fixed point formula: Theorem 3.1. Let {a n } be a sequence such that lim n→∞ log(n)/a n = 0. Under Condition 1, 2, 3 and 4, almost surely, lim n→∞ ε n ( W )/a n = 0, where ε n (·) denotes the fixed point error for X := P/n = Y Y /n with W : Proof. Note that 1 Ae j ≤ n for all j = 1, . . . , n. First, by Proposition 4.5 and Theorem 4.6 in [14], for each ε > 0, for all sufficiently large values of n, with probability 1 − ε, U − U + F ≤ 4δ −2 r 2r log(n/ε)/n, where C is the constant in Remark 3.7 of [14] which depends only on the (common) distribution of Y i and δ r denotes the smallest eigenvalue of the second moment matrix ∆ = E[Y i Y i ]. Now, since X = P/n and P = U ΣU , X = U (Σ/n)U is an eigenvalue decomposition of X. Now, we write U = U and Σ = Σ/n. Note that using/adapting the equations (16) and (17) from the proof of Theorem 2.2, where we implicitly used the fact that Σ = Σ/n to , for example, simplify Σ + F Σ −1 F . 
Appealing to Proposition 4.5 of [14] once again, we also have a corresponding bound, where ||·|| denotes the spectral norm, i.e., the largest singular value of the matrix. Therefore, we obtain the inequality from which our claim follows. Experiments For the Near Separable Cases experiments, we use FastConicalHull and FastSepNMF, which are implementations of the so-called "Xray" algorithm in [11] and the linear programming algorithm in [15], respectively; their Matlab source codes can be found on the web (http://bit.ly/1eDKNem). For the Not Separable Cases and Sensor Network Data experiments, we use the pe-nmf option in the nmf function, which numerically solves a penalised factorization problem in which 1^T W = 1^T and 1^T H = 1^T, and α, β ≥ 0 are chosen appropriately. More specifically, the particular numerical algorithm solving the aforementioned problem for our experiment is known as pattern-expression non-negative factorization (c.f. [16]). Unless said otherwise, we set α = 0 and β = 1. Near Separable Cases A case where the rank and inner dimension coincide but are errorfully observed Using the example in [17], we conduct a Monte Carlo simulation comparing the so-called "Xray" algorithm in [11] and the algorithm in [15]. In particular, for each Monte Carlo simulation, we randomly generate a 50 × 100 matrix M. To do this, we sample the entries of a 50 × 10 non-negative matrix W uniformly from [0, 1] independently, and subsequently we normalize the columns so that each column is a probability vector, i.e., W ← W diag(1^T W)^{-1}. Then, we generate a 10 × 100 matrix H by sampling each column from a Dirichlet distribution whose parameter is sampled uniformly from [0, 1]^10. Then, M = W H. The observed matrix X is then obtained from M by adding scaled normally distributed errors, where N_ij is a normal random variable with mean zero and standard deviation 1, |N| = (|N_ij| : i, j), and ε > 0. If X_ij < 0 after adding noise, then we set the value of X_ij to zero to enforce non-negativity. The results are reported in Table 1 and Table 2, respectively. The results are comparable. We also mention that using the usual L2 minimization results in poor performance; our experiment results in r̂ = 1 for all 100 Monte Carlo experiments for all ε = 0.1, 0.2, 0.3. Swimmer Data Sets The swimmer data set is an often-tested data set for benchmarking NMF algorithms (c.f. [18] and [17]). In our present notation, each column of the 220 × 256 data matrix X is a vectorization of a binary image, and each row corresponds to a particular pixel. Each image is a binary image (20-by-11 pixels) of a body with four limbs, each of which can be in four different positions. It is known that the matrix X is 16-separable while the rank of X is 13. As such, Condition 1 is not met, and application of our FIC criteria using FastConicalHull and FastSepNMF yields the estimated r̂ as 13, while using nnmf yields r̂ = 1. Not Separable & Not Unique Cases In this experiment, using the parameters for the example considered in [4] and [5], we generate Monte Carlo samples of the matrix X whose expected value is parametrized by κ ∈ [0, 1] and λ ∈ (0, ∞). To this end, we let X be a matrix of independent Poisson random variables whose mean matrix is specified, following [4] and [5], in terms of Λ = λI and a pair of non-negative matrices (W, H) that depend on κ. It can be shown that W H is uniquely factorizable if and only if κ ∈ [0, 0.5) (c.f. [4]). In particular, by changing κ from 0 to 1, we can gradually transition from a uniquely factorizable model to one that can be decomposed into more than two distinct but equally viable solutions. We focus on correctly estimating r = 3 from X diag(1^T X)^{-1}.
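Before turning to the results, the computations behind these experiments can be sketched in a few lines of Python. The sketch is illustrative only: it generates the near-separable data described above (the displayed noise equation is not reproduced here, so X = M + ε|N| with clipping at zero is an assumption), uses scikit-learn's plain NMF as a stand-in for the Xray and pe-nmf solvers, and reports the reconstruction error together with fixed-point residuals written from the matrix M = X X^T U Σ^{-2} U^T that appears in the proof of Theorem 2.2; the exact FIC combination of these terms is not claimed.

```python
import numpy as np
from sklearn.decomposition import NMF

def generate_near_separable(n=50, p=100, r=10, eps=0.1, seed=0):
    """W: uniform entries, columns normalised to probability vectors; H: columns
    drawn from a Dirichlet law with parameter from [0, 1]^r; X = W H plus
    non-negative noise (assumed form), clipped at zero."""
    rng = np.random.default_rng(seed)
    W = rng.random((n, r))
    W /= W.sum(axis=0)
    alpha = np.maximum(rng.random(r), 1e-6)   # floored away from zero for validity
    H = rng.dirichlet(alpha, size=p).T
    X = W @ H + eps * np.abs(rng.standard_normal((n, p)))
    return np.clip(X, 0.0, None)

def fit_nmf(X, r, seed=0):
    """Plain NMF stand-in for the pe-nmf/Xray solvers; columns of W are
    renormalised to sum to one, matching the sub-probability convention."""
    model = NMF(n_components=r, init="nndsvda", max_iter=2000, random_state=seed)
    W = model.fit_transform(X)
    H = model.components_
    s = W.sum(axis=0)
    s[s == 0] = 1.0
    return W / s, H * s[:, None]

def error_terms(X, W, H, r):
    """Reconstruction error and fixed-point residuals at trial dimension r, based
    on the relation W = X X^T U Sigma^-2 U^T W (and its analogue for H) suggested
    by Lemmas 2.1-2.2; both residuals vanish for an exact rank-r factorization."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    U, s, Vt = U[:, :r], s[:r], Vt[:r, :]
    M = X @ X.T @ U @ np.diag(1.0 / s**2) @ U.T
    eps_W = np.linalg.norm(W - M @ W)
    eps_H = np.linalg.norm(H - H @ Vt.T @ np.diag(1.0 / s**2) @ Vt @ X.T @ X)
    return eps_W, eps_H, np.linalg.norm(X - W @ H)

X = generate_near_separable(eps=0.1)
for r in range(1, 16):
    W_hat, H_hat = fit_nmf(X, r)
    print(r, error_terms(X, W_hat, H_hat, r))
```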
In Tables 3 and 4 we consider the problem of estimating r while fixing κ = 0.1 and κ = 0.5, respectively. We perform our experiments while varying the event intensity λ from 10 to 10000. For a given λ, i.e., each row, each entry counts the number of times out of 100 Monte Carlo experiments that resulted in choosing r̂ as the inner dimension based on the baseline method and our FIC criteria. For each γ, we conduct 100 Monte Carlo simulation experiments on the performance of choosing the r̂ achieving the minimum FIC value. For comparison, we use as the baseline method the procedure "dimSelect ∘ svd" for selecting the underlying dimension (or rather estimating the rank of the matrix), where the baseline method first computes the singular values of X and then uses the "elbow"-finding method in [10] to estimate the rank. Table 3: The baseline procedure "dimSelect ∘ svd" is used for choosing r̂ for each of 100 Monte Carlo simulation experiments. The true rank r is 3. For κ = 0.1, the baseline procedure performs well for all values of λ, but for κ = 0.5, the baseline procedure performs poorly. The result is reported in Table 3, and we note that the performance of our method is more robust than the baseline method with respect to non-uniqueness of the underlying factorization. Sensor Network Data We next consider data collected over six hours and thirty minutes from a group of 19 actors working in an office. Each actor is equipped with one or more Bluetooth devices that detect other Bluetooth devices when in proximity. The overall idea behind collecting such data is similar to the one in [19]. There are 22 sensors in total, and they are associated with office staff (doctors and nurses) as well as stationary objects such as a desk and a room. Three individuals were associated with two sensors and the rest were given a single sensor. Figure 1: A tale of two motifs. The first motif occurs during the first few hours of the day and the second motif dominates during the rest of the day. Bob, Ali and Chu wear two badges, the first one on the front and the second on the back. Each of the other workers wears one badge, and some stationary objects (desks and break-rooms) are also equipped with a badge. The darker a square is, the stronger the association between entities, meaning that, given that a new interaction occurs, the probability of the interaction being between i and j is higher if the square is darker. The data matrix whose rank is estimated is an n × p non-negative matrix X = X̃ diag(1^T X̃)^{-1}, where X̃_{ℓ,t} is the number of times that sensor i detected sensor j during the t-th interval, with ℓ = 22 × (i − 1) + j. Specifically, n = 22 × 21 = 462 and p = 13. As displayed in Table 5, the best choice of rank r̂ in terms of FIC was 2. The values of ε(Ŵ, Ĥ) decrease monotonically, while the values of ε(Ŵ) and ε(Ĥ) exhibit convexity in r̂. In Figure 1, the basis elements from fitting the rank r̂ = 2 are illustrated in an "adjacency" matrix-like layout. An inspection of Ĥ for r̂ = 2 shows that the estimated model is nearly separable, where the first pattern in Table 5 dominates during the first few hours of the morning and the second pattern in Table 5 dominates during the rest of the day. Table 5: Selecting r̂ for sensor network data. FIC suggests that r̂ = 2, and this coincides with the suggestion made by using "dimSelect ∘ svd".
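As an illustration of how the sensor counts are arranged before rank estimation, the short sketch below stacks a 22 × 22 × 13 array of pairwise detection counts into a 462 × 13 matrix (off-diagonal pairs only, following n = 22 × 21) and normalises each column to a sub-probability vector. The counts are random placeholders, not the recorded office data.

```python
import numpy as np

rng = np.random.default_rng(0)
counts = rng.poisson(2.0, size=(22, 22, 13))       # placeholder detection counts per interval

# Stack the off-diagonal (i, j) pairs row-wise: 22 * 21 = 462 rows, 13 columns.
rows = [counts[i, j, :] for i in range(22) for j in range(22) if i != j]
X_tilde = np.array(rows, dtype=float)

# Column-normalise so that each time interval becomes a probability vector,
# i.e. X = X_tilde diag(1^T X_tilde)^(-1).
col_sums = X_tilde.sum(axis=0)
col_sums[col_sums == 0] = 1.0
X = X_tilde / col_sums
print(X.shape, X.sum(axis=0))                       # (462, 13), columns summing to 1
```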
2015-01-10T22:06:33.000Z
2014-06-24T00:00:00.000
{ "year": 2014, "sha1": "6562e5d22161b82c8d1fe700bdcb471c7080ef4b", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "6562e5d22161b82c8d1fe700bdcb471c7080ef4b", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Mathematics" ] }
213708441
pes2o/s2orc
v3-fos-license
Negative Trade-offs Between Community Forest Use and Hydrological Benefits in the Forested Catchments of Nepal's Mid-hills Widespread community forestry practices in Nepal's mid-hills catchments involve removal of forest products—including firewood, litter, fodder, and medicinal herbs—by the local communities. Uncertainty is growing about how sustainable the management of these catchments is and whether it can meet traditional needs and maintain ecosystem services, particularly water. As part of a broader study on the hydrological effects of community forestry practices, we measured selected soil properties, including saturated hydraulic conductivity (Ks), bulk density (BD,) and soil organic carbon (SOC) across 4 depths (0–10, 10–20, 20–50 and 50–100 cm) in 3 types of community forest sites—broadleaf, pine-dominated, and mixed—in the Roshi Khola catchment of Kavre district. The same measurements were made at a minimally disturbed religious forest site in the catchment that had higher Ks values than the mixed and broadleaf sites, signifying a lower degree of forest use-related disturbance. Likewise, SOC values for the religious forest were significantly higher (P < 0.05) and BD values significantly lower than the pine-dominated and mixed forest sites, particularly at shallower depths (0–50 cm). Importantly, comparison of the median Ks values (16–98 mm h–1) with rainfall intensities measured at the catchment showed the less intensively used pine-dominated site to be conducive to vertical percolation with possible greater contributions to subsurface storage even during high-intensity rainfall events. These results highlight the critical role of forest use practices in landscape hydrology and have implications for the management of the forested catchments in the broader Himalayan region, particularly in relation to the negative local perceptions of the role of pine plantations on declining water resources. Introduction During the early half of the 20th century, forest to farmland conversion and high local demand for forest products, including timber, firewood, livestock fodder, and compostable litter, caused significant loss of forest cover in Nepal's mid-hills and gave rise to the widely publicized but contested ''theory of Himalayan environmental degradation'' (Gilmour 1988;Ives 2004;Hofer and Messerli 2006). The alleged effects, mainly episodes of large-scale flooding and landslides (Eckholm 1976), prompted local and international initiatives to reforest the area as a remedial measure that concurrently fulfilled traditional forest needs. For example, a Nepal-Australia forestry project supported the planting of 20,000 ha of the central mid-hills during the early 1990s (Collett et al 1996), while the World Bank proposed planting at annual rates of 50,000 and 10,000 ha until 1990 and 2000, respectively (Sattaur 1987). The reforestation programs largely used fast-growing species of pine, such as Pinus roxburghii, due to the species' high adaptability to the nutrient-poor soils of the mid-hills (Gilmour et al 1990). Importantly, forest development activities increased focus on community involvement, as customary forest policies systematically alienated local forest users (Acharya 2005;Springate-Baginski and Blaikie 2007) leading to the inception of Nepal's community forestry policy in the late 1970s (Cribb and AusAID 2006). 
At present, more than half (over 2.2 million ha) of the mid-hills catchments contain naturally grown or planted species of broadleaf and pine, more than two-thirds of which are managed by nearly 7 million local users organized as members of the Community Forest User Groups (hereafter CFUGs) (DFRS 2015). Forestation is commonly associated with improved landscape stability and hydrological conditions through, for instance, improved soil infiltration (Buytaert et While the time taken for results to be apparent varies from years (Van Noordwijk et al 2003) to decades (Bonell et al 2010), the varied nature of forest management practices confounds the processes, including the ensuing hydrological regime (Farley et al 2004;Bonell and Bruijnzeel 2005;Wohl et al 2012;Julich et al 2015;Ochoa-Tocachi et al 2016;Marín et al 2018). In the lesser Himalayas, where communities rely heavily on local forests for food, fuel, and income (Breu et al 2017;Chakraborty et al 2018), forestry activities are known to affect many aspects of forest functioning. These activities commonly involve regular planting and harvesting of forest products by local communities. For instance, the persistent harvesting of forest litter and understory in southern China negatively affects the soil's structural complexity and supply of organic matter (Brown et al 1995), while cattle grazing diminished soil nutrient availability and soil hydraulic conductivity in forests in southern India (Mehta et al 2008). In the mid-hills of central Nepal, soil hydraulic conductivity was negatively affected by sustained forest use (Gilmour et al 1987;, consisting of collection of litter, firewood, fodder, and medicinal herbs that typically constitute CFUG activities in the region. However, the likely hydrological effects of forest use are nonuniform across forested catchments because the intensity or regularity of CFUG activities is determined by varied community needs, as well as forest type and condition. For instance, pine forests are frequented less by CFUG members (oral communication, 2016, Rajendra KC,), because pine needles are not as suitable for composting or livestock fodder as broadleaf (Gautam and Edward 2001;KC et al 2015). Further, the evolving nature of forest ecosystems through successional change, for example broadleaf species integrating into pine plantations (Gilmour et al 1990), as reported from parts of the mid-hills (Gautam et al 2002;DFO Kavre 2014b), obscures the poorly understood forest-water relationships in the region. Clearer understanding of these relationships is critical given growing concerns about increased water shortages during the dry season in the mid-hills (CBS 2017;Poudel and Duex 2017) that are frequently attributed to pine plantations (Bhatta et al 2015;Sharma et al 2016). Additionally, the forested areas of the mid-hills catchments, managed mostly by local CFUGs (DFRS 2015), are vital for the local and regional water supply (Rasul 2016), which is significantly affected by the region's highly seasonal climate (~85% of the annual rainfall occurs during June-September; Merz et al 2003). As part of a larger study to examine the hydrological effects of the community forestry practices in Nepal's midhills, this paper compares selected soil properties from 3 types of unequally used community forest (CF) sites-a broadleaf, a pine-dominated, and a mixed pine and broadleaf forest-with a minimally used religious mixed species forest in the central hill district of Kavre. 
The specific soil properties are texture, bulk density (BD), soil organic carbon (SOC), and saturated hydraulic conductivity (Ks). Further, the paper compares the Ks results with rainfall intensities measured at the research site to infer the possible hydrological pathways. Finally, the broad implications of the present findings for likely effects on dry season flows are discussed. Study area The study area was the northwestern part of Roshi Khola catchment of Kavre district, Nepal (Figure 1). The climate varies from subtropical to warm temperate with an annual mean temperature of 17 ± 0.21°C and rainfall of 1330 ± 84 mm, as shown by the 15-year (2001-2015) records of the Department of Hydrology and Meteorology of Nepal (DHM 2016). The rainfall patterns are highly seasonal, with 60-90% of the annual rainfall occurring during the monsoon period of June to September (Hannah et al 2005; Merz et al 2006). The elevation and aspect influence the microclimate such that the north-facing slopes are moister and cooler than the south-facing slopes (Gautam et al 2003). Typical of Nepal's mid-hills, the soils in the study area are weakly developed and relatively shallow (<100 cm). They are moderately to poorly drained with silt or silt-loam texture and are acidic (pH 4.0-4.3). Forests in the area encompass naturally grown or planted species of broadleaf and pine (mainly Pinus roxburghii), managed primarily by the CFUGs (DFRS 2015). The area under pine forest increased as a result of reforestation programs conducted primarily during the 1980s (Karki and Chalise 1995). The current study sites were in a forested catchment of the Indreswar Thalpu (Nga) Community Forest (hereafter referred to as the experimental CF), located at 27°34′10″N, 85°30′15″E at an elevation of 1710 masl, that encompassed stretches of planted pine, natural broadleaf, and mixed forests. After pervasive forest loss, the sites were revegetated naturally and through plantation during the late 1970s and early 1980s, mostly through the auspices of the Nepal-Australia forestry project. In the early 1990s, the forest management responsibilities were officially handed over to the local CFUG (Indreswar Thalpu, Nga) (DFO Kavre 2014a). Thus, organized CFUG activities in those sites have persisted for nearly 30 years (see Table 1). The current Operational Plan (OP) document of the CFUG (2014/15 to 2024/25) shows that 174 households (total population 790) rely considerably on local forest products. In particular, there is a high annual demand for livestock fodder (>3200 tonnes) and compostable litter (>1600 tonnes), mainly from broadleaf species. However, the annual production levels of the forest are insufficient to meet these demands. This is mainly because of the predominantly agriculture-based lifestyle of the community as well as the significant presence of the pine species in the forest that have low use value for fodder and litter production. Nevertheless, the forest provides a surplus supply of firewood and timber that is occasionally sold in the local markets to earn additional income for the CFUG. This income contributes to funding community development activities, including the construction and maintenance of the local infrastructure, as well as employment (eg wages for the forest watcher). A religious forest (Figure 1) with a similar forestry history and soil type was used as a control site; it currently undergoes minimal community use because there is a much lower need to obtain forest products for religious purposes.
The individual sites are described in the next section based on the current OP documents, local community insights, and our field assessments, including a forest inventory conducted mostly during January to September 2015. An analysis of the soil profile for each of the forested sites showed the soils to be the fine to fine-loamy derivatives of weathered sandstone, schist, or phyllite of the order Inceptisol (Soil Survey Staff 1994) that transitioned to the parent material at a depth of about 50 to 60 cm. Further details about the location and topography of the sites are provided in Table 2. Description of sites Broadleaf forest site: This site was regenerated naturally through community initiatives that primarily involved fencing off the area to restrict traditional forest use. The dominant vegetation consists of Schima wallichii, Castanopsis tribuloides, and Myrsine capitellata along with shrub species, such as Cleyera japonica, Eurya acuminata, Lyonia ovalifolia, as well as Rhododendron arboreum in the higher elevations Figure 2A). This site undergoes high disturbance due to persistent community use, as the abundance of relatively low-lying (mean height~9 m) broadleaf vegetation is collected for firewood, fodder, and litter ( Figure 2E). There is minimal presence of ground cover on the site except in the less accessible, steeper sections that occasionally have a thin distribution of common grass species. Pine-dominated forest site: This site is dominated by P. roxburghii, which was planted mostly during the late 1970s to early 1980s. However, due to the proximity to the broadleaf site, the broadleaf species appear randomly along with occasional patches of Nephrolepis fern as ground cover ( Figure 2B). The low species diversity and dominance of pine species on the site has lower value for local users because the pine needles are less suitable as livestock fodder or compostable litter than broadleaf vegetation. Occasional signs of trampling are seen as some local residents visit the site for leisure and, seasonally, to collect wild berries and mushrooms. This level of forest-use intensity entails low to moderate disturbance on the site. Mixed forest site: This site borders both the broadleaf and the pine-dominated site ( Figure 2C). The broadleaf species, including S. wallichii and C. tribuloides, are mixed with P. roxburghii, although broadleaf species dominate the lower elevations closer to the broadleaf site. While local CFUG members use the site consistently, it also experiences disturbance due to occasional visits for leisure by commuters along the road on its northern boundary ( Figure 2F). The road is unsealed and supports vehicles, mostly during the dry season (October-May). Minor signs of erosion are seen on the site. Religious forest: The dominant vegetation in this site is a mix of planted and naturally propagated Pinus wallichiana, Quercus semecarpifolia, P. roxburghii, and Alnus nepalensis ( Figure 2D). There is some regeneration and shrub species, including Taxus wallichiana, E. acuminata, and R. arboreum, along with the ground cover of common grass species in steeper sections. Based on the current OP document (2014-2024), the age of the vegetation varies from 5 to 30 years. Although this site is part of the historically degraded national forest, the local community has used the forest for religious purpose since the mid-1980s, until the government formally handed over management duties to the Mukteswar Mahadev religious committee in 2014. 
Current forest management activities include tree planting, restricted access, and occasional removal of forest products for religious events. Soil sampling and analysis Multiple field visits and meetings with the local forest users were held to obtain an in-depth understanding of the forested sites. Samples were collected during March to mid-April 2015 from 6 (5 for the religious forest) representative points located approximately 20-30 m apart in each site along an approximate "S" shape (Figure 1B). As applied in other parts of the mid-hills, the sampling strategy ensured that the sampling sites were not clustered and were distributed evenly (Shrestha et al 2007). The sampling equipment (EijkelKamp Agrisearch Equipment, the Netherlands) comprised chromium-plated stainless steel rings (100 cm³) fitted to an Edelman auger. This was used to draw minimally disturbed core samples from 4 depths (0-10, 10-20, 20-50, and 50-100 cm). Importantly, obtaining representative measures of Ks is difficult because it is naturally highly variable (Zimmermann et al 2006) and is affected by the methods of measurement (Paige and Hillel 1993; Reynolds et al 2000; Hangen and Vieten 2018; Zhang et al 2019). As such, our sample size may not be sufficiently large to account for such variations, so the Ks data presented here need careful interpretation. The samples were drawn from the midrange of each depth, except for the deepest layer (50-100 cm), where the depth to the parent material affected the sampling decision. The samples were analyzed at the laboratory facilities of Kathmandu University, located approximately 5 km from the experimental CF. Texture was determined by the soil hydrometer method (Gee and Bauder 1986), BD by the core method (Blake and Hartge 1986), SOC by the dry combustion method (Nelson and Sommers 1982), and pH using a glass calomel electrode probe in a soil water ratio of 1:1 (McLean 1982). Saturated hydraulic conductivity (Ks) was determined using the constant head method based on the Darcy equation, given as Ks = (V × L) / (A × t × (H2 − H1)), where V = volume of water flowing through the soil sample, L = sample length, A = cross-sectional area of the sample, t = time taken, and H2 − H1 = hydraulic head difference. The Ks measurements and apparatus design are based on procedures described by Klute and Dirksen (1986: 694-696). The apparatus comprised a rack to hold 4 core samples that incorporated a constant head maintained by a common water supply. Water was siphoned individually to the soil cores, and the percolated volume was recorded every 10 minutes until 3 constant measurements were obtained. The core method used here is relatively simple, cost-effective, and reliable, particularly in complex landscapes (Ilek and Kucza 2014) such as these. Rainfall intensity The rainfall data used to infer the dominant hillslope hydrological pathways were recorded at a nearby location (about 270 m from the experimental CF; Figure 1A, Weather station 1) during the respective monsoon periods of 2015 and 2016. Rainfall was recorded using a tipping-bucket rain gauge (Onset Computer Corporation, USA) at 30-minute intervals. A rainfall event was categorized as an event that measured a minimum of 5 mm in total and occurred after a dry period of at least 3 hours from the preceding event. For each event, the maximum 30-minute (I30max) and 60-minute (I60max) rainfall intensities (expressed as equivalent hourly rainfall intensities) were determined by computing the maximum rainfall over the corresponding periods (Ghimire et al 2013).
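To make the event definition above concrete, the following minimal Python sketch shows how 30-minute tipping-bucket records could be separated into events (at least 5 mm in total, separated by a dry spell of at least 3 hours) and summarized as I30max and I60max in mm per hour. This is not the authors' code: the use of pandas, the function name, and the assumption of a regular 30-minute series are illustrative choices only.

import pandas as pd

DRY_GAP = pd.Timedelta(hours=3)   # dry spell that separates two events
MIN_DEPTH = 5.0                   # mm; minimum total depth for a valid event

def rainfall_events(rain: pd.Series) -> pd.DataFrame:
    """Summarize events from a regular 30-minute series of rainfall depths (mm).

    Returns one row per event with the total depth and the maximum 30- and
    60-minute intensities expressed as equivalent hourly intensities (mm h-1).
    """
    rain = rain.asfreq("30min", fill_value=0.0)            # enforce a regular 30-min grid
    wet_times = rain.index[rain > 0]
    if len(wet_times) == 0:
        return pd.DataFrame(columns=["start", "end", "depth_mm", "i30max", "i60max"])

    # A new event begins whenever the gap since the previous wet interval is >= 3 h.
    gaps = wet_times.to_series().diff()
    event_id = (gaps >= DRY_GAP).cumsum()

    rows = []
    for _, times in wet_times.to_series().groupby(event_id):
        block = rain.loc[times.iloc[0]:times.iloc[-1]]      # full record spanning the event
        depth = block.sum()
        if depth < MIN_DEPTH:
            continue                                        # too small to count as an event
        i30 = block.max() * 2                               # one 30-min step -> mm h-1
        i60 = block.rolling(2, min_periods=1).sum().max()   # two consecutive steps = 1 h
        rows.append({"start": times.iloc[0], "end": times.iloc[-1],
                     "depth_mm": depth, "i30max": i30, "i60max": i60})
    return pd.DataFrame(rows)

Percentiles of the resulting i30max column (eg the median or the 80th percentile) can then be compared with the median Ks values, as done in the data analysis described next.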
Data analysis A nonparametric Kruskal-Wallis test (Kruskal and Wallis 1952) for nonnormal data was used in R (version 3.4.0) to statistically compare the results of the selected soil properties (BD, SOC, and Ks) of the various forest types. Dunn's multiple comparison test (Dunn 1964) with Bonferroni correction was further used to compare the results across 4 depths. A difference was considered significant when P < 0.05. The Ks values obtained were used to infer the likely hydrological pathways with respect to rainfall intensities based on the daily rainfall data collected as described. In doing so, median surface and subsurface Ks values for each forest type were compared with the selected percentiles of maximum rainfall intensities (eg over 30 minutes, I30max) to estimate the rainfall at the soil surface (Gilmour et al 1987; Bonell et al 2010; Ghimire et al 2013). This is important because the Ks distribution and rainfall intensities strongly influence the hydrological pathways in areas with concentrated rainfall such as these (Zimmermann et al 2006; Germer et al 2010). Results and discussion BD and SOC measurements as influenced by the intensity of forest use The BD measurements, as expected, generally increased with depth for all sites (Figure 3D). In particular, median values for the mixed forest site were significantly higher than those of the religious forest at the 3 upper depths (P < 0.007, 0.01, and 0.003 at 0-10, 10-20, and 20-50 cm, respectively). While site attributes could account for this difference, the higher values for the mixed forest suggest increased foot traffic-related compaction that occurs due to the site's proximity to the road (Figure 2F). Although the contribution of increased foot traffic is difficult to categorize here, BD as a measure of compaction increases with increased frequency and intensity of forest management activities (Osman 2013), including thinning (Tarpey et al 2008) and harvesting (Whitford and Mellican 2011). Thus, the generally higher median values for the broadleaf site compared with the religious forest could reflect the degree of community use. The median values of SOC (Figure 3C) ranged from approximately 2% to 8%. The maximum, 11%, was obtained from the religious forest and the minimum, 1%, for the pine-dominated site. As expected, SOC decreased with depth. The values for the religious forest were significantly higher than those for the pine-dominated site at all corresponding depths, suggesting the reduced contribution of pine needles to SOC compared with that of accumulated litter in the religious forest. The SOC levels for the pine-dominated site are consistent with those in other parts of the mid-hills (Shrestha and Singh 2008; Aryal et al 2013), including China (Yang et al 2010) and India (Sharma et al 2011). In Nepal's mid-hills, inherent factors affecting the SOC levels include forest type, climate, and topography (Bajracharya and Sherchan 2009). These are further affected by community forestry practices, including removal of biomass. The higher SOC levels for the religious forest compared with other sites indicate reduced removal of litter, fodder, or firewood, allowing higher biomass accumulation and decomposition. Similar effects, such as increased SOC levels and associated nutrient availability due to prolonged length of litter retention, have been reported in other parts of the mid-hills (Schmidt et al 1993) and globally, including parts of South and North America.
In these cases, the persistent removal of aboveground organic matter reduced soil carbon (Hofstede et al 2002; Powers et al 2005). Conversely, the retention of harvesting residue conserved organic matter and improved site quality and productivity in south Australia (Hopmans and Elms 2009). Ks measurements and inferred hillslope hydrological pathways The median Ks values generally remained higher for the pine-dominated and religious forest (in the shallower depths), likely indicating the lower degree of anthropogenic disturbance related to forest use in these sites (Ziegler et al 2004; Zimmermann et al 2006). The median values ranged from approximately 16-98 mm h-1, with the maximum for the pine-dominated and the minimum for the mixed forest sites (Figure 3B). While the values were generally lower for the more intensively used sites, that is, the broadleaf and mixed forests, the consistently higher median values for the pine-dominated site were significant at 3 depths (P < 0.001, 0.004, and 0.003 at 0-10, 10-20, and 50-100 cm, respectively) compared with the mixed forest site. Similar results showing an inverse relationship between Ks and disturbance have been reported in other tropical landscapes (Zwartendijk et al 2017) and in parts of the mid-hills, using in situ methods comprising a constant head well permeameter combined with ring infiltrometers (Gilmour et al 1987) and disc permeameter. The mid-hills studies showed that forestation improved soil infiltration, particularly in the less-disturbed natural forests. This is believed to improve hydrological outcomes in tropical landscapes (Ilstedt et al 2007) through reduced compaction and increased macroporosity due, for instance, to increased SOC levels (Lal 1988; Neary et al 2009). Notably, however, the mixed and broadleaf forest sites of the present study had lower Ks values, despite the higher SOC levels, compared with the pine-dominated site (Table 3). This underlines the critical role of anthropogenic disturbance on soil hydraulic conductivity, as has been found in other parts of the lesser Himalayas (Bonell et al 2010). As Figure 3B shows, the median Ks values did not vary significantly among sites at the deepest layer (50-100 cm) but varied widely at the shallower depths (0-50 cm). This is probably a function of the vegetation cover and forest use rather than inherent site qualities, such as soil type. Interestingly, a comprehensive analysis of the global database on these relationships (Jarvis et al 2013) reported that soil texture has only a weak effect on soil hydraulic conductivity, particularly at shallower depths (<30 cm), compared with SOC, BD, and land use. Thus, due to those variations, as well as the presence of an impeding layer, the shallower depths largely govern hydrological pathways causing water to pond or flow laterally, depending on rainfall intensity (Figure 4A). For instance, overland flow or ponding is probable in the mixed forest site with maximum 30-minute (I30max) or 60-minute (I60max) rainfall intensities of 47.8 mm h-1 and 34.6 mm h-1, respectively. The intensities were derived from a total of 103 rainfall events recorded during the monsoon periods of 2015 and 2016 that highlight the significant contribution of monsoonal rainfall to the annual totals (Figure 4B). Specifically, the monsoonal totals were 874 mm (97% of the June-December rainfall) in 2015 and 1030.9 mm (82% of the annual rainfall) in 2016.
These values are comparable to the long-term measurements of 944 mm (76% of the annual totals) recorded close to the present study area (Weather station 2, Figure 1A). The frequency and distribution of the maximum 30-minute (I30max) and 60-minute (I60max) rainfall intensities are presented in Figure 4C. However, the observed patterns at the mixed forest site are unlikely to represent the dominant flow path because the median values of I30max (11.2 mm h-1) and I60max (7.6 mm h-1) suggest vertical percolation, with overland flow or ponding probable only beyond the 80th percentile (21.8 mm h-1) of I30max. In fact, percolation to varying depths occurs at all other sites, even under the maximum of I30max, until ponding or lateral flow occurs (Figure 4A), with the pine-dominated site allowing percolation to the deepest layer (20-50 cm). Even though higher rainfall intensities for shorter intervals, for example a maximum hourly equivalent of 88.8 mm h-1 to 130 mm h-1 for 5-minute intervals, have been used to infer the hydrological pathways in other parts of the mid-hills (Gilmour et al 1987; Ghimire et al 2013), studies have recognized such rainfall patterns to be less dominant in the region. Hydrological implications of the sustained community forestry practices Forest-use practices strongly influence the hydrological outcomes of many tropical and subtropical landscapes (Ilstedt et al 2007), which often support the traditional lifestyle of many local communities. In Nepal's mid-hills, the forest fodder and litter are mixed with animal dung, which constitutes the primary source of soil enrichment (Pilbeam et al 2005; Giri and Katzensteiner 2013), including improvement of N, P, and K levels (Balla et al 2014). While all forest types of the mid-hills (national, private, or community forests) supply these products, community forests alone contribute more than 50% of the litter supplied (Adhikari et al 2007). Yet, the hydrological effects of the sustained removal of these forest products are uncertain, even though the resulting reductions in soil microbial activity (Ding Ming et al 1992) and SOC levels are known to reduce rainfall infiltration (Franzluebbers 2002). Further, the corresponding increase in compaction, resulting from higher foot traffic and trampling, exacerbates the situation because it impedes soil hydraulic conductivity (Startsev and McNabb 2000). The lower Ks values of the broadleaf forest site are indicative of this effect, as it undergoes higher foot traffic due to the greater use value of the forest products, while the mixed forest site has higher foot traffic due to its proximity to the road and has correspondingly low Ks values. This could hamper the replenishment of soil and groundwater reserves, contributing to reduced dry-season flows in the area, even though water use by vegetation is an important consideration in evaluating these effects. Indeed, removal of litter and woody debris has been found to cause increased soil loss and runoff (Hartanto et al 2003), while the compaction related to forest use accelerates erosion, shallow landslides, and floods (Alaoui et al 2018). Moreover, a recent study (Upadhayay et al 2018) showed that community forestry practices induce higher sediment loss than that lost from agricultural land in Nepal's mid-hills catchments.
Such a situation confounds reported land use-related social and environmental consequences in the region (Gardner and Gerrard 2003; Jaquet et al 2016) and is ironic because much of the forest in the mid-hills was established to curb sediment loss. Conclusion Increased forestation is widely believed to improve hydrological conditions, particularly in tropical and subtropical environments. However, land-use history and prevalent management regimes, such as community forestry practices, might have a greater effect on forest-water relationships, as shown by the results of this study. Specifically, broadleaf and mixed forest sites showed higher compaction (BD) and lower hydraulic conductivities (Ks) than the minimally used religious forest, which is likely the result of the higher foot traffic and increased trampling associated with greater use of the sites by CFUG members. The Ks values of the broadleaf and mixed forest sites were lower, despite their higher SOC values, than the pine-dominated site, even though higher levels of SOC improve soil infiltration of forested sites. (FIGURE 4 caption: (A) potential hydrological pathways for the various study sites after a 30-minute rainfall intensity of 47.8 mm h-1; (B) monthly rainfall distribution during the study period; (C) frequency distribution of maximum 30-minute (left) and 60-minute (right) rainfall intensities recorded at the study sites in the Roshi Khola catchment of Kavre, Nepal.) With growing debate about the role of pine plantations on reduced dry season streamflows in parts of the mid-hills of Nepal, this preliminary study suggests that a more nuanced understanding of the impact of community forestry on catchment hydrology is needed. It also highlights the need for increased research, particularly in view of the prevailing community forestry practices in the broader mid-hills region. ACKNOWLEDGMENTS We are grateful to the members of the Indreswar Thalpu CFUG and the Mukteswar Mahadev religious committee for their help in our work; we received invaluable support to conduct our field activities as well as background information about the local landscape. We thank the people of Malpi and Dhungkharka community for their support, especially Bharat Mijar, Hari KC, Balan Mahat, and Prem Timilsina for the logistic help during the fieldwork. We thank the District Forest Office of Kavre for the valuable information about the community forestry practices in the research area and the overall support of our project. We thank the anonymous reviewers for the constructive comments on the paper.
2020-02-20T09:15:11.347Z
2019-08-15T00:00:00.000
{ "year": 2020, "sha1": "56a86f3ce6ceab20033b7a15f7130b512380acdd", "oa_license": "CCBY", "oa_url": "https://bioone.org/journals/mountain-research-and-development/volume-39/issue-3/MRD-JOURNAL-D-18-00066.1/Negative-Trade-offs-Between-Community-Forest-Use-and-Hydrological-Benefits/10.1659/MRD-JOURNAL-D-18-00066.1.pdf", "oa_status": "GOLD", "pdf_src": "BioOne", "pdf_hash": "1bcaab0ec6509f4fba6ad404e50a197d98c1c5ca", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Environmental Science" ] }
56299218
pes2o/s2orc
v3-fos-license
EFFECTS OF TRAFFIC CONTROL ON THE SOIL PHYSICAL QUALITY AND THE CULTIVATION OF SUGARCANE ( 1 ) The cultivation of sugarcane with intensive use of machinery, especially for harvest, induces soil compaction, affecting the crop development. The control of agricultural traffic is an alternative of management in the sector, with a view to preserve the soil physical quality, resulting in increased sugarcane root growth, productivity and technological quality. The objective of this study was to evaluate the physical quality of an Oxisol with and without control traffic and the resulting effects on sugarcane root development, productivity and technological quality. The following managements were tested: no traffic control (NTC), traffic control consisting of an adjustment of the track width of the tractor and sugarcane trailer (TC1) and traffic control consisting of an adjustment of the track width of the tractor and trailer and use of an autopilot (TC2). Soil samples were collected (layers 0.00-0.10; 0.10-0.20 and 0.20-0.30 m) in the plant rows, inter-row center and seedbed region, 0.30 m away from the plant row. The productivity was measured with a specific weighing scale. The technological variables of sugarcane were measured in each plot. Soil cores were collected to analyze the root system. In TC2, the soil bulk density and compaction degree were lowest and total porosity and macroporosity highest in the plant row. Soil penetration resistance in the plant row, was less than 2 MPa in TC1 and TC2. Soil aggregation and total organic carbon did not differ between the management systems. The root surface and volume were increased in TC1 and TC2, with higher productivity and sugar yield than under NTC. The sugarcane variables did not differ between the managements. The soil physical quality in the plant row was preserved under management TC1 and TC2, with an improved root development and increases of 18.72 and 20.29 % in productivity and sugar yield, respectively.Index terms: agricultural mechanization, soil compaction, root system, technological variables, Saccharum sp. RESUMO: CONTROLE DE TRÁFEGO E SEU EFEITO NA QUALIDADE FÍSICA DO SOLO E NO CULTIVO DA CANA-DE-AÇÚCAR INTRODUCTION Sugarcane is a key crop in the current scenario of the Brazilian agriculture, economically one of the most important, with prospects for expansion of the crop in the next years.One possibility of improving the profitability of the sugarcane production system is the use of mechanical harvesting without burning, which is a worldwide acknowledged, rational management technique, which triggers a series of environmental and economic benefits. In the production system of harvested sugarcane without burning, agricultural machinery is used in all activities related to tillage, cultural practices, and harvest.This causes heavy tractor traffic on the soil, leading to a low efficiency of the machinery, high operational costs and mainly to soil compaction.The reason is that the traditional management system uses a spacing of 1.4-1.5 m and a track width of the vehicles of less than 2.0 m, i.e., the machinery pass over the plant stumps, which can reduce the productivity and longevity of sugarcane fields (Souza et al., 2005;Braunack & McGarry, 2006). 
Research has shown the effect of compaction on the soil physical quality, increasing soil density and mechanical strength (Materechera, 2009;Cavalieri et al., 2011) and decreasing the pore volume, particularly of macropores (Streck et al., 2004;Souza et al., 2006;Braunack & McGarry, 2006).Soil compaction also affects the soil structure, changing the aggregate stability and modifying the particle arrangement (Tullberg et al., 2007;Roque et al., 2010;Vezzani & Mielniczuk, 2011).These changes create a less favorable environment for the development of the cane root system (Alvarez et al., 2000;Otto et al., 2009Otto et al., , 2011)), reducing the productivity (Paulino et al., 2004;Braunack & McGarry, 2006).Agricultural managers are concerned about the restrictions in the soil for root development in sugarcane fields, for hampering the commercial production (Smith et al., 2005). Studies have also shown the effect of agricultural management on the quality of the raw material of R. Bras.Ci.Solo, 38:135-146, 2014 sugarcane (Wiedenfeld, 2008;Larrahondo et al., 2009).Improvements in soil properties can also contribute to increase the sugarcane quantity and quality (Meyer & Wood, 2001;Souza et al., 2005;Braunack & McGarry, 2006).In the sugarcane industry, the commercial value of cane is based on the quality of the raw material, measured by technological variables (Meyer & Wood, 2001;Consecana, 2006).Thus, improving the sugarcane quality in the phases of the agricultural production process improves the competitiveness of the company on the domestic and international markets (Larrahondo et al., 2009).However, there are few studies in the literature focused on the soil physical quality and its effects on the technological variables of sugarcane. A solution to decrease the effect of soil compaction by agricultural machinery traffic on the crop development is the adoption of a system of controlled traffic or agricultural traffic control (Tullberg et al., 2007;Materechera, 2009).In this system, the field consists of zones used exclusively for traffic and others exclusively for plant growth, concentrating the passing of the tires along defined tracks, so a smaller area is affected, though more intensely (Braunack & McGarry, 2006;Vermeulen & Mosquera, 2009).The traffic zones can remain in the same place for one crop cycle or be maintained over several cycles, as commonly done for sugarcane.There is still a need to validate a traffic control management adapted to the growing conditions of Brazilian sugarcane fields that would fit the peculiarities of this heavily mechanized production system. The use of the management with traffic control in agricultural areas that preserve the physical soil quality in the growing region of the root system, resulting in improvements in soil physical quality and contributing to the sustainability of crops (McHugh et al., 2009;Qingjie et al., 2009).This improvement in soil physical quality is associated with a reduction in the trafficked area (Laguë et al., 2003).Another advantage is the practicality and economic viability of this promising innovative technology for agricultural production systems, since it can increase profits by up to 50 % (Tullberg et al., 2007;Kingwell & Fuchsbichler, 2011). 
Another technique of increasing importance in the agricultural sector that can be used together with the traffic control is the assisted steering system, popularly known as autopilot.The autopilot corrects the lateral alignment of the agricultural machinery and allows a displacement with minor deviations from the central alignment along a straight or curved course.In crops that cover large areas, such as sugarcane, the parallelism between rows promotes greater efficiency in traffic, since the implements have constant width, avoiding tractor wheel traffic on the crop row, which affects the aerial part and root system (Braunack & McGarry, 2006;Vermeulen & Mosquera, 2009).Therefore, it is possible that the traffic control, consisting of the adjustment of the vehicle track width and use of an autopilot, reduces soil compaction in the plant rows, improving the root development, productivity and technological variables of sugarcane.Thus, the purpose of this study was to evaluate the physical quality of an Oxisol under managements with and without traffic control and their effects on the root development, productivity and technological variables of sugarcane. MATERIAL AND METHODS The experiment was conducted in a commercial sugarcane field (Saccharum sp.) in Pradópolis, State of São Paulo (latitude 21 o 18' 67'' S and 48 o 11' 38'' W; 630 m asl).The area has a history of intensive sugarcane cultivation for over 30 consecutive years, in the last 12 years without burning at harvest.The climate is mesothermal with dry winters (Cwa), according to the Köppen classification, with an average annual rainfall of 1,400 mm, concentrated between November and February. The experiment was carried out on a 16.20 ha field with undulated relief.The soil was classified as a Latossolo Vermelho distrófico típico (LVd), with a moderate clayey horizon (Embrapa, 2006) and Oxisol -Typic Kandiudox (Soil Survey Staff, 2010) (Table 1).The sugarcane variety RB855453 was planted on August 29, 2007, being harvested for the third time in the experimental period in 2010.The following management systems were installed: no traffic control (NTC), with spacing between plant rows of 1.5 m and a track width of the tractor and sugarcane trailer (loading truck of harvested cane) of 2.0 m; traffic control (TC1) with plant row spacing of 1.5 m and adjustment of track width of tractor and trailer to 3.0 m and traffic control (TC2) with plant row spacing of 1.5 m and adjustment of track width of tractor and trailer to 3.0 m and use of an autopilot at planting and the subsequent harvests.The adjustment of the track width of the tractor and sugarcane trailer alters the area directly in contact with the tires due to the overlapping of the vehicle tracks.In the systems with and without traffic control, 47 and 73 % of the total cultivation area were driven over by the tires of the tractor and sugarcane trailer, respectively.In all three management systems, the area impacted by the harvester was 56 %.The adoption of traffic control resulted in the coining of the term "sugarcane seedbed region", which is the soil strip of at least 0.40 m width on either side of the plant row that is untouched by the traffic tires of the tractor and sugarcane trailer, which is concentrated in the center of the inter-rows. 
The experimental area was prepared by the mechanical removal of ratoon sugarcane of the previous crop and subsoiling to 0.45 m, in the planting rows only, on July 15, 2007.Before soil tilling 2.5 Mg ha -1 limestone was applied and 20 Mg ha -1 filter cake at planting.In July 2009 and 2010, after harvest, 280 and 260 kg ha -1 of N-P-K fertilizer (32-00-02) were applied, respectively, and 100 m 3 ha -1 of vinasse.For the mechanical operations, the tractor model Case-IH Magnum was used, with all-wheel-drive, maximum engine power of 270 HP (198 kW), mass of 11.7 Mg, Trelleborg TM900 tires (600-70 R30 in front and 650-85 R38 rear tires), with a tire pressure of 110 kPa and 150, respectively, to pull the implements.At harvest on June 10, 2010, a track harvester series A-7700 Case IH was used, track width 1.88 m, on caterpillar tracks, maximum engine power of 335 HP (246 kW) and mass of 18.5 Mg and a tractor Case-IH, pulling a loading truck with three compartments, with average total mass of 40 Mg and Trelleborg tires Twin404 600-50 R22, 5 with inflation pressure of 110 kPa. Disturbed and undisturbed soil was sampled from the three treatments under the plant row (PR), the inter-row center (IRC) and the seedbed region (SB), at a distance of 0.30 m from the plant row.For this purpose, trenches (0.90 × 0.40 × 0.40 m) were opened, perpendicular to the PR direction.Disturbed and undisturbed soil was sampled from the center of the layers 0.00-0.10,0.10-0.20 and 0.20-0.30m to determine the physical properties.Undisturbed soil was collected in volumetric cylinders (diameter 0.05 m, height 0.05 m) to measure bulk density (Bd) and total porosity.The microporosity was determined by subjecting the cylinders to a voltage of 6.0 kPa after soil saturation and macroporosity was measured by the difference between the total porosity and microporosity (Embrapa, 1997).The compaction degree was assessed by the relationship between Bd and maximum density, identified by standard proctor test (Carter, 1990;Reichert et al., 2009).The compaction curve obtained by the standard proctor test (Bd = -0.90** + 14.66 ** x -23.39 ** x 2 , R 2 =0.96 ** , n=7), indicated a maximum soil density of 1.40 Mg m -3 related to the optimum moisture content of 0.31 kg kg -1 . The soil penetration resistance was determined by an impact penetrometer with a cone angle of 30 o .The soil penetration at the rod of the device (cm/impact) was transformed into soil penetration resistance as described by Stolf (1991).The soil water content was determined by the gravimetric method (Embrapa, 1997).The penetration resistance and water content were determined at the same points and layers as of soil sampling. Undisturbed soil samples were manually removed for analysis of the aggregate stability and soil organic matter and packed in plastic containers.The aggregate stability was measured by wet-sieving (Kemper & Chepil, 1965).The aggregates were obtained by manual manipulation of the soil clods.Aggregates with a diameter of 2.00 to 6.35 mm were separated by wet-sieving, using sieves of 2.0, 1.0, 0.5 and 0.125 mm mesh.The aggregate mass in the different sieve classes was calculated.The following aggregation indices were used: mean weight-diameter, calculated as the mean sieve diameter weighted by the soil mass; stable aggregates, as the percentage of aggregates with a diameter >2.0 mm, and aggregate stability index, which is the percentage of aggregates with a diameter >0.125 mm (Wendling et al., 2005). 
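As a concrete illustration of the derived quantities described above, the short Python sketch below reproduces the arithmetic behind the compaction degree (field bulk density relative to the proctor maximum obtained from the reported compaction curve), macroporosity (total porosity minus microporosity), and the mean weight-diameter of aggregates. It is only a sketch under these stated assumptions; the function names are ours, and the snippet is not part of the original analysis, which was run in SAS.

import numpy as np

# Proctor compaction curve reported for this Oxisol: Bd = -0.90 + 14.66*w - 23.39*w**2
PROCTOR = np.poly1d([-23.39, 14.66, -0.90])

def max_bulk_density():
    """Return (optimum water content, maximum bulk density) from the proctor curve."""
    a, b = PROCTOR.coefficients[0], PROCTOR.coefficients[1]
    w_opt = -b / (2 * a)                      # vertex of the parabola
    return w_opt, float(PROCTOR(w_opt))

def compaction_degree(bd, bd_max):
    """Compaction degree (%) = field bulk density relative to the proctor maximum."""
    return 100.0 * bd / bd_max

def macroporosity(total_porosity, microporosity):
    """Macropores (m3 m-3) = total porosity minus micropores retained at 6 kPa tension."""
    return total_porosity - microporosity

def mean_weight_diameter(class_mean_diam_mm, class_mass_g):
    """Mean weight-diameter: sieve-class mean diameters weighted by the mass retained."""
    d = np.asarray(class_mean_diam_mm, dtype=float)
    m = np.asarray(class_mass_g, dtype=float)
    return float((d * m).sum() / m.sum())

w_opt, bd_max = max_bulk_density()                 # ~0.31 kg kg-1 and ~1.40 Mg m-3, as reported
print(round(w_opt, 2), round(bd_max, 2))
print(round(compaction_degree(1.30, bd_max), 1))   # ~93%, the restrictive threshold discussed later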
The crop was harvested mechanically without burning on June 10, 2010, by a self-propelled harvester, accompanied by a tractor and sugarcane trailer.The sugarcane trailer was weighed empty and full, immediately after the harvest of each plot to determine the productivity, using a specific weighing scale.Each plot consisted of 14 plant rows with a length of 50 m.At harvest, 10 sugarcane stalks in sequence were collected from the center row of each plot to determine the technological variables: cane fiber, brix (total soluble solids) in cane juice, cane juice purity, apparent sucrose content and total recoverable sugar (Consecana, 2006).The sugar yield was calculated as the product of productivity and total recoverable sugar. Immediately after harvest, samples were collected from each plot for analysis of the root system.In each plot one trench was opened perpendicular to the plant rows and between two plant rows to collect 18 monoliths (width 0.25, height 0.10, length 0.10 m), according to Vasconcelos et al. (2003) and Otto et al. (2009Otto et al. ( , 2011)).The monoliths were collected with a stainless steel box, with openings in the top and bottom, which was driven into the soil in the trench wall at the same points and layers used for soil sampling.Roots were separated from the soil by washing in water and sieving (2.0 mm) (Vasconcelos et al., 2003).The root images were scanned in an optical scanner (resolution 400 dpi) to analyze root diameter, density, surface area and volume using Soil layer pH (1) P K Ca Mg H+Al SB CTC V Clay (2) Silt Sand m mg dm -3 mmol c dm -3 % g kg -1 2001); (2) Pipette method at low rotation (Camargo et al., 1986). The experiment was arranged in a randomized block design (n=4), with three management systems distributed in 12 plots, for analysis of productivity and technological variables.For the data analysis of the soil properties and root system, the same design was considered, with a split-plot arrangement [management system (n=3), sampling site (n=3), soil layer (n=3)], totaling 108 observations.In each plot, three trenches were opened (replication) to collect soil samples (n=324); roots were collected only from the central trench in duplicate (n=216).Statistical analysis was performed using the SAS program (SAS, 2008), by analysis of variance (p<0.05).In case of significance for interactions or between levels of the isolated factors, the Tukey test was applied (p<0.05). RESULTS AND DISCUSSION Analysis of variance was not significant (p<0.05) for the properties of soil and those related to root development for the triple interactions (management system × sampling site × soil layer), but the double interactions were statistically significant (management × sampling site and/or management × soil layer).When the double interactions were not significant by the F test, the factor levels were analyzed separately. Higher total porosity and macroporosity in the plant row were observed in the managements TC1 and TC2 and lower soil Bd and compaction degree under TC2 than NTC, also in the plant row (Table 2).This was due to the absence of traffic of tires of the tractor and trailer over or near the plant rows under TC1 and TC2, owing to the adjustment of the track width, with or without the use of an autopilot.These results are in agreement with Qingjie et al. (2009), which indicated that the management with control of agricultural traffic was efficient in improving soil physical conditions in China.McHugh et al. 
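The analysis of variance and Tukey comparisons described above were run in SAS; as an open-source illustration of the final pairwise step only, the sketch below applies a Tukey HSD comparison of management means with statsmodels. The productivity values are invented purely to show the call, they are not the study's data, and the block and split-plot structure of the real design is ignored here.

import numpy as np
import pandas as pd
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical plot-level productivity (Mg ha-1): 4 blocks x 3 management systems.
data = pd.DataFrame({
    "management": np.repeat(["NTC", "TC1", "TC2"], 4),
    "productivity": [78, 81, 76, 80, 93, 95, 90, 96, 94, 97, 92, 95],
})

# Pairwise comparison of the management means with Tukey's HSD at alpha = 0.05,
# analogous to the Tukey test applied after the analysis of variance (p < 0.05).
result = pairwise_tukeyhsd(endog=data["productivity"], groups=data["management"], alpha=0.05)
print(result.summary())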
(2009) also observed a reduced Bd of 1.40 to 1.25 Mg m -3 in the crop row after 22 months of implementation of the traffic control after 30 years of conventional management. The soil physical properties under the inter-row center and the region of the seedbed did not differ significantly between the treatments, with the exception of TC2, where total porosity was lowest under the inter-row center (Table 2).The use of an autopilot in management TC2 shifts the position of the vehicle tires into the middle of the sugarcane interrow -which are permanent traffic lines -increasing the compaction.This higher compaction was a result of stress distribution in the soil due to the traffic that occurred throughout the three cycles of sugarcane cultivation with traffic control.However, a higher soil compaction in the vehicle tracks improves traffic conditions and increases the traction efficiency (Laguë et al., 2003;Kingwell & Fuchsbichler, 2011).This scenario is beneficial to the agricultural activity, improving soil conditions for crop growth and for machinery traffic. Under TC1 and TC2, there was an increase in total porosity and macroporosity in the direction of the interrow center to the plant row (Table 2).This behavior was not observed for management NTC, due to tire traffic on the seedbed and/or plant row, while the impacted area under TC1 and TC2 was smaller, agreeing with Laguë et al. (2003) and Braunack & McGarry (2006).In the traffic-free areas, soil conditions are ideal for sugarcane development, since there is no obstacle to the development of the root system, influencing the productivity and longevity of the sugarcane field positively (Braunack & McGarry, 2006). The Bd, compaction degree and soil penetration resistance decreased from IRC>SB>PR for TC2, which did not occur in the management systems TC1 and NTC, where the values were similar under the interrow center and seedbed.The use of an autopilot in TC2 reduced errors in parallelism of traffic of agricultural machinery, avoiding tractor and trailer traffic on the plant rows (Vermeulen & Mosquera, 2009), which resulted in this variation in soil physical properties from the inter-row center to the plant row.The soil water content (data not shown) did not differ significantly between management systems, with an average value in the plots of 0.176 kg kg -1 .Soil penetration resistance is a property influenced by the soil water content and given the horizontal and vertical uniformity of the soil water content during the sampling period, the apparent effect on penetration resistance must be associated with differences caused by the soil management. Soil Bd ranged from 1.10 to 1.17 Mg m -3 in the plant row and 1.30 to 1.35 Mg m -3 in the interrow center (Table 2).These density values agreed with Neves et al. (2003), who observed a density of 1.42 Mg m -3 in compacted areas and 1.18 Mg m -3 in uncompacted areas in Latossolos Vermelhos distroférricos with clay contents of approximately 700 g kg -1 .Otto et al. (2009) observed a soil Bd of 1.46 to 1.64 Mg m -3 in a Latossolo Vermelho with medium texture.The average compaction degree in the treatments ranged from 78.6 to 96.2 %, in agreement with Carter (1990) and Reichert et al. (2009). The average total porosity was 0.535 and 0.572 m 3 m -3 , in the inter-rows and plant rows, respectively.These results agreed with Neves et al. (2003), who also observed a total porosity of 0.48 m 3 m -3 in compacted areas and 0.60 m 3 m -3 in uncompacted areas.Otto et al. 
(2011) reported a total porosity of 0.39 and 0.49 m 3 m -3 in vehicle tracks and plant rows in a Latossolo Vermelho with approximately 300 g kg -1 clay.In the management systems, sampling sites and soil layers, the macroporosity was greater than 0.10 m 3 m -3 (Table 2), which is considered a minimum threshold for soil aeration of macropores, required for the development of the root system (Xu et al., 1992).This indicates a probable absence of limitations to soil aeration, even in periods of heavy rainfall.The average microporosity ranged from 0.355 to 0.389 m 3 m -3 at the sampling sites, following the changes in macroporosity and total porosity, which disagrees with Streck et al. (2004).Paulino et al. (2004) found microporosity ranging from 0.273 to 0.320 m 3 m -3 , respectively, in a scarified and harrowed Latossolo Vermelho with 260 g kg -1 clay. In the plant row, values of soil penetration resistance ranged from 1.62 to 2.54 MPa ( Table 2. Physical properties of an Oxisol measured in the plant row (PR), inter-row center (IRC) and seedbed (SB) in the studied layers under managements with traffic control consisting of the adjustment of the track width and use of an autopilot (TC2) and only adjusted track width (TC1) and no traffic control (NTC) Means followed by the same capital letter in the column and lowercase in the line did not differ statistically (p<0.05). sugarcane.Penetration resistance only reached a level considered restrictive to root growth (>2 MPa) (Otto et al., 2011) in the plant row in the management NTC, indicating the soil physical degradation.Considering the trend of using reduced tillage at sugarcane planting, the lower penetration resistance in the plant row reduces the energy consumption and wear of implements and improves the efficiency of agricultural machinery for soil tillage, since the active parts (furrowers) will work in a traffic-free area (Laguë et al., 2003;Kingwell & Fuchsbichler, 2011). There was no difference in Bd, compaction degree, total porosity, and macroporosity between the evaluated soil layers, although microporosity and penetration resistance decreased in the deeper layers of the three management systems (Table 2).Thus, the increased penetration resistance at the surface reduced macroporosity and increase the proportion of micropores.In sugarcane cultivation, soil disturbance occurs at crop replanting, usually every five years, so that the traffic effect is accumulated mainly at the surface (Cavalieri et al., 2011).It is noteworthy that, in the traffic control, the soil remains compacted by machinery in the crop inter-row center, which does not limit the root growth (Otto et al., 2009) and can increase the longevity of sugarcane. The management systems did not differ in mean weight-diameter, stable aggregates and aggregate stability index (Table 3).These results agreed with Roque et al. (2010), whereas Braunack & McGarry (2006) found differences in the stability of soil aggregates of the seedbed region in sugarcane fields under different tillage systems.The agricultural machinery traffic can cause compaction of the soil aggregates, leading to aggregate rupture and the formation of a massive soil structure, however, the wet sieving process is not able to distinguish stable aggregates from massive soil structures, showing no significant difference between the results for the respective management systems (Severiano et al., 2008).The indices of soil aggregation were less sensitive to changes in management than soil Bd and porosity. 
In all management systems, the mean aggregation was greater in the plant row than in the seedbed region and inter-row center, which may be explained by the values of total organic carbon (Table 3).The results indicated the preservation of structural quality in the plant row, which is beneficial to the development of sugarcane roots.The highest content of total organic carbon in PR is related to the greater root development and the concentrated application of filter cake there.These results agree with the higher aggregation levels in the plant row, since organic matter is a major cementing agent, responsible for the formation and stabilization of aggregates (Vezzani & Mielniczuk, 2011).The sugarcane root system also contributes to soil aggregation by the mechanical action of the roots and the excretion of substances with aggregating action (Wendling et al., 2005;Vezzani & Mielniczuk, 2011). For the management systems, the mean weightdiameter, stable aggregates and aggregate stability index differed with soil depth; the values were highest in the 0.00-0.10m layer and decreased in deeper layers (Table 3).Total organic carbon also decreased with depth, which again justifies the reduction of soil aggregation indices in deeper layers.These results corroborate Wendling et al. (2005) and Roque et al. (2010). The root surface and volume did not differ between management systems with traffic control (Table 4).In TC2, the surface area and root volume were greater in the seedbed region and the plant row, due to less soil compaction.Paulino et al. (2004) also observed an increased surface area and density of sugarcane roots with the increase of macroporosity and reduced Bd.The increased surface area and root volume favors the development of the aerial part of the crop, since explore a greater soil volume can be explored, and from the greater area of soil-root contact, absorb more water and nutrients.This improved soil exploitation is important, especially for nutrients with low soil mobility, such as P (Smith et al., 2005). The management systems differ for root dry matter (RDM), with highest values in TC2 in the three soil layers (Table 4).These results are associated to improvements in soil physical quality promoted by adjusting the track width and use of an autopilot, corroborating Otto et al. (2009Otto et al. ( , 2011)).The RDM decreased when soil compaction exceeded 86 %, corresponding to a Bd of 1.20 Mg m -3 , and above 93 % (Bd 1.30 mg m -3 ) the soil physical restrictions were more severe for root development (Figure 1).The values of RDM were lower than those reported by Vasconcelos et al. (2003) in a sugarcane field harvested mechanically (1.33 g dm -3 ), eight months after planting (layer 0.00-0.20 m) in the fifth crop cycle, in São Paulo state.Aside from the effects of variety, soil fertility management and climate, the sampling time may have influenced the results, since the roots were sampled immediately after sugarcane harvest, in a period of sugarcane senescence (Smith et al., 2005). The RDM and density increased in the direction IRC<SB<PR for TC2, however in the managements TC1 and NTC, the values did not differ between the sampling sites (Table 4).Braunack & McGarry (2006) and Otto et al. 
(2009) also observed a reduction in the development of the sugarcane root system with increasing distances from the plant row and approaching the inter-row center, due to higher soil compaction.Plant responses to soil compaction are mediated by changes in the development and functioning of roots (Alameda et al., 2012), which can affect productivity and product quality. In the three management systems, RDM and surface decreased in the deeper layers (Table 4).This was due to decreasing soil structure and nutrient availability with depth (Table 1).According to Smith et al. (2005), the concentration of sugarcane RDM and density are highest at the surface, decreasing exponentially with depth.In the management of mechanical harvesting, sugarcane has 70-85 % of its roots in the 0.00-0.40m layer (Alvarez et al., 2000;Vasconcelos et al., 2003;Otto et al., 2009).The sugarcane root system may be active down to a depth of 2.0 m, but root growth can continue and reach deeper layers (Smith et al., 2005). In the average of the three management systems, the root diameter was lowest under the inter-row center (Table 4).This was due to the lower soil porosity, which limited the increase in root thickness.Queiroz-Voltan et al. (1998) also observed an effect of compaction on the anatomy of sugarcane roots, since the higher soil compaction affects the thickness of the cortex and vascular cylinder, which may reduce the root diameter. The sugarcane productivity under managements TC1 and TC2 were higher (18.09 and 19.34 %, respectively) than of NTC (Table 5).The same was true for sugar yield, with an increase of 20.22 and 20.36 % for TC1 and TC2, respectively.The higher productivity and sugar yield were due to the preservation of soil physical quality in the plant seedbed in the management systems with traffic control, allowing a greater root development (Table 4), reflected in the growth of the aerial part.These results agree with Braunack & McGarry (2006), who observed an increase in sugarcane productivity and sugar yield in a management system with traffic control in Australia.Qingjie et al. ( 2009) also observed better soil physical conditions in the management with traffic control, increasing the average wheat yield by 6.9 % than the random traffic (no traffic control). 
The technological variables did not differ between the management systems, i.e., the amounts of cane fiber, total soluble solids, cane juice purity, apparent sucrose content and total recoverable sugars were similar among management systems with and without traffic control (Table 6).The greater soil compaction in the plant row in the management no traffic control could reduce the absorption of water and nutrients from the soil, due to the restricted root growth (Table 4), influencing the sugarcane technological quality (Meyer & Wood, 2001).However, the adequate soil fertility, with the regionally recommended nutrients levels (Raij et al., 2001), together with favorable climatic conditions were sufficient to meet the crop demands, even under management without traffic control with reduced root development, without affecting the sugarcane technological quality (Meyer & Wood, 2001).The results showed that the highest sugar yield in the management with traffic control is mainly associated with increased productivity rather than improvements in the quality of the raw material (Tables 5 and 6).These results agree with Braunack & McGarry (2006), who reported that the commercial sugar yield per produced cane quantity (139 kg Mg -1 ) was not affected by the management systems with and without traffic control.Wiedenfeld (2008) also stated that changes in soil quality did not affect the quality of the raw material of sugarcane in Texas. The amounts of cane fiber, apparent sucrose content and cane juice purity were above the minimum levels recommended as quality threshold (Ripoli & Ripoli, 2004), which is 11-13, >14 and >85 %, respectively (Table 6).Larrahondo et al. (2009) found values of cane fiber, apparent sucrose content, cane juice purity, and total soluble solids of 18 %, 14.2 %, 84.9 % and 16.7º Brix, respectively, in sugarcane harvested mechanically in Colombia.Souza et al. (2005) analyzed 18 sugarcane varieties in São Paulo, found mean values of cane fiber, cane juice purity, apparent sucrose content and total recoverable sugars of 91 %, 11 %, 18 % and 168 kg Mg -1 , respectively, in a management where the cane trash was left unchopped on the soil surface, similar to this study.Table 5. Sugarcane productivity and sugar yield under managements with traffic control with adjusted track width and use of an autopilot (TC2) and only adjusted track width (TC1) and with no traffic control (NTC) Means followed by the same capital letter in the column did not differ statistically (p<0.05). CONCLUSIONS 1.The management system with traffic control, based on the adjustment of the track width of tractortrailer set and the use of an autopilot preserved the soil physical quality in the plant rows and resulted in greater compaction under the inter-row center. 2. Management systems to control traffic based on an adjustment of the track width of the tractortrailer set, with or without the use of the autopilot, allowed the best cane root development in the plant rows and in the seedbed region. 3. The soil compaction degree above 93 %, corresponding to a bulk density of 1.30 Mg m -3 , severely restricted the sugarcane root development. 4. The management systems with traffic control led to an increase in sugarcane productivity of 18.72 % and of sugar yield of 20.29 %, without improvements in technological variables compared to the management no traffic control. Figure 1 . Figure 1.Distribution of sugarcane root dry matter according to the soil compaction degree in management systems. Table 2 ) . 
Similar values were observed by Souza et al. (2006) and Roque et al. (2010) in soils cultivated with sugarcane.
Table 3. Aggregate stability of an Oxisol measured in the plant row (PR), inter-row center (IRC) and seedbed (SB) in the studied layers under managements with traffic control consisting of the adjustment of the track width and use of an autopilot (TC2) and only adjusted track width (TC1) and no traffic control (NTC). Means followed by the same capital letter in the column and lowercase in the line did not differ statistically (p<0.05).
Table 4. Sugarcane root system under managements with traffic control consisting of the adjustment of the track width and use of an autopilot (TC2) and only adjusted track width (TC1) and no traffic control (NTC), measured in the plant row (PR), inter-row center (IRC) and seedbed (SB), in the studied layers. Means followed by the same capital letter in the column and lowercase in the line did not differ statistically (p<0.05).
Table 6. Technical quality of sugarcane in managements with traffic control with adjusted track width and use of an autopilot (TC2) and only adjusted track width (TC1) and check treatment without traffic control (NTC). Means followed by the same capital letter in the column did not differ statistically (p<0.05). Fiber: cane fiber; Brix: total soluble solids; Purity: cane juice purity; ASC: apparent sucrose content; TRS: total recoverable sugars.
2018-12-19T15:15:41.327Z
2014-02-01T00:00:00.000
{ "year": 2014, "sha1": "918ff824dc57e466341f6472432898cb3b024f92", "oa_license": "CCBY", "oa_url": "https://www.scielo.br/j/rbcs/a/8bFw8Z68YxqXVCwBS3JjKML/?format=pdf&lang=en", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "918ff824dc57e466341f6472432898cb3b024f92", "s2fieldsofstudy": [ "Agricultural and Food Sciences", "Environmental Science" ], "extfieldsofstudy": [ "Environmental Science", "Computer Science" ] }
250342919
pes2o/s2orc
v3-fos-license
A Prospective Study on Risk Factors for Acute Kidney Injury and All-Cause Mortality in Hospitalized COVID-19 Patients From Tehran (Iran) Background Several reports suggested that acute kidney injury (AKI) is a relatively common occurrence in hospitalized COVID-19 patients, but its prevalence is inconsistently reported across different populations. Moreover, it is unknown whether AKI results from a direct infection of the kidney by SARS-CoV-2 or it is a consequence of the physiologic disturbances and therapies used to treat COVID-19. We aimed to estimate the prevalence of AKI since it varies by geographical settings, time periods, and populations studied and to investigate whether clinical information and laboratory findings collected at hospital admission might influence AKI incidence (and mortality) in a particular point in time during hospitalization for COVID-19. Methods Herein we conducted a prospective longitudinal study investigating the prevalence of AKI and associated factors in 997 COVID-19 patients admitted to the Baqiyatallah general hospital of Tehran (Iran), collecting both clinical information and several dates (of: birth; hospital admission; AKI onset; ICU admission; hospital discharge; death). In order to examine how the clinical factors influenced AKI incidence and all-cause mortality during hospitalization, survival analysis using the Cox proportional-hazard models was adopted. Two separate multiple Cox regression models were fitted for each outcome (AKI and death). Results In this group of hospitalized COVID-19 patients, the prevalence of AKI was 28.5% and the mortality rate was 19.3%. AKI incidence was significantly enhanced by diabetes, hyperkalemia, higher levels of WBC count, and blood urea nitrogen (BUN). COVID-19 patients more likely to die over the course of their hospitalization were those presenting a joint association between ICU admission with either severe COVID-19 or even mild/moderate COVID-19, hypokalemia, and higher levels of BUN, WBC, and LDH measured at hospital admission. Diabetes and comorbidities did not increase the mortality risk among these hospitalized COVID-19 patients. Conclusions Since the majority of patients developed AKI after ICU referral and 40% of them were admitted to ICU within 2 days since hospital admission, these patients may have been already in critical clinical conditions at admission, despite being affected by a mild/moderate form of COVID-19, suggesting the need of early monitoring of these patients for the onset of eventual systemic complications. Keywords: acute kidney injury, COVID-19, electrolyte abnormalities, renal failure, SARS-CoV-2 BACKGROUND SARS-CoV-2 is known for its ability to invade various organs (1).
Earlier studies on the impact of COVID-19 focused on the pulmonary system, and dysfunctions of other organs were attributed to hyper-inflammatory response and thrombophiliainducing multiorgan failure (MOF). ACE-2 and TMPRSS-2, surface cell proteins expressed by various tissues, are targeted by SARS-CoV-2. In addition to the respiratory system, ACE-2 and TMPRSS-2 are also expressed in the gastrointestinal tract, brain, and vessels (2)(3)(4)(5). Furthermore, ACE-2 is highly expressed in renal proximal tubules, where SARS-CoV-2 particles were detected postmortem in podocytes of COVID-19 patients, suggesting that the kidneys could also be one of the targets of SARS-CoV-2 (6,7). Acute kidney injury (AKI)-a common finding in hospitalized COVID-19 patientscan interfere with the conventional management of COVID-19, resulting in poorer prognosis in terms of higher risk of mortality, intensive care unit (ICU) admission, and prolonged hospitalization (8,9). It is unknown, however, whether AKI results from a direct infection of the kidney by SARS-CoV-2 or as a consequence of the physiologic disturbances and therapies used to treat the viral disease (10). Up until November 14, 2021, the cumulative number of COVID-19 infections globally was 3,800,000,000, with a 1%-2% hospitalization rate and a mortality rate of 194.5/100,000 (11). Prevalence of AKI in COVID-19 patients is inconsistently reported, ranging from 0.5% in China (12) to 80% among critically ill COVID-19 patients in France (13,14). While AKI prevalence among COVID-19 patients was low in initial reports from China, subsequent figures became much higher, suggesting the kidney as one of the main organs targeted by SARS-CoV-2 (15). For instance, in an observational cohort study conducted in a large tertiary care university hospital in Wuhan (China), enrolling all consecutive COVID-19 inpatients older than 65 years during January 2020, the prevalence of AKI was 14% (16). In a meta-analysis on 6,945 COVID-19 patients from China, Italy, the UK, and the USA recruited from 2019 to May 11, 2020, the incidence of AKI was 8.9% [95% CI: 4.6-14.5] (17). Higher AKI figures (46%) were observed among 3,993 COVID-19 patients aged ≥18 years admitted to the Mount Sinai Health System (New York) from February 27 to May 30, 2020 (18), and in another study (rate of 32%) in New York city on a cohort of 5,216 US veterans hospitalized for COVID-19 from February 1, 2020, to July 23, 2020 (16,19). Likewise, the incidence of AKI on 5.449 COVID-19 patients admitted to 18 university and community hospitals of New York between March 1 and April 5, 2020, was 36.6% (20). Lower AKI figures have been reported in Europe during the first pandemic wave, for instance in a multicenter study on 1,855 admissions for COVID-19 in London hospitals from January 1, 2020, up to May 14, 2020, where 455 patients (a rate of 24.5%) developed AKI (21). Likewise, prevalence of AKI among hospitalized COVID-19 patients was estimated to be 22.4% in an Italian study (22). By contrast, a study from Brazil reported an incidence of AKI of 71% among critically ill COVID-19 patients (23). In view of the above, we carried out a prospective study on hospitalized COVID-19 patients in Tehran (Iran) with a double aim: • To estimate the prevalence of AKI, since it varies by geographical settings, time periods, and populations studied; and • To investigate the risk factors predicting AKI occurrence, assessing their impact on survival of hospitalized COVID-19 patients. 
METHODS This single-center longitudinal prospective study was conducted at Baqiyatallah general hospital in Tehran (Iran), from October 22, 2020, until January 7, 2021, during the third wave of the COVID-19 pandemic (Figure 1) (24). The study received approval from the research ethics committee of Baqiyatallah University of Medical Sciences. COVID-19 diagnosis was confirmed by RT-PCR on nasopharyngeal swabs, as per WHO guidelines (25). Following triage telephone consultations, 5,890 patients with COVID-19 symptoms were referred to the Accident & Emergency (A&E) department of Baqiyatallah general hospital of Tehran (Iran) from October 22, 2020, until January 7, 2021. Two thousand COVID-19 patients were randomly selected (using a simple random code generator software) among the 3,099 hospitalized for more than 1 day. Patients receiving alternative experimental treatments (N = 110) were excluded since they were part of other clinical trials. Furthermore, patients with missing data on past medical history and serum creatinine and those with only one documented creatinine measurement were excluded (Figure 2). The final number of patients analyzed in this study was 997, broken down into 625 patients affected by mild/moderate COVID-19 and 372 patients with severe disease (Figure 2). All clinical information but ICU referral was collected at hospital admission. Blood creatinine and BUN were also measured at the end of follow-up (hospital discharge or death). The national guidelines of the Iran Ministry of Health and Medical Education (MoH) as well as WHO guidelines on management of COVID-19 (26, 28-30) were followed for the hospital management of COVID-19 patients and decision making on ICU admissions. After the initial admission, all patients were evaluated, monitored, and treated for volume depletion and high blood sugar. All patients were stabilized in the A&E department before being transferred to COVID-19 wards and were constantly monitored for their hemodynamic status during hospitalization. Treatment of patients was mainly supportive and based on the WHO guidelines on COVID-19 patient management at the time of the study. The only non-steroidal anti-inflammatory drug (NSAID) used was naproxen, administered routinely but not to patients with low eGFR or with other contraindications. Enoxaparin or heparin was used as anticoagulant. In patients with mild/moderate COVID-19, dexamethasone was administered, whereas in those affected by severe disease methylprednisolone pulse therapy was employed. Antibiotic therapy was administered only in case of secondary bacterial infection. Treatment drugs were adjusted based upon eGFR; diuretics, ACE inhibitors, and angiotensin II receptor blockers were withheld in patients at risk of AKI. Furthermore, remdesivir was not administered in patients with eGFR < 30 mL/min/1.73 m², because of its debated nephrotoxicity risk (31). Statistical Analysis Distribution of variables by AKI status (yes vs. no) was assessed by the chi-squared test for categorical terms, whereas ANOVA was employed for comparison of continuous terms by AKI status. The distribution of the timeline of ICU referral since hospital admission was contrasted with the mean length of hospital stay (LoS, in days), AKI onset (yes vs. no), timeline of AKI onset (days since hospital admission), and vital status at the end of follow-up (death or discharge). The mean and median concentrations of BUN and blood creatinine were compared between hospital admission and the end of follow-up (hospital discharge or death) by the Wilcoxon test.
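To make the survival-analysis workflow concrete, the following is a minimal sketch of how a Cox proportional-hazards model with an ICU x COVID-19-severity interaction could be fitted in Python with the lifelines package; it is not the authors' original Stata code, and the file name and column names (time_to_event, death, icu, severe, kalemia, bun, wbc, ldh) are hypothetical placeholders.

```python
# Hedged sketch (not the authors' Stata code): Cox proportional-hazards model
# for in-hospital death with an ICU x COVID-19-severity interaction.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("covid_aki_cohort.csv")        # hypothetical dataset
df["icu_x_severe"] = df["icu"] * df["severe"]   # interaction term (both coded 0/1)

cph = CoxPHFitter()
cph.fit(
    df[["time_to_event", "death", "icu", "severe", "icu_x_severe",
        "kalemia", "bun", "wbc", "ldh"]],
    duration_col="time_to_event",               # days from admission to death/discharge
    event_col="death",                          # 1 = died, 0 = censored at discharge
)
cph.print_summary()                             # hazard ratios with 95% CIs
```

An analogous model, with AKI onset as the event column, would correspond to the second of the two multivariable models described above.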
This prospective study was conducted over a short period of time collecting both clinical information and several dates (of: birth; hospital admission; AKI onset; ICU admission; hospital discharge; death). In order to examine how the above risk factors influenced mortality risk and AKI incidence during hospitalization in a particular point in time, survival analysis using the Cox proportional-hazard models was adopted. Two separate multiple Cox regression models were fitted for each outcome (AKI and death). The two multivariable models were built up only including variables significant at univariable analysis. Statistical interaction was modeled by including a product of ICU and severity of COVID-19 in the regression model to evaluate whether COVID-19 severity modified the association between ICU and in-hospital mortality. Similarly, we assessed the interaction between ICU and AKI and between COVID-19 severity and AKI. Results were expressed as hazard ratio (HR) with 95% confidence interval (95% CI). Non-significant terms at multivariable analysis were omitted from the respective tables. The level of significance for each test was set at 0.05. All terms and interactions not being significant were dropped out, and the corresponding results were not shown in the tables. Statistical analysis was conducted using Stata 14.2 (Stata Corporation, College Station, Texas, USA). Table 1, 37.3% patients were affected by severe COVID-19, whose average age was 56.6 ± 14.7 years, with 60% (N = 599) of them being males. The mean LoS was 8.8 days, 33% (=330/997) had to be admitted to ICU, and 19% (=192/997) died. AKI was more prevalent in male patients and increased with age and severity of COVID-19 and among those referred to ICU, with a progressively higher prevalence with increasing number of days between hospital admission and ICU referral. The most common comorbidity was diabetes mellitus (49.5% = 493/997) followed by hypertension (31.9% = 318/997), ischemic heart disease and heart Figure 4). Finally, patients developing AKI were featured by higher levels of BUN, creatinine, WBC, and LDH ( Table 1). Table 2, the majority of patients were referred to ICU within 2 days (N = 127, 38.5%) or 3-5 days (N = 120; 36.4%) since hospital admission, and the increasing timeline between hospitalization and ICU referral translated into longer LoS. Prevalence of AKI was higher among patients affected by a milder form of COVID-19 referred to the ICU within 2 days since hospital admission, whereas among those developing AKI 3+ days since admission to hospital, it progressively increased with days since hospital admission to ICU referral (42.5% for patients admitted <2 days since hospital admission to 53.0% among those referred to the ICU 6+ days since hospital admission). The death rate was 19.3% (=192/997), 67.7% (=130/192) vs. 32.3% (=62/192) among those not developing AKI ( Table 2). The majority of COVID-19 patients developed AKI after ICU referral, and the death rate also increased with days since hospital admission to ICU referral ( Table 2). As is shown in As is shown in Table 3, both BUN and blood creatinine increased considerably more for COVID-19 patients developing AKI, from admission to end of follow-up. Key Findings The prevalence of AKI in the present study was 28.5%, a figure fairly in line with reports from different settings and time periods (32). The risk of AKI increased with diabetes mellitus as well as hyperkalemia, higher WBC count, and increasing level of BUN measured at hospital admission. 
On the other hand, the overall mortality risk among COVID-19 patients was 19.3%. Factors associated with a higher risk of death were ICU admission for severe COVID-19 as well as for mild/moderate COVID-19, hypokalemia, higher level of BUN, increasing WBC, and increasing LDH measured at hospital admission. Interpretation of Findings In a systematic review and meta-analysis on 14,415 COVID-19 patients from different countries, the prevalence of AKI was 11% (95% CI: 0.07-0.15; p < 0.01), hence a figure much lower than the present study. Moreover, in the latter meta-analysis AKI was significantly associated with death (OR = 8.45; 95% CI: 5.56-12.56; p < 0.001) and severe COVID-19 (OR = 13.52; 95% CI: 5.43-33.67; p < 0.001) among hospitalized patients (33). In our prospective study-where a survival analysis by Cox proportional-hazard models was employed to examine risk factors influencing AKI incidence and mortality risk in a particular point in time during hospitalization-although 28.5% COVID-19 patients developed AKI and the crude death rate was higher among patients developing it (67.7%), AKI did not increase the adjusted mortality risk at multivariable analysis, a rather unexpected finding. The majority of COVID-19 patients developed AKI after ICU referral; therefore, the risk of death appeared to rise following ICU admission rather than AKI. Almost 40% patients were referred to the ICU within 2 days since hospital admission, suggesting critical clinical conditions of these patients even with less severe form of COVID-19. Kidney involvement and AKI onset in COVID-19 patients have multiple risk factors, and several explanatory mechanisms have been proposed ( Figure 5), including electrolyte imbalance, medication-induced injury, organ failure in late stages of the disease, impairment of gas exchange, hemodynamic alterations including right heart failure, systemic congestion due to fluid overload, and secondary infections/sepsis, among others (32,34). This single-center clinical study was conducted from October 22, 2020, to January 7, 2021, during the third pandemic wave in Iran (Figure 1), before large-scale vaccination campaigns against COVID-19 were deployed globally. The massive surge of COVID-19 cases did not allow to perform autopsies or biopsies on patients with AKI. Although SARS-CoV-1 proved capable of infecting kidney cells in vitro (47,48), the evidence supporting persistent infection of the kidney by SARS-CoV-2 is still unconvincing (45). An alternative plausible pathogenetic hypothesis is the "hit and run" model, where the renal injury persists after the clearance of an early direct kidney infection by SARS-CoV-2. However, AKI associated with COVID-19 is probably determined by multiple factors, including an indirect organ damage induced by the physiologic disturbances caused by COVID-19 and the therapies administered to treat the severe acute respiratory syndrome (9,32,34,45). Contrast media, corticosteroids, NSAIDs, ACEs, Angiotensin receptor blockers (ARBs), and various antibiotics are reportedly associated with increased risk of AKI in COVID-19 patients (34,40), although there is also some evidence that high daily doses (40 mg) of methylprednisolone are associated with increased mortality but lower risk of AKI in COVID-19 patients (49). 
In a meta-analysis on 23,655 hospitalized, critically ill COVID-19 patients, the incidence of AKI was not significantly different between COVID-19 patients (51%) and critically ill patients infected with ACE2-associated (56%) or non-ACE2-associated viruses (63%). The latter meta-analysis estimated a lower risk of renal replacement therapy in patients affected by COVID-19 or ACE2associated viruses (featured by a lower risk of shock and use of vasopressors) as compared with patients infected with non-ACE2-binding viruses (50). As already mentioned, considerable inconsistencies exist regarding the prevalence of AKI in hospitalized COVID-19 patients, with figures widely ranging from 0.5% to 80% (14). Explanatory factors for the inconsistency of the epidemiological evidence on AKI prevalence among COVID-19 patients include ethnicity, genetic polymorphism, type of SARS-CoV-2 variant, the underlying mechanism of kidney injury (either pre-renal, renal, or post-renal), and the methodology employed in various studies (19,41,45,51). Up until November 14, 2021 (before the spread of the Omicron variant), 56,900,000 cumulative COVID-19 infections were recorded in Iran, where the overall infection/hospitalization rate was 1%-3% and the mortality rate equaled to 277.5/100,000 (11). According to a systematic review and meta-analysis, in Iran the proportion of hospitalized COVID-19 patients developing AKI was 24% (95% CI: 17%-31%), slightly lower than figures from the present study (28.5%) (52). In an earlier study conducted in Iran during February-April 2020, the prevalence of AKI was lower (13.8%) [53]. A lower prevalence of AKI reported from China in the early stages of the COVID-19 pandemic could also be due to underestimation of signs and symptoms not involving the respiratory system (16,17). Moreover, clinical management of COVID-19 has considerably evolved over time since the early days of the pandemic, and this may account for the inconsistency in AKI figures reported by different studies in diverse periods. Despite the inconsistencies of prevalence among COVID-19 patients, common risk factors of AKI according to the open literature are advanced age, male gender, and comorbidities such as diabetes mellitus, hypertension, CKD, ischemic heart disease, electrolyte imbalance, and inflammatory markers (22,23,32,36,41,54). Likewise, the role of diabetes, electrolyte imbalance, and inflammation in AKI was confirmed in the present study. These factors were already present at hospital admission, reflecting critical clinical conditions of patients entering the hospital independently from the severity of their viral lung disease, thereby supporting the hypothesis that treatment and alterations induced also by mild/moderate forms of COVID-19 may contribute to MOF, including AKI. The enhanced risk of AKI in males may reflect on one side a higher SARS-CoV-2 infection rate in males, on the other side their higher susceptibility to viral infections due to differences in natural immunity linked to sex chromosomes (41). The enhanced expression of ACE2 and TMPRSS2 receptors in males, regulated by androgens, might account for their higher susceptibility to severe COVID-19 (41,55,56). In contrast, 2 | Distribution of timeline of intensive care unit (ICU) admission, by severity of COVID-19, length of hospital stay (LoS, in days), acute kidney injury (AKI) onset (yes vs. no), and patient outcome (death vs. survival). Number (N), percentages (%), mean (M) ± standard deviation (SD). 
Estrogen, in contrast, may inhibit the cell invasion of SARS-CoV-2 by reducing the expression of TMPRSS2 (57).
STRENGTHS AND LIMITATIONS This study has several strengths, including a high number of hospitalized COVID-19 patients and a detailed and thorough collection of clinical variables with a good level of completeness of data, allowing adjustment of the risk estimates. Furthermore, rather than a cross-sectional design, this study employs a longitudinal approach, and it is the first to test the interaction between ICU admission and severity of COVID-19, thereby disentangling the impact of ICU referral by COVID-19 severity on the mortality risk. Nevertheless, this study was conducted in the midst of the third pandemic wave in Iran, with no capacity to perform postmortem autopsies in AKI patients. Since autopsy and biopsy are essential steps to elucidate the exact mechanism of AKI in COVID-19 patients, future studies should include postmortem examination of COVID-19 patients affected by AKI.
CONCLUSIONS The prevalence of AKI, a relatively common finding among hospitalized COVID-19 patients, was 28.5% in the present study, and the overall mortality rate was 19.3%. The risk of AKI was associated with diabetes mellitus, hyperkalemia, electrolyte imbalance, and inflammation, but not with the severity of COVID-19. However, AKI did not influence the mortality risk, which increased with the joint association between ICU admission and COVID-19 severity.
Table footnotes: Hazard ratio (HR) with 95% confidence interval (95% CI). Model fitted on 755 complete (case analysis) observations and adjusted for severity of COVID-19, AKI, diabetes, and any comorbidity. *Natremia, kalemia, magnesemia, WBC, platelets, LDH, and BUN at admission. *Hypertension; ischemic heart disease; chronic heart failure; chronic kidney disease; chronic obstructive pulmonary disease; interstitial lung disease; end-stage liver disease. Hazard ratio unadjusted (HR) and adjusted (aHR) with 95% confidence interval (95% CI). AKI, acute kidney injury; ICU, intensive care unit; BUN, blood urea nitrogen; LDH, lactate dehydrogenase; WBC, white blood cells.
Figure 5 caption (fragment): (ACE2 and TMPRSS2), with cell endocytosis of the virus and alteration of the pH of lysosomes, which the virus crosses once inside the cell.
DATA AVAILABILITY STATEMENT The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
ETHICS STATEMENT The study was approved by the Medical Ethical Committee of Baqiyatallah University of Medical Sciences. Written informed consent for participation was not required for this study in accordance with the national legislation and the institutional requirements.
AUTHOR CONTRIBUTIONS ZR, BE, EN, SSH, ME, MJ, SHS, MA, ZE, and BHB designed the study, collected the clinical data and samples, drafted the primary version of the manuscript, and validated the final draft. GM and LC analyzed and interpreted the data and wrote the manuscript. All authors approved the final manuscript.
2022-07-08T13:22:14.026Z
2022-07-08T00:00:00.000
{ "year": 2022, "sha1": "f601af3f2fbaae24c0c7bf8620dbb3055b6a5c43", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fimmu.2022.874426/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "519cafcf1bdac88a5c29525a0defe389dcd8f5f7", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
259422139
pes2o/s2orc
v3-fos-license
Optimization of Soxhlet Extraction Papaya Seed Oil ( Carica papaya L. ) with Petroleum Ether – Papaya seed oil is a high source of fatty acids, especially oleic acid, and palmitic acid. It has 71.60% oleic acid, 15.13% palmitic acid, and has a low cholesterol content so it can be useful as a food oil. This study aims to determine the effect of the ratio of ingredients, time, and particle size on the maximum extraction of papaya seed oil. Extraction of papaya seed oil was carried out by the soxhletation extraction method using petroleum ether solvent. The factorial experimental design of 2 3 was used to determine the significant parameters for the resulting papaya seed oil: yield, density, fatty acid content, viscosity, and water content. The most influential process variable is particle size. The most optimal papaya seed oil extraction results were obtained at a particle size of 20 mesh, an extraction time of 180 minutes, and a ratio of ingredients to the dissolution of 1:9 (35 gram500 mL). That value obtains a yield of 57.029%. INTRODUCTION Papaya (Carica papaya L.) is a fruit plant that has spread evenly in Indonesia. Papaya is a plant from the Caricaceae family. Papaya is a plant that spreads in Indonesia around the 17th century. The origin of this plant is Mexico and South America, then spread to the continents of Africa, Asia, and India. Papaya can be classified as a herbal plant that has a hollow stem and a tree height of up to 10 m (Setiaji et al., 2009). Papaya seed oil is known to have a yellow color with a content of 71.60% oleic acid, 15.13% palmitic acid, 7.68% 3.60% linoleic acid, and contains other fatty acids that have certain levels of content (Warisno, 2003). The fatty acid content in papaya seeds varies depending on the type of fruit, which ranges from 25.41%-34.65%. The oil content in papaya seeds is relatively large when compared to sunflower seeds 22.23%, and coconut 54.74% (Sammarphet, 2006). Based on previous research, the most widely used extraction method is the soxhletation extraction method. In this study the method used is the soxhletation extraction method, this method is considered one of the most efficient methods by using a solvent that can be in direct contact with the material and runs continuously so that the solvent used to extract the material is always pure and the oil in the material can be reused. In a study (Susilowati, N. and Primaswari, R., 2012) entitled extraction of candlenut seed oil (Aleurites moluccana) through extraction using a soxhlet, the research objective was to find out the most appropriate solvent for extracting oil from candlenut seeds using a soxhlet, where the solvent used is between others n-hexane, petroleum ether, and ethanol. In this study, the weight of candlenut seeds was used, namely 50 grams, and the weight of each solvent, namely 500 grams. The yield results were 38.72% n-hexane, 33.24% petroleum ether, and 18.36% ethanol. From these results, the best solvent for extracting oil from candlenut seeds is n-hexane, then petroleum ether. Whereas research (Achtami, 2017), aims to characterize the chemical components Journal of Vocational Studies on Applied Research, Vol.5(1), April 2023 of papaya seeds using ingredients, namely papaya seed powder, alcohol, and distilled water. In this study, the solid-liquid method was used, and the results obtained were that papaya seeds contain FFA which can be used as a source of vegetable oil and cosmetic oil. 
The dominant fatty acids are stearic acid (a saturated fatty acid), at 18.42%, and lauric acid (also saturated), at 11.58%. In a study by Soetjipto et al. (2018) on the fatty acid profile and characterization of pumpkin seed oil (Cucurbita moschata D.), two extraction methods were compared for pumpkin seeds, namely maceration and continuous (Soxhlet) extraction. In that study, the maceration method gave an oil yield of 32.54%, while the continuous (Soxhlet) method gave an oil yield of 36.65%. In the study of Dewi (2012), the characterization of oil from kidney bean seeds was carried out by Soxhlet extraction with two types of solvents, namely n-hexane and petroleum ether; somewhat different oil yields were obtained with the two solvents, but the differences were only slight. This makes it feasible to use petroleum ether in the extraction process, given its relatively low price compared to n-hexane. In addition, petroleum ether is a non-polar solvent with high selectivity. In this research, optimization of papaya seed oil extraction was carried out using the soxhletation extraction method and a 2³ factorial experimental design to determine the most influential process variables, as well as to obtain information on the interactions between variables and the optimum conditions. The use of petroleum ether solvent is a novelty of this research; researchers generally use n-hexane, which is more expensive than petroleum ether. With this choice, optimal and more economically favorable results can be obtained.
Materials and Tools The materials used in this study were 420 grams of papaya seeds (dry weight) obtained from a fruit seller in the market around Tembalang and 6000 mL of petroleum ether solvent purchased from Toko Agung Jaya Solo. The tools used were a Soxhlet extraction set, a distillation apparatus, sieves (10, 15, 20, 25, 30 mesh), an oven, a grinder, filter paper, porcelain cups, a desiccator, a digital balance, Erlenmeyer flasks, a magnetic stirrer, pipettes, measuring cylinders, stand coolers, electric heaters, volumetric flasks, and burettes.
Papaya Seed Pretreatment At this stage, the papaya seeds were prepared. The first step was to clean the papaya seeds by washing them and then drying them in the oven until they turned brown. The seeds were then ground and sieved according to the variable sizes, namely 10 and 30 mesh, and stored in ziplock plastic bags.
Extraction of Oil from Papaya Seeds Prepare 35 grams of papaya seed powder (10 mesh), wrap it in filter paper, and place it in the Soxhlet. Put 400 mL of petroleum ether into a three-neck flask, and extract for 170 minutes.
Distillation of Extracted Oil This stage begins with assembling the distillation apparatus, then transferring the extract into a three-neck flask and distilling it at 60 °C for 60 minutes. Weigh the oil obtained and store it in a bottle for later analysis.
Papaya Seed Oil Yield (% Yield) This analysis was used to determine the percentage of oil yield obtained from the papaya seeds, by weighing the initial mass of papaya seeds and then weighing the mass of the extracted oil. Following Rina et al. (2021), the yield percentage was calculated using Eq. (1).
Specific Gravity of Papaya Seed Oil (Density) The density test was performed to obtain the density of the resulting papaya seed oil, starting by weighing the empty pycnometer and recording its mass.
Then fill the pycnometer with papaya seed oil up to the neck of the pycnometer, close it, and make sure there are no air bubbles inside. Clean the outside of the pycnometer with a tissue or cloth until dry, and weigh the pycnometer together with the papaya seed oil in it. Following Rina et al. (2021), the density was calculated using Eq. (2), where: ρ = density of papaya seed oil (g/mL); m1 = weight of the empty pycnometer (g); m2 = weight of the pycnometer with sample (g); v pycno = pycnometer volume (mL).
FFA Levels (Free Fatty Acid) The FFA analysis of papaya seed oil follows BSN (2019): prepare a neutralized 95% ethanol solution and a standard 0.1 N NaOH solution. Weigh 5 grams of the sample into an Erlenmeyer flask and add 50 mL of 95% ethanol, then add 3-5 drops of phenolphthalein indicator and titrate with the 0.1 N NaOH standard solution until a pink color forms and persists for 30 seconds. Record the volume of NaOH required for the titration and calculate the free fatty acid content (FFA content) using Eq. (3), where: Vn = NaOH titration volume (mL); Nn = NaOH normality (N).
Viscosity A viscosity test was carried out by first cleaning the Ostwald viscometer. Water is placed in the viscometer up to the mark, the rubber bulb is fitted to the capillary tube, and the water is drawn up to the upper mark using the bulb. The bulb is then released while the stopwatch is started, and the stopwatch is stopped when the water reaches the lower mark. These steps are then repeated with the papaya seed oil. The viscosity was calculated using Eq. (4).
Water Content This analysis was conducted to determine the percentage of water content in the papaya seed oil. A porcelain cup that had been kept in the oven for 30 minutes at 105 °C was weighed and the result recorded. Then 5 grams of papaya seed oil (sample weight) were weighed into the porcelain cup, dried in the oven at 105 °C for 1 hour, cooled in a desiccator to room temperature, and weighed to constant weight, recording the results. Following Rina et al. (2021), the water content was calculated using Eq. (5), where: W = weight after drying in the oven (g); W0 = papaya seed oil sample weight (g); W1 = weight of the cup + oil after drying (g).
RESULTS AND DISCUSSION The extraction of papaya seed oil was carried out and processed using a 2³ factorial design experiment with three research variables, namely extraction time (170 and 190 minutes), particle size of the dried papaya seeds (10 and 30 mesh), and the ratio of material to solvent (1:10 g/g and 1:20 g/g). This study comprised 8 trials. The method is used to determine the effects of the variables, the most influential variables, and the optimum operating conditions. The symbols (+) and (−) indicate the level of each variable used in each experiment. In this research, 8 experiments were carried out, each given a different treatment according to the experimental design prepared beforehand. The percentage of oil yield produced, based on Table 1, shows that the longer the extraction time, the higher the oil yield, because a longer extraction time means the contact between the material and the solvent lasts longer, so that more of the oil content in the papaya seeds is extracted (Azhari et al., 2020).
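Because the bodies of Eqs. (1)-(5) referenced in the preceding Materials and Methods are not reproduced in this text, the sketch below shows plausible standard forms inferred from the variable definitions given there; the exact published expressions may differ, and the molecular weight of oleic acid (282 g/mol) used to express FFA is an assumption, not a value stated in the text.

```python
# Hedged reconstruction of Eqs. (1)-(5): standard forms inferred from the
# variable definitions in the text; the published expressions may differ.
MW_OLEIC = 282.0  # g/mol, assumed (FFA is commonly expressed as oleic acid)

def oil_yield_percent(oil_mass_g, seed_mass_g):
    """Eq. (1): yield (%) = extracted oil mass / initial seed mass x 100."""
    return oil_mass_g / seed_mass_g * 100

def density_g_per_ml(m1_empty_g, m2_full_g, v_pycno_ml):
    """Eq. (2): rho = (m2 - m1) / pycnometer volume."""
    return (m2_full_g - m1_empty_g) / v_pycno_ml

def ffa_percent(v_naoh_ml, n_naoh, sample_g):
    """Eq. (3): FFA (%) = Vn x Nn x MW / (sample weight x 1000) x 100."""
    return v_naoh_ml * n_naoh * MW_OLEIC / (sample_g * 1000) * 100

def viscosity(eta_water, rho_oil, t_oil_s, rho_water, t_water_s):
    """Eq. (4), Ostwald viscometer: eta_oil = eta_water x (rho_oil*t_oil)/(rho_water*t_water)."""
    return eta_water * (rho_oil * t_oil_s) / (rho_water * t_water_s)

def water_content_percent(w_cup_g, w0_sample_g, w1_cup_plus_oil_g):
    """Eq. (5): water (%) = ((cup + sample) - weight after drying) / sample x 100."""
    return ((w_cup_g + w0_sample_g) - w1_cup_plus_oil_g) / w0_sample_g * 100

# Illustrative numbers only (not the study's data):
print(oil_yield_percent(19.96, 35.0))        # ~57.0 %
print(density_g_per_ml(25.00, 34.26, 10.0))  # ~0.926 g/mL
```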
In previous studies, the best results were also obtained at an extraction time of 180 minutes. The oil yield also increased as the volume of solvent increased: the greater the solvent volume, the greater the capacity of the solvent to extract the oil contained in the papaya seeds (Wulandari, 2017). For the particle size variable, the lower level (−, 10 mesh) produced a lower oil yield than the upper level (+, 30 mesh); the smaller the particle size, the greater the oil yield produced (Andaka, 2020). To determine the optimal operating conditions, an optimization step is required. Before optimizing, it is necessary to identify the most influential process variables using the quicker method, namely by taking into account the main effects and interactions on the resulting oil yield. Based on the quicker-method calculation, the greatest effect was 11.9285, corresponding to the most influential variable, particle size. These results can be seen in Table 2 and Table 3. Figure 1 shows that point X7, corresponding to the calculated effect of particle size, lies far from the other effects on the percent-probability plot. From these results, process optimization can be carried out by varying the particle size (s) to determine the yield of papaya seed oil. Optimization was carried out at particle sizes of 10, 20, 25, and 30 mesh, with extraction operating conditions of 180 minutes and a ratio of material to solvent of 1:9 g/g.
Optimization of Papaya Seed Oil Extraction Based on the analysis performed, it can be concluded that the process variable with the greatest influence on papaya seed oil extraction is the particle size (papaya seed size); therefore, in the optimization, the variables t (extraction time) and r (ratio of material to solvent) were kept fixed, while the variable s (particle size) was varied. The optimization results are presented in Table 4, which shows that the yield of papaya seed oil extraction increased from a seed size of 10 mesh to 20 mesh and then decreased from 25 mesh to 30 mesh. These results can also be seen in the optimization graph of papaya seed oil extraction in Figure 2.
Figure 2. Optimization graph of papaya seed oil extraction results.
Based on the yield of papaya seed oil produced, the optimum condition for the particle size variable was 20 mesh, namely a yield of 57.029%, with optimum operating conditions of 180 minutes extraction time and a ratio of material to solvent of 1:9 g/g. This is because a larger mesh number corresponds to a smaller particle size and a larger surface area, which improves the performance of the extraction process and increases the extract yield. In the extraction of papaya seed oil, the most suitable particle size makes the extraction process run faster. In this study, the optimal size was 20 mesh; the extraction yield then decreased for larger mesh numbers, namely 25 mesh to 30 mesh. This can happen because the material is too fine, so the spaces between cells become narrower, making it difficult for the solvent to penetrate the powder.
Analysis of Papaya Seed Oil Extraction Results The oil extracted from the papaya seeds was analyzed for density, FFA content, and water content.
This is to find out whether the oil extracted from papaya seeds is in accordance with the existing SNI. Density A density test was carried out on samples at optimum operating conditions, namely particle size of 20 mesh, extraction time of 180 minutes, and the ratio of material to solvent 1 : 9 g/g. The test results can be seen in Table 5. The results in the table above show that the density of extracted papaya seed oil is close to that obtained by previous studies, and is in accordance with the Indonesian National Standard (SNI), which is between 0.924-0.929 gr/mL. In previous studies, the results obtained a density of 0.90427 g/mL (Andaka, 2020). With this density test can be known the purity of papaya seed oil. The more components contained in the oil, the higher the heavy fraction. Which means the specific gravity of the oil will be greater (Azhari et al., (2020). FFA Levels ( Free Fatty Acid) In the FFA content test carried out on samples at optimum operating conditions, namely particle size of 20 mesh, extraction time of 180 minutes, and the ratio of material to solvent 1: 9 g/g, the test results are presented in Table 6. Free fatty acid testing is an important parameter to determine oil quality. The level of free fatty acids contained in vegetable oil can be one of the parameters determining the quality of the oil. The amount of free fatty acids in the oil is indicated by the value of the acid number (Cahyaningtyas et al., 2017). A high acid number indicates that the free fatty acids present in vegetable oil are also high so the quality of the oil is even lower, this is because the free fatty acids produced due to hydrolysis can reduce the quality of the oil. Free fatty acids can increase due to repeated heating even at a constant temperature (Andaka, 2020). If you look at the table above, the oil from this study is in accordance with the SNI for papaya seed oil, which is 0.58, where the SNI for free fatty acids for papaya seed oil is between 0. 36-0.82. The results of free fatty acid levels in this study were carried out under operating conditions, namely an extraction time of 180 minutes, a distillation time of 1 hour, a particle size of 20 mesh, and a ratio of material to solvent of 1: 9 g/g, with a weight of dry papaya seed material of 35 grams and 500 mL of solvent. Water Content In the water content test carried out on samples at optimum operating conditions, namely particle size of 20 mesh, extraction time of 180 minutes, and the ratio of material to solvent 1: 9 g/g, the test results are presented in Table 7. Based on Table 7 it is known that the water content of papaya seed oil is 0.077%w/w, this result is in accordance with SNI papaya seed oil, which is a maximum of 0.15%w/w. The water content in the oil is one of the parameters determining the quality of the oil. The higher the water content in the oil, the lower the quality of the oil because water is one of the hydrolysis catalysts in oil (Cahyaningtyas et al., 2017). In this study, the decrease in water content was caused by repeated heating, namely in the distillation and oven processes to remove solvents which could cause the oil to be easily damaged and the water content in the samples included so that the water content would decrease. In addition, the higher the temperature and the longer the heating will accelerate evaporation so that the water contained in the oil will be less (Andaka, 2020). The water content obtained in this and previous studies is in accordance with SNI. 
In this study, the quality of the papaya seed oil was better because of its lower water content.
CONCLUSION This study used factorial design analysis and the quicker method to calculate the effects. From the experiments conducted, it was found that the most influential process variable for optimizing the papaya seed oil extraction process was particle size (s), with an effect value of 11.928. From the optimization carried out on the particle size variable, the optimum yield of 57.029% was obtained at a particle size of 20 mesh, an extraction time of 180 minutes, and a ratio of material to solvent of 1:9 (35 grams : 500 mL). The resulting density of 0.926 g/mL is within the SNI range of 0.924-0.929 g/mL; the resulting FFA (free fatty acid) content of 0.58% is within the SNI range of 0.36%-0.82%; and the resulting water content of 0.077% is below the SNI maximum of 0.15%.
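As an illustration of the effect calculation referred to in the Results and Conclusion, the sketch below contrasts the high- and low-level runs of a 2³ design in the way the quicker method is commonly applied; the yield values are hypothetical placeholders, not the data of this study.

```python
# Hedged sketch of main-effect estimation in a 2^3 factorial design; the
# "quicker" method essentially contrasts runs at the high and low level of
# each factor. Yield values are hypothetical placeholders, not study data.
import itertools

# Coded levels for t (time), r (material:solvent ratio), s (particle size)
runs = list(itertools.product([-1, 1], repeat=3))           # 8 experiments
yields = [30.1, 41.5, 31.0, 42.8, 32.4, 44.0, 33.8, 46.3]   # hypothetical %

def main_effect(factor_index: int) -> float:
    """Main effect = mean response at level (+1) minus mean response at level (-1)."""
    high = [y for run, y in zip(runs, yields) if run[factor_index] == +1]
    low = [y for run, y in zip(runs, yields) if run[factor_index] == -1]
    return sum(high) / len(high) - sum(low) / len(low)

for index, name in enumerate(["t", "r", "s"]):
    print(f"main effect of {name}: {main_effect(index):.3f}")
```

With these placeholder numbers, the particle size factor (s) would show by far the largest main effect, mirroring the ranking of variables reported in the study.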
2023-07-11T00:40:57.323Z
2023-06-12T00:00:00.000
{ "year": 2023, "sha1": "55c167846b96b6f86d08fa13e2835b1d64cf7029", "oa_license": "CCBYSA", "oa_url": "https://ejournal2.undip.ac.id/index.php/jvsar/article/download/17338/9145", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "38155370b22dec06db2b4e106509f87f7a532b14", "s2fieldsofstudy": [ "Agricultural and Food Sciences", "Chemistry" ], "extfieldsofstudy": [] }
248347523
pes2o/s2orc
v3-fos-license
Solid-phase extraction and fractionation of multiclass pollutants from wastewater followed by liquid chromatography tandem-mass spectrometry analysis Herein, we describe a modular solid-phase extraction (SPE) setup, combining three sorbents, for the effective extraction of neutrals, acidic, and basic micropollutants from wastewater, followed by their further elution in three independent extracts. The performance of this approach was demonstrated for a suite of 64 compounds, corresponding to different chemical families, using liquid chromatography tandem-mass spectrometry (LC–MS/MS). Target compounds were effectively extracted from wastewater samples; moreover, 62 out of 64 species were isolated in just one of the three fractions (neutrals, acids, and bases) obtained from the combination of sorbents. Globally, the efficiency and the selectivity of the SPE methodology improved the features obtained using generic SPE polymers, displaying just reversed-phase interactions. The overall recoveries of the analytical method, calculated against solvent-based calibration standards, stayed between 80 and 120% for 57 and 60 compounds, in raw and treated wastewater, respectively. Procedural limits of quantification (LOQs) varied from 1 to 20 ng L−1. Analysis of urban wastewater samples identified a group of 19 pollutants showing either negligible median removal efficiencies (± 20%) during wastewater treatment, or even a noticeable enhancement (case of the biodegradation product of the drug valsartan), which might be useful as markers of wastewater discharges in the aquatic environment. Supplementary Information The online version contains supplementary material available at 10.1007/s00216-022-04066-8. Introduction The number of organic compounds of environmental and toxicological concern has increased steadily for the last 20 years. Many of them are introduced in the aquatic environment through urban wastewater [1,2]. Most of the analytical procedures for the monitoring of these compounds are based on mass spectrometry (MS), combined with different chromatographic techniques, after an extraction and concentration step [3]. In this regard, the hydrophilic-lipophilic balanced (HLB) solidphase extraction (SPE) sorbents cover the effective extraction of compounds within a broad range of polarities from water samples. Thus, they are usually employed in combination with multianalyte/multiclass liquid chromatography (LC) MS-based methods [4][5][6]. The price of their high retention efficiency is a limited selectivity, which turns in significant variations in the efficiency of compound ionization (particularly using electrospray ionization, ESI) between sample extracts and solventbased standards [6]. The so-called matrix effects (MEs) do not only affect the accuracy of the obtained results, but also compound detectability since, in most cases, their ionization efficiency is attenuated for sample extracts when compared to solvent-based standards [5]. Mixed-mode (MM) sorbents, sharing ionic and reversedphase (RP) interactions, improve the recoveries of highly polar, ionizable compounds in comparison to RP polymers, maintaining an acceptable retention efficiency for neutrals [7,8]. Moreover, they allow the use of fractionated elution protocols, recovering compounds retained through the RP mechanism and those establishing electrostatic interactions with sorbent in different fractions [9]. Thus, cleaner extracts are obtained for the latter group of compounds, which turns in lower MEs [10][11][12]. 
The scientific literature contains previous successful applications of different types of MM sorbents to the selective extraction of basic (i.e., illicit drugs [13] and pharmaceuticals [11]), or acidic compounds (such as anti-hypertension drugs [12] and perfluorinated carboxylic and sulfonic acids [14]) from water samples. Most of these studies have been compiled in a recent review [7]. However, using conventional MM sorbents, the isolation of acidic and basic compounds in two separate fractions, requires two independent SPE extractions, with different MM polymers. In both assays, neutrals are mixed, either with basic or with acidic species. In order to combine high retention efficiencies during the concentration step with fractionated elution protocols, different alternatives are under evaluation. Zwitterionic MM sorbents permitted the removal of neutrals, in a washing fraction, whilst acids and bases were recovered together [15]. Another possibility involves the combination of different types of sorbents, either packed in the same cartridge, or connected in tandem. This approach was reported to improve the extraction efficiency of low molecular size, polar, and ionizable compounds, poorly retained in RP materials; however, in these previous studies, the sequential elution of compounds in independent fractions was not investigated [8,16]; consequently, extracts presented a high complexity. Using a multilayered cartridge setup (based on different combinations of conventional MM sorbents), Salas et al. [17] demonstrated the possibility to retain and to recover quantitatively a selection of 13 compounds in neutral, acidic, and basic fractions. In that study, the retention efficiency and the success of the fractionated elution protocol depended on the type of MM sorbents, and their relative proportions in the in-house packed cartridge [17]. The aim of this research was to develop a modular SPE approach, based on tandem combinations of commercially available cartridges, covering the effective extraction and the further fractionated elution of a suite of 64 compounds (log D values from − 1.95 to 5.5) attending to their ionizable groups: acids, bases, and neutrals, from wastewater samples. Sorbents were maintained in separate cartridges to increase the versatility of the elution protocol. Each SPE fraction was analyzed using different LC-MS/MS procedures, finely tuned to enhance compound detectability and to reduce blank contamination problems. The performance of the method was characterized in terms of extraction efficiency, MEs, and accuracy. Thereafter, it was applied to determine the levels of target compounds in raw and treated wastewater samples obtained from urban sewage treatment plants (STPs). RP OASIS HLB 60-mg and 200-mg cartridges, and 150-mg MM cartridges (MCX, RP, and strong cationic exchange sorbent; and WAX, RP, and weak anionic exchange sorbent) were provided by Waters (Milford, MA, USA). Ionic exchange 500-mg cartridges containing either sulfonic functionalities (SCX), or quaternary amines (SAX), as charged groups bonded to silica particles, were obtained from Agilent (Santa Clara, CA, USA). Native standards of species involved in this research were purchased from Sigma-Aldrich (St. Louis, MO, USA). Compounds were selected attending to their environmental and/or toxicological concern, including species compiled in the 2020 revision of the EU Watch List of contaminants to control in the aquatic environment [18]. 
The suite of compounds includes species without ionizable groups (neutrals, i.e., organophosphorus compounds and certain neonicotinoids), weak (phenols) and strong acids (carboxylic, tetrazolic, sulfonic, etc.), weak and strong bases (i.e., azoles and tertiary amines, respectively), and pollutants combining acidic and basic functionalities in the same molecule (e.g., certain angiotensin receptor antagonists, ARA-II, as losartan). The list of analytes, including their log D values and their categorization as acids, bases, or neutrals, is given in Table 1. Individual stock solutions of each compound were prepared in MeOH, except in case of neonicotinoids (ACN). Further dilutions and mixtures were also made in MeOH. Stock solutions and diluted mixtures were maintained at − 20 °C and used throughout this study. The exception was perfluorinated carboxylic acids. Their methanolic solutions were renewed every month to prevent esterification of the carboxylic moiety [19]. A selection of isotopically labeled compounds (either deuterated or 13 C species) was obtained from Merck and Toronto Research Chemicals (North York, Canada), either as pure compounds, or as stocks in MeOH (usually 100-1000 μg mL −1 , Table S1). Mixtures of these species were also made in MeOH. They were added to water samples, as surrogate standards (SSs), before SPE extraction. Solvent-based calibration standards were prepared in MeOH, or MeOH to FA (99:1) case of acidic species, in the range of concentrations from 0.5 to 300 ng mL −1 . The concentration of SSs in calibration standards was 50 ng mL −1 . Samples and sample preparation Wastewater was obtained from four different urban STPs in Galicia (Northwest Spain). All of them apply similar wastewater treatments involving primary and biological (activated sludge) units. Grab samples were used during samples were employed to measure the concentrations of target compounds, and to evaluate their removal efficiencies during wastewater treatment. Samples were received in glass bottles, sequentially passed through quartz (0.7-μm cutoff) and cellulose acetate filters (0.45-μm pore-size), and stored at 4 °C, for a maximum of 24 h, before extraction. During method development, different combinations of sorbents were tested. Under final working conditions, a modular SPE setup consisting of a MM 150 mg WAX cartridge (top) on-line connected to a RP 60 mg HLB one (bottom) was employed. Samples (100 mL volume aliquots), spiked with SSs and adjusted at neutral pH (6.5-7.5) when required, were passed through both cartridges at a flowrate of c.a. 5 mL min −1 . After washing sample containers and connections with SPE sorbents, using 10 mL of ultrapure water, cartridges were dried using a gentle stream of nitrogen and connected to an ionic exchange (SCX) one, previously conditioned with MeOH. Neutrals and weak acids were recovered with MeOH flowing through the three sorbents (extract volume 5 mL). After disconnecting the three cartridges, compounds with a strong acidic functionality (carboxylic, sulfonic, or tetrazolic groups) were recovered from the WAX cartridge with 2 mL of MeOH to NH 3 (98:2). Basic species were eluted from the SCX one using 5 mL of MeOH to NH 3 (95:5) (Fig. 1). Every extract was evaporated and adjusted to a final volume of 1 mL; moreover, that from the WAX sorbent was acidified with 0.020 mL of FA. Reference SPE extractions were carried out using RP HLB cartridges (200 mg sorbent), for the concentration of 100 mL samples. 
In this case, all compounds were recovered in the same fraction of methanol (5 mL), which was further concentrated to 1 mL. Extracts were filtered (0.22-μm pore-size syringe filter) before LC-MS/MS analysis. LC-MS/MS determination conditions Compounds were determined using an ultra-performance liquid chromatography (UPLC) triple quadrupole-type MS system provided by Agilent. The UPLC was 1290 Infinity II connected through a jet-stream ESI source to an i-funnel Agilent 6495 QqQ instrument. Different analytical (LC or UPLC) and delay columns were employed for the separation of target compounds, and to discriminate responses for contaminants existing in the mobile phase from those corresponding to injected compounds. Detailed UPLC conditions for each group of compounds, including type of columns, mobile phase composition, flowrate, and column temperature are compiled in Table S2. The injection volume was set at 2 μL in all methods. Voltages of the ESI source were 3000 V and 2000 V for positive and negative ionization modes, respectively. The fragmentor voltage was 166 V and the MRM parameters for each compound, including Transitions of these compounds were included also in the group of neutrals Solid-phase extraction and fractionation of multiclass pollutants from wastewater followed… 1 3 ionization mode and ratio between qualification (Q2) and quantification (Q1) transitions, are compiled in Table 1. In a few cases (e.g., perfluorobutanoic acid and tramadol, TRA), only one transition was available. MRM parameters for compounds employed as SSs are given in Table S1. In addition to the QqQ system, a time-of-flight (TOF) instrument (Agilent 6550) was employed to investigate the distribution of additional compounds in the SPE fractions obtained from non-spiked wastewater samples. In this case, the pseudo-molecular ions ([M + H] + or [M − H] − ) of each species were extracted using a mass window of 20 ppm. Compound identities were further confirmed against authentic standards. In this case, the employed LC conditions were those reported in Table S2 for basic species. Extraction efficiency, matrix effects, and accuracy evaluation The extraction efficiency (EEs, %) of the modular SPE protocol described in "Samples and sample preparation" (accounting for yields of extraction, fractionated elution, and extract concentration to 1 mL) was assessed as the ratio of responses (peak area for the Q1 transition without SSs correction) obtained for spiked wastewater aliquots and spiked SPE extracts multiplied by 100. Matrix effects (MEs, %) during ESI were evaluated comparing the difference of responses for spiked and non-spiked extracts of each sample (raw and treated wastewater) with those observed for a solvent-based standard of the same concentration. Values close to 100% correspond to similar ionization efficiencies for sample extracts versus solvent-based standards. On the other hand, normalized response ratios below and above 100% mean suppression and enhancement of compound ionization in sample extracts versus solvent-based standards [20]. The above parameters (EEs and MEs) were evaluated using an additional level of 1 ng mL −1 referred to the water sample (equivalent to 100 ng mL −1 in the corresponding SPE extract). The accuracy of the final procedure was investigated using samples spiked at three levels: 50, 200, and 1000 ng L −1 . 
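A minimal sketch of the extraction efficiency (EE), matrix effect (ME), and procedural LOQ calculations defined in this section (the blank-based LOQ rule is detailed in the next paragraph); the peak areas and blank concentrations below are illustrative numbers, not values measured in the study.

```python
# Hedged sketch of the EE (%), ME (%), and LOQ calculations described in the text.
import statistics

def extraction_efficiency(area_sample_spiked_before_spe: float,
                          area_extract_spiked_after_spe: float) -> float:
    """EE (%) = response of the spiked water sample / response of the spiked extract x 100."""
    return area_sample_spiked_before_spe / area_extract_spiked_after_spe * 100

def matrix_effect(area_spiked_extract: float, area_non_spiked_extract: float,
                  area_solvent_standard: float) -> float:
    """ME (%): values < 100 % indicate ionization suppression, > 100 % enhancement."""
    return (area_spiked_extract - area_non_spiked_extract) / area_solvent_standard * 100

def procedural_loq_ng_per_l(instr_loq_ng_ml: float, conc_factor: float = 100,
                            ee: float = 100, me: float = 100) -> float:
    """Procedural LOQ from the instrumental LOQ, corrected for EE/ME only when outside 80-120 %."""
    corr = 1.0
    if not 80 <= ee <= 120:
        corr *= 100 / ee
    if not 80 <= me <= 120:
        corr *= 100 / me
    return instr_loq_ng_ml * 1000 / conc_factor * corr

def blank_based_loq_ng_per_l(blank_concs_ng_l: list) -> float:
    """LOQ = mean blank concentration + 10 x its standard deviation."""
    return statistics.mean(blank_concs_ng_l) + 10 * statistics.stdev(blank_concs_ng_l)

print(extraction_efficiency(9.1e5, 1.0e6))       # 91 %
print(matrix_effect(8.4e5, 0.4e5, 1.0e6))        # 80 % -> mild ionization suppression
print(procedural_loq_ng_per_l(0.1))              # 1 ng/L for a 100 mL sample in a 1 mL extract
print(blank_based_loq_ng_per_l([2.1, 1.8, 2.4])) # blank-limited LOQ
```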
The accuracy of the final procedure was investigated using samples spiked at three levels: 50, 200, and 1000 ng L−1. For each type of wastewater, non-spiked (n = 3 replicates) and spiked aliquots (n = 3, for each addition level) were fortified with SSs (500 ng L−1) and processed as reported in "Samples and sample preparation." Responses obtained for each compound were corrected with that measured for the assigned SS (Table 1), and compared to those obtained for solvent-based standards (concentration range from 0.5 to 300 ng mL−1). Two different kinds of blanks were considered during method development and application. Instrumental blanks corresponded to simulated (false) injections: the injection valve changes from the by-pass to the main-pass position, with the mobile phase flowing through the injector loop and the injection needle to the LC column; however, the autosampler does not select any vial (sample, procedural blank, standard, or solvent). These experiments permitted identifying contamination problems related to the UPLC system and/or the mobile phase (mainly the aqueous phase). Procedural blanks were prepared using ultrapure water samples, spiked only with the selection of SSs, and submitted to the adopted modular SPE protocol. This type of blank is useful to detect contamination problems related to the sample preparation process. Instrumental limits of quantification (LOQs) were calculated as the concentration of each compound producing a response with a signal-to-noise ratio (S/N) of 10 for the less intense of the selected transitions (usually Q2) in solvent-based standards. Procedural LOQs were estimated from instrumental LOQs, considering a 100-fold concentration factor, corrected with EEs and MEs when they were outside the range of values between 80 and 120%. Moreover, for compounds found in the procedural blanks, the LOQs of the method were calculated as the average concentration measured in blank extracts plus 10 times its standard deviation.
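The LOQ logic described above can be condensed into a few lines of code. This is a minimal sketch under the assumptions stated in the text (100 mL of sample concentrated to a 1 mL extract, EE/ME corrections applied only outside the 80-120% window, and a blank-limited LOQ of mean + 10 × SD), using illustrative numbers rather than values from this study.

```python
import statistics


def procedural_loq(instr_loq_ng_per_ml, conc_factor=100.0, ee_pct=100.0, me_pct=100.0):
    """Procedural LOQ (ng/L in the water sample) from the instrumental LOQ (ng/mL in the extract)."""
    loq_ng_per_l = instr_loq_ng_per_ml * 1000.0 / conc_factor  # ng/mL in extract -> ng/L in sample
    for factor_pct in (ee_pct, me_pct):
        if not 80.0 <= factor_pct <= 120.0:   # correct only when EE or ME falls outside 80-120%
            loq_ng_per_l *= 100.0 / factor_pct
    return loq_ng_per_l


def blank_limited_loq(blank_concs_ng_per_l):
    """LOQ for compounds detected in procedural blanks: mean blank level + 10 * SD."""
    return statistics.mean(blank_concs_ng_per_l) + 10.0 * statistics.stdev(blank_concs_ng_per_l)


print(procedural_loq(0.2))                 # 2 ng/L for a 0.2 ng/mL instrumental LOQ
print(procedural_loq(0.2, ee_pct=50.0))    # doubled for a compound extracted with 50% efficiency
print(blank_limited_loq([1.0, 1.4, 1.2]))  # blank-limited LOQ (ng/L)
```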
LC-ESI-MS/MS conditions

Three LC-QqQ-MS methods were employed to enhance the detectability of each group of considered compounds (acids, bases, and neutrals). In the case of IMI and ACE, their transitions were included in the methods developed for neutral and basic species. Except for TCS, the rest of the neutrals and the bases were determined in ESI (+); thus, FA was used as mobile phase modifier (0.1%) to promote their ionization. ACN, instead of MeOH, was preferred as organic mobile phase to reduce the retention of some highly lipophilic organophosphate flame retardants included in the group of neutrals, and to decrease the pressure in the UPLC system, considering that two identical columns (delay and analytical columns) were required to cope with instrumental blanks noticed for some compounds within this group. As regards acidic compounds, FA (0.1%) and NH4F (1 mM) were tested as mobile phase additives. The ARA-II drugs showed a higher ionization efficiency under ESI (+), whereas herbicides and perfluorinated compounds led only to their [M − H]− ions; thus, ESI (+) and ESI (−) modes were combined in this method. Depending on the type of modifier, differences between 20 and 40% in the responses obtained for herbicides and perfluorinated compounds were noticed. However, most ARA-II drugs rendered one order of magnitude higher responses using NH4F as modifier (Fig. S1). The instrumental LOQs of the compounds considered in this research were not only conditioned by their ionization efficiencies, but also by the existence of instrumental contamination sources. These problems were noticed in LC-MS/MS records obtained for simulated injections. Particularly, the mobile phases contributed significantly to the presence of several perfluorinated and organophosphate compounds in instrumental blanks. To discriminate the response due to instrumental contamination from that of the standard, delay columns were connected between the mobile phase mixer and the injector of the UPLC system. These columns are described in Table S2. Fig. S2 illustrates the separation of the chromatographic peak (earlier signal) for a low-concentration standard of selected compounds: tris(2-chloro-isopropyl) phosphate (TCPP), tributoxyethyl phosphate (TBEP), and perfluorooctanoic acid (PFOA), from that corresponding to mobile phase contamination (later peak), after installing the corresponding delay column. Under final working conditions, the LC-ESI-MS/MS methods achieved instrumental LOQs in the range from 0.1 to 0.5 ng mL−1 for most of the target compounds. In all cases, linear responses were attained for concentrations up to 300 ng mL−1 (Table 1).

Solid-phase extraction and fractionated compound elution

Preliminary SPE experiments were carried out using spiked aliquots of ultrapure water, considering different combinations of sorbents. The first tested setup involved retention of the suite of compounds (neutrals, acids, and bases) in a MM MCX sorbent [9]. In this case, samples were adjusted to pH 3 to improve the retention of highly polar, acidic species in the MCX sorbent through RP interactions. Neutrals are expected to be retained by the same mechanism, and bases through electrostatic interactions with negatively charged sites of the polymer; obviously, the latter interactions are favored in acidified samples. During the elution step, the MCX cartridge was connected to an anionic exchange (SAX) one. The distribution of compounds was investigated in the following solvent fractions: MeOH (5 mL) flowing through both cartridges connected in series, MeOH to NH3 (95:5) recovered from the upper MCX sorbent (after removing the SAX one), and MeOH to FA (95:5) collected from the SAX cartridge. Under these conditions, neither the retention nor the fractionation of neutrals and acidic compounds was satisfactory. As an example, the short-chain perfluorinated compounds (the C3 carboxylic acid and the C4 carboxylic and sulfonic acids) were not retained by the MCX sorbent, so their EEs remained below 20%. Compounds with acidic and basic moieties in their structures (i.e., most of the ARA-II pharmaceuticals) were found in the fraction of basic drugs, whilst acidic drugs (e.g., valsartan, VAL, and valsartan acid, VALA), herbicides (phenoxy acids), and C8 perfluorinated compounds eluted together with neutrals in the methanolic fraction; that is, they were not fractionated from neutrals by the SAX sorbent. To sum up, this setup did not show any advantage compared to the single use of the MM MCX sorbent, which already enables the fractionated elution of bases from neutrals and acids. The second SPE setup considered concentration of water samples, at neutral pH, using a weak anionic exchange MM sorbent (WAX) [12]. In the elution step, this sorbent was connected to a pure cationic exchanger (SCX) cartridge. As in the former case, three different fractions were collected: MeOH was passed through both cartridges connected in series to recover neutrals; thereafter, the cartridges were disconnected and eluted separately with MeOH to NH3 (98:2).
Above 95% of the responses (peak areas) observed for the suite of selected compounds was noticed in the expected SPE fraction accordingly to their preliminary classification given in Table 1. The only exceptions were the neonicotinoid insecticides imidacloprid (IMI) and acetamiprid (ACE), distributed between the neutral and the basic fractions in similar percentages. On view of these preliminary results, the second setup was adopted, and retention and elution conditions were reevaluated using spiked wastewater samples. Some highly polar and basic species, such as TRA, venlafaxine (VEN) and O-desmethyl venlafaxine (O-DVEN), citalopram (CIT), and N-desmethyl citalopram (N-DCIT) (their log D values ranged from − 0.4 to 1.0 at neutral pH, Table 1), were not retained quantitatively by the WAX sorbent. For 100 mL volume wastewater samples, between 5 and 18% of the responses measured for these compounds were noticed in the extract from a second WAX cartridge on-line connected to the first one. In order to improve their retention, the mixedmode WAX cartridge was combined (placed on top) with 60 mg HLB one to reinforce the RP retention mechanism during sample concentration. As regards the volume and the type of solvents employed in the fractionated elution protocol, 5 mL of MeOH was passed through the ternary combination of sorbents (MM, RP, and strong cationic exchange) to recover neutrals (Fig. 1). Triclosan (TCS), selected as representative of weak acidic phenolic species (predicted pKa 7.8), was also quantitatively eluted in this fraction. Again, IMI and ACE were partially retained by the SCX sorbent, being detected in neutral and, mostly, basic fractions. The other two neonicotinoids included in the study (thiamethoxam, THM, and clothianidin, CLO) were found only in the neutral fraction (methanolic extract). Likely, the chloronicotinic ring existing in the structures of IMI and ACE leads to a weak interaction of both compounds with the strong anionic exchange sorbent. None of the tested acidic compounds was released from the WAX cartridge during elution of neutrals. So, the HLB cartridge was discarded after this step (Fig. 1). Acids were recovered using just 2 mL of MeOH with a 2% of NH 3 , which is in agreement with the data published by G. Castro and co-workers [12] for SPE of ARA-II species using WAX cartridges. Finally, basic compounds showed a strong interaction with the SCX sorbent. Their quantitative elution (particularly in case of those containing tertiary amine groups) was required to increase the percentage of NH 3 added to MeOH from 2 to 5%, using 5 mL of this mixture. Performance of the method The EEs of the sample preparation process, calculated as defined in "Extraction efficiency, matrix effects, and accuracy evaluation," are summarized in Table 2. For most compounds, EEs ranged from 80 to 120%. In a few cases, values between 70 and 130% were noticed. On the other hand, six compounds showed non-quantitative extraction yields. Within the group of bases, EEs around 50% were observed for the pharmaceuticals: fenticonazole, miconazole, and sertaconazole, and the drug metabolite N-desethyl amiodarone. The four are relatively lipophilic compounds, with log D values above 4.5 (Table 1). Very likely, non-quantitative EEs are the result of sorption losses on glassware and connections with SPE cartridges. 
Although it was attempted to improve their recoveries by adding MeOH to the water samples (10-20 mL of methanol per 100 mL of sample), this approach led to retention problems for polar basic species, positively charged at neutral pH values. Since the latter have a higher potential to be present in the water phase than more lipophilic drugs, no organic solvent was added to samples before SPE. The second group of compounds displaying non-satisfactory recoveries was the neonicotinoids IMI and ACE. As commented in "LC-ESI-MS/MS conditions," both species were distributed between the neutral and basic fractions. In raw wastewater, the overall EEs for each group of pollutants were 94% (acids), 91% (bases), and 86% (neutrals). For treated wastewater, average SPE EEs were 96%, 76%, and 94% for acids, bases, and neutrals, respectively. The selectivity of the modular SPE methodology was assessed by comparing the responses obtained for the three groups of compounds in spiked extracts from raw and treated wastewater with those corresponding to solvent-based standards prepared in MeOH [20]. Moreover, the normalized response ratios were compared to those obtained using a HLB sorbent applied to 100 mL aliquots of the same water samples. In the case of acids, most species showed normalized responses in the range from 80 to 120% (Fig. 2A). The only exception was VALA, affected by moderate (68%) and strong (44%) signal suppression effects in the modular SPE and HLB extracts, respectively (Fig. 2A). It is worth noting that, for acidic compounds, the RP methodology (based on the use of an HLB 200-mg cartridge for concentration of 100-mL samples) failed to recover the short-chain perfluorinated compounds (perfluoropropanoic, perfluorobutanoic, and perfluorobutane sulfonic acids), data not shown. For the set of basic species (including the neonicotinoids ACE and IMI), significantly higher signal suppression effects were noticed for most compounds using the non-selective HLB extraction protocol than following the modular approach (Fig. 2B). Finally, for neutrals, only CLO and THM presented strong signal suppression effects (normalized responses below 60%, Fig. 2C). The magnitude of this attenuation was significantly higher for HLB extracts than for those obtained with the combination of sorbents described in this research. In summary, for the raw wastewater matrix, 48 out of 64 compounds showed low MEs (normalized responses from 80 to 120%) using the modular SPE procedure, whilst only 25 species were within this interval with the HLB sorbent (Fig. 2D). In the case of treated wastewater, smaller differences were noticed between the MEs for the modular SPE protocol and those observed for the HLB sorbent (Fig. 2D); however, the latter sorbent was not able to recover short-chain perfluorinated compounds from water samples at neutral pH. Detailed data of MEs for treated wastewater are compiled in Table S3. The accuracy of the proposed methodology was assessed through recoveries measured for samples spiked at different concentration levels and calculated against solvent-based standards. Values obtained for each sample and addition level, including their standard deviations, are given as supplementary information (Table S4). A summary of global recoveries (with minimum and maximum values) observed for raw and treated wastewater samples is compiled in Table 3.
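The recoveries in Tables 3 and S4 rely on surrogate-standard correction. The minimal sketch below (with hypothetical peak areas and a hypothetical calibration slope, not data from this work) shows why normalizing the analyte response to its isotopically labeled SS compensates for losses that affect both species to the same extent.

```python
def ss_corrected_concentration(area_analyte, area_ss, slope, intercept=0.0):
    """Concentration (ng/mL in the extract) from the analyte/SS area ratio,
    using a calibration of that ratio built with solvent-based standards."""
    ratio = area_analyte / area_ss
    return (ratio - intercept) / slope


# If 30% of both the analyte and its SS are lost during extraction, the area
# ratio, and hence the back-calculated concentration, is unchanged:
nominal = 50.0  # ng/mL expected in the extract
found = ss_corrected_concentration(area_analyte=0.7 * 5.0e4, area_ss=0.7 * 1.0e5, slope=0.01)
print(f"found = {found:.0f} ng/mL, recovery = {100.0 * found / nominal:.0f}%")
```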
Out of 64 compounds considered in this study, 57 and 60 species (in raw and treated wastewater, respectively) showed overall recoveries in the range between 80 and 120%. Therefore, the use of isotopically labeled analogues permitted to compensate problems (EEs below 80%) previously highlighted for those compounds distributed between neutral and basic fractions (IMI and ACE), and most of the species affected by sorption problems. Overall, the global mean of the recoveries for each of the three groups of compounds considered in this study varied between 95 and 100%. In summary, the modular SPE method described in this study provides accuracy values, for the three kinds of compounds, similar to those achieved using previously reported conventional MM methodologies, either focused on the selective extraction of acidic compounds [12,14], or addressing the concentration and isolation of basic drugs [11], from wastewater samples. The last column in Table 3 summarizes the LOQs of the method for raw wastewater. For the perfluorinated carboxylic acids and organophosphorus species, the global LOQs were determined by responses observed in procedural blanks. For the rest of compounds, procedural LOQs were controlled by instrumental LOQs (Table 1) and the performance of the SPE extraction step. With the exception of two phenoxy acid herbicides, the procedural LOQs varied between 2 and 10 ng L −1 . Overall, LOQs shown in Table 3 are in the range of values reported in previous studies using LC-MS/MS as determination technique after SPE either using HLB type [4,6] or a single MM cartridge [10,11]. Obviously, none of these previous methods covered the range of polarities considered in the current research. Application to wastewater samples The presence of target compounds was evaluated in six pairs of water samples (inlet and outlet), obtained from four STPs. Positive identifications were based on retention time and Q2/ Q1 matches (0.1 min and ± 30%, respectively) with calibration standards. A group of 46 compounds was quantified in at least one of the processed samples. Their concentrations (average values for duplicate extractions) are shown in Table S5. Among them, 33 species reached average levels above, or very close to 20 ng L −1 , either in raw or in treated wastewater. Figure 3A shows the sum of concentrations for acidic, basic, and neutral pollutants in wastewater samples. The contribution of species detected, but remaining below method LOQs, was not considered. In most of the samples, the sum of concentrations of acids was higher than those of bases and neutrals. On the other hand, the overall concentration of bases was that showing the lower reduction during wastewater treatment. Figure 3B displays the average concentrations (logarithmic scale) of compounds found at levels above 20 ng L −1 in raw and treated wastewater. In the first matrix, 16 species showed average levels above 100 ng L −1 . Within this group, we found two organophosphorus compounds (TCPP and TBEP), several cardiovascular drugs (in most cases ARA-II compounds and flecainide), opioids (such as TRA), different psychiatric drugs, and their human metabolites (VEN, O-DVEN, CIT, and N-DCIT). 
Among pesticides, terbutryn, imidacloprid, thiamethoxam, and thiabendazole were also ubiquitous in wastewater, with average concentrations in the range from 30 to 100 ng L−1. Figure 3C shows the median value of the apparent removal efficiencies for those compounds found above their LOQs in at least four of the six pairs of composite sewage water samples. Compounds are sorted from higher to lower removal efficiencies. Several pollutants displayed very low (below 20%), or even negative, removal efficiencies, leading to similar, or even higher, concentrations in treated than in raw wastewater. In the case of pharmaceuticals, a potential explanation for negative removal rates is deconjugation during wastewater treatment, as reported for lamotrigine [21]. For moderately lipophilic compounds, negative removal rates might be an artifact caused by differential sorption of these species on the particulate matter existing in raw and treated wastewater, either at the STPs or during transport. Whatever the exact source, similar trends have already been reported in the case of perfluorooctanoic acid [22]. A particular case of a compound generated in the STPs was VALA. This species is a biodegradation product of some ARA-II drugs, particularly of VAL [12]. On average, its concentration in treated wastewater was 10 times higher than in the influent of the STPs (Fig. 3B and C). In fact, VALA was the compound showing the highest average concentration in the effluents of the STPs. It is also worth noting that some compounds showing high apparent removal efficiencies (i.e., TBEP, telmisartan, losartan, TCS, and climbazole) have been previously reported in sludge at moderate-to-high concentrations [23,24]. Thus, sludge sorption might be responsible, at least in part, for the apparent degradation efficiencies depicted in Fig. 3C.
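The apparent removal efficiencies plotted in Fig. 3C follow directly from the paired influent/effluent concentrations. Below is a minimal sketch with invented concentrations for a single compound (variable names and values are ours); the median across the sample pairs is what the figure reports per compound.

```python
import statistics


def apparent_removal_pct(c_influent_ng_per_l, c_effluent_ng_per_l):
    """Apparent removal efficiency (%) for one influent/effluent pair.
    Negative values mean the effluent concentration exceeds the influent one."""
    return 100.0 * (c_influent_ng_per_l - c_effluent_ng_per_l) / c_influent_ng_per_l


# Hypothetical concentrations (ng/L) for one compound across six sample pairs:
pairs = [(250.0, 40.0), (300.0, 55.0), (180.0, 30.0), (220.0, 60.0), (270.0, 35.0), (210.0, 50.0)]
removals = [apparent_removal_pct(c_in, c_out) for c_in, c_out in pairs]
print(f"median apparent removal = {statistics.median(removals):.0f}%")
```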
The efficiency of the modular SPE protocol to isolate additional compounds in a single fraction, eluted from the combination of SPE sorbents, was assessed using an LC-ESI-QTOF-MS system. To this end, a list of suspected targets was investigated in every SPE fraction from three different raw wastewater extracts. Their normalized responses (Table S6) confirmed the trends observed for the previous suite of targets. Compounds with a carboxylic acid, or a stronger acidic functionality, were recovered in the fraction from the WAX cartridge, irrespective of the co-existence of basic moieties in their molecules (e.g., atorvastatin, furosemide, and diclofenac). Slightly basic (caffeine) and strongly basic compounds (cocaine, ephedrine, amisulpride) were trapped by the pure cationic exchange SCX sorbent. Finally, the set of investigated phenols (acetaminophen, benzophenone-3, methyl and propyl paraben) was mostly recovered with the neutrals, given their weak and null interactions with the WAX and SCX sorbents, respectively.

Conclusions

The modular SPE configuration described in this research permitted the effective concentration of a suite of 64 compounds, with log D values between −1.95 and 4.5 units, from wastewater samples. Compared to an HLB-type sorbent, the described setup allowed the effective retention of relevant groups of polar, anionic pollutants without requiring acidification of the water samples. Considering the obtained MEs, the fraction of bases showed a much lower complexity than that obtained using the HLB-type sorbent. To the best of our knowledge, this study reports for the first time the successful extraction and fractionation of acids, bases, and neutrals from water samples combining commercially available cartridges of different sorbents. In addition to its quantitative applications, the proposed SPE setup might be useful in nontarget screening studies, to reduce the number of potential candidates existing in each fraction obtained from the same water sample. Quantitative data obtained for integrated water samples highlighted several pharmaceuticals poorly removed, or even generated, during wastewater treatment. These species might serve as markers to assess the impact of urban wastewater on surface water reservoirs.
The Impact of Nanoparticles on Innate Immune Activation by Live Bacteria The innate immune system evolved to detect and react against potential dangers such as bacteria, viruses, and environmental particles. The advent of modern technology has exposed innate immune cells, such as monocytes, macrophages, and dendritic cells, to a relatively novel type of particulate matter, i.e., engineered nanoparticles. Nanoparticles are not inherently pathogenic, and yet cases have been described in which specific nanoparticle types can either induce innate/inflammatory responses or modulate the activity of activated innate cells. Many of these studies rely upon activation by agonists of toll-like receptors, such as lipopolysaccharide or peptidoglycan, instead of the more realistic stimulation by whole live organisms. In this review we examine and discuss the effects of nanoparticles on innate immune cells activated by live bacteria. We focus in particular on how nanoparticles may interfere with bacterial processes in the context of innate activation, and confine our scope to the effects due to particles themselves, rather than to molecules adsorbed on the particle surface. Finally, we examine the long-lasting consequences of coexposure to nanoparticles and bacteria, in terms of potential microbiome alterations and innate immune memory, and address nanoparticle-based vaccine strategies against bacterial infection. The Human Innate Immune Response The innate immune response begins with cellular or humoral recognition of danger, which alerts cells of the presence of potentially pathogenic foreign or endogenous materials. Upon recognition of danger, innate immune cells such as monocytes and macrophages rapidly initiate a response that includes phagocytosis and leads to inflammation, which are aimed at removal and destruction of the threatening object [1,2]. Activation of innate cells is generally initiated by pathogen binding to pattern recognition receptors (PRRs) expressed on the plasma membrane and on intracellular 3 even after sterilization, can induce an inflammatory response that may be mistakenly attributed to particles [32]. Nanomaterials may interact with humans at multiple levels throughout their life cycle, with exposure scenarios ranging from food additives and consumer products to the workplace and the medical field. This implies that the interaction may occur at the level of different barrier tissues and that the NP exposure doses may significantly vary. Besides human exposure, we should be aware that manufacturing, use, and disposal of such nanomaterials may be also detrimental to the environment where they may exert undesirable effects on the entire biosphere, including microorganisms [33,34]. This may again lead to indirect effects on human health by impacting biodiversity in the environment, and thus the food chain, and the microbiota in symbiosis with humans. In each case, from synthesis to environmental effects or direct particle exposure, the potential exists for nanoparticles to impact human health. The direct effects of different nanomaterials on the innate immune system have been reviewed earlier [35][36][37]. NPs can enter the human body by different routes such as inhalation, ingestion, injection, or skin contact (depicted in Figure 1). Some inhaled or ingested NPs have been shown to penetrate the relevant biological barriers, the alveolar epithelium, or the intestinal epithelium [38]. 
While dermal penetration can be considered as a minor entry route of NPs, some studies provided evidence that NPs can to a certain extent penetrate the protective layers of the skin, in particular if the skin presents anomalies [39,40]. Once inside the body, NPs can interact with different components of the innate immune system, such as neutrophils, monocytes, macrophages, dendritic cells, natural killer cells, etc. [36,41]. Furthermore, NPs may also impact adaptive immune responses for example by modulating the function of dendritic cells in antigen presentation [35]. The question of whether or not a nanomaterial can be considered as immunologically safe is heavily discussed in the current literature and depends on numerous factors, including the condition of the target host [42,43]. For instance, elderly people or people with chronic diseases are more likely to develop detrimental reactions to NPs that pose no problem to healthy people in identical exposures [44]. Other factors depend on the nanomaterial itself (size, shape, and composition), its potential contamination (especially with LPS), its concentration upon exposure and the sensitivity of the target host [45]. While the capacity of NPs to directly impact the immune system has been extensively studied, far less literature exists on the impact of NPs upon coexposure with infectious pathogens. Within this review we focus on the capacity of NPs to affect the innate immune activation triggered by live bacteria, thereby modulating the course of a normal defensive innate/inflammatory reaction. We have chosen to focus on NPs that are not functionalized with specific molecules, in order to examine the direct impact of the bionano interaction on the antibacterial immune response. The Effects of NPs on Innate Immune Stimulation Many in vitro models that investigate the interaction of novel NP formulations with innate immune activation use synthetic or purified ligands that activate monocytes, macrophages, or other PRR-expressing cells. PRR-based cell activation leads to production of molecules involved in the defensive inflammatory reaction, including costimulatory surface molecules and soluble factors such as cytokines and chemokines. Several scenarios exist in which unfunctionalized NPs may interfere with and alter both the initiation and progression of inflammatory responses induced by microbial or other stimuli. Of most concern from a pathological standpoint is the possibility that NPs could exacerbate or prolong PRR-driven inflammatory reactions, leading to uncontrolled tissue-damaging inflammation. For instance, NPs may synergize with and amplify the effects of the bacterial stimulation. An exemplary case is the production of interleukin-1β (IL-1β). LPS binding to TLR4 upregulates, through the transcription factor NFκB, the gene encoding the potent inflammatory cytokine IL-1β [46]. IL-1β is produced as an inactive pro-protein that needs enzymatic cleavage before being exported extracellularly as biologically active IL-1β. This usually requires a second signal involving activation of the NOD-, LRR-, and pyrin domain-containing protein 3 (NLRP3) inflammasome, which induces the enzyme caspase-1 to cleave pro-IL-1β [47,48]. Active IL-1β then leaves the cell and binds to IL-1 receptors (which are present on most cells in the body, erythrocytes being an exception) and triggers defensive activation responses [49]. 
If uncontrolled in duration and intensity, IL-1β-induced inflammation can lead to pathological symptoms such as vasodilation, leukocyte influx, swelling, fever, and eventually inflammation-driven tissue damage may occur [50]. Silica NPs, multiwalled carbon nanotubes and many other NP types have been shown to be potent inducers of NLRP3 inflammasome activation, an event that could contribute to the establishment of uncontrolled pathological inflammation [51,52]. Conversely, NLRP3 activation by NPs could be exploited in a beneficial manner in the case of adjuvant function, where activation of the innate immune system is desirable for induction of long-lasting adaptive immunity [4,26]. Although some NPs may enhance inflammation by acting on NLRP3 activation, in several cases, it was observed that an inflammatory response to inflammatory stimuli could be downregulated by the presence of NPs. This is most commonly reported in terms of reduced cytokine production in response to LPS stimulation. Nanoplatinum was observed to suppress the production of cytokines and reactive oxygen species (ROS) in LPS-stimulated mouse macrophage-like leukemia cells [53]. Grosse et al. demonstrated an inhibition of the response to LPS in terms of production of tumor necrosis factor-alpha (TNFα), IL-1β, and IL-6 when human primary monocytes were exposed to iron oxide NPs, an effect more pronounced at higher particle concentrations but smaller particle sizes [54]. Possible explanations include a NP "cleaning" effect, in which the stimulant (as in the case of IL-1 β) is adsorbed onto the particle surface limiting its capacity to bind to its receptor [55]. Other evidence demonstrated that mouse bone marrow macrophages preexposed to superparamagnetic iron oxide NPs (SPIONs) exhibit a more inflammatory gene activation profile in response to LPS, raising another potential pathway for altering LPS-induced reactivity [56]. However, as experiments with similarly sized metallic or silica NPs failed to interfere with LPS-induced inflammation [32,56], it is likely that the NP effects may depend on the particle type and concentration, the exposure characteristics (before LPS, concurrent with LPS, etc.), the cell type (transformed vs. primary, human vs. murine) and the assay conditions. We noted that in many cases, exposing innate immune cells to NPs has no effect on cellular reactivity to inflammatory stimuli, in spite of the fact that cells recognize and uptake the particles to which they are exposed [6,57,58]. This phenomenon is likely widely underreported, as it is inherently a "no-result" outcome, and these data have a tendency to be omitted from the literature. We also noted that much of the literature concerning NPs and immune activation regards the mechanistic aspects of the interaction, and uses particle and stimulant doses that are poorly related to realistic human exposure scenarios [45]. Some examples for the impact of unfunctionalized NPs on innate immune activation can be found in Table 1. The Effect of NPs on Innate Immune Activation by Live Bacteria Activation of innate immune responses by bacteria typically occurs, as already mentioned, principally through recognition of bacterial surface patterns by PRRs, including TLRs. In addition, microorganisms that enter the cytosol can be recognized by a class of cytosolic PRRs, the NOD-like receptors (NLRs). 
While these are the most likely and potent mechanisms of innate immune activation, in the case of bacterial stimulation several additional variables must be considered beyond the ligand-receptor based effect. A primary task of the innate immune response is to restrict bacterial growth and spread. Malfunctioning or suppressed immune responses, for instance in the case of diabetic chronic wounds, can result in uncontrolled bacterial proliferation leading to chronic and tissue-damaging infection [67]. Upon recognition of potentially dangerous agents such as bacteria, macrophages attempt to engulf the threat for destruction and elimination by activating the energy-expensive process of phagocytosis [2]. The efficiency of cell-mediated host defense also depends upon effective and rapid capture of motile bacteria and on the motility of phagocytes themselves [68,69]. Some bacteria such as Staphylococcus aureus have evolved an active immune avoidance mechanism by secreting biofilms, which strongly decrease immune detection or even drive the activation of immunosuppressive cells [70,71]. Further, some bacterial species are capable of intracellular survival, residing within phagosomes and preventing their fusion with lysosomes to escape phagolysosome-based destruction [72]. These processes are dynamic, adding increased levels of physical and chemical complexity that are not mimicked by PPR activation with soluble ligands such as LPS. This underlines the significant difference, in terms of molecular and functional mechanisms engaged in the response, between ligand-induced PPR stimulation (mimicking a late response, once bacteria have been destroyed and only single molecules are still present, with only PRR stimulation still occurring), uptake of dead bacteria (mimicking an intermediate phase of the response, once bacteria have been killed extracellularly but not yet destroyed; PRR stimulation and phagocytosis still occurring), and interaction with live bacteria (mimicking the first phases of the responses, with PRR stimulation, phagocytosis, proliferation restriction, and bacterial killing occurring). Thus, in vitro assays exclusively based on PPR stimulation would only reproduce the late phase of an innate immune response to bacteria and may not be predictive of the possible interference of NPs in particular with the early innate reaction to bacteria. This becomes particularly important in scenarios where coexposure to NPs and bacteria may take place. Certainly, in consumer or occupational exposure to NPs in the respiratory or digestive tract an interaction with the microbiota colonizing the respiratory and digestive mucosae is expected [73,74]. NPs may also be applied topically for a variety of purposes, where interactions with epidermal bacteria would be unavoidable [75]. Increased use of NPs in modern society makes it likely that coexposure to bacteria and NPs will become more common. NPs can interact with bacteria in multiple ways, thereby interfering with the bacterial capacity to activate innate immune responses ( Figure 2). It should be noted that small NPs (≤30 nm) may adhere to the bacterial surface, blocking as much as 80% of the bacterial surface area from contact with target cells and subsequent cell activation [76]. NP coating could thus serve the dual purpose of inhibiting bacterial infectivity and motility and masking the PAMPs to which TLRs or NLRs would otherwise bind [77]. It has been shown that pretreatment with Au or SiO 2 NPs can inhibit killed E. 
coli phagocytosis by RAW 264.7 cells [78], although this may not in fact alter the course of an inflammatory response to E. coli infection [79]. Similarly, SPIO NPs could decrease uptake of killed Streptococcus pneumoniae in bone marrow-derived mouse macrophages [56]. The general evidence that NPs inhibit bacterial phagocytosis leads to the hypothesis that they can also inhibit innate cell activation by bacteria. Indeed, data from our group show that pretreatment with Au NPs decreases the response of primary human monocytes to live Bacille Calmette-Guérin (BCG, a strain of Mycobacterium bovis used as a tuberculosis vaccine) in terms of cytokine production, although the NPs had no effect on LPS stimulation [59]. The mechanism remains unclear. While the aforementioned studies describe models using safe and biomedically relevant particles, toxic NP types could enhance the detrimental impact of bacterial infections. Especially in the case of occupational exposure, disruption of tissue homeostasis by toxic NPs could open the door for pathogenicity. Particles resulting from welding fumes were observed to be particularly dangerous in this regard, with exposure driving inflammation and eventually increasing susceptibility to pneumococcal disease [80,81]. Likewise, inhalation of toxic copper oxide (CuO) NPs impaired the mouse capacity to clear a lung infection by Klebsiella pneumoniae [82]. While these NPs do not have a direct synergistic effect on bacterial growth, it is clear that agents that cause tissue damage and impair immune functions facilitate rapid bacterial growth and infectious spread [83,84]. The possibility that certain NP types could have a direct positive influence on bacterial growth should not be ignored [85], although data on the topic remain scant. The bactericidal capacity of certain NPs is the most commonly reported scenario in which NPs can contribute to an effective immune reaction. Live bacteria induce immune responses of greater magnitude than their killed counterparts [9,86]. It can thus be hypothesized that the impact of NPs on immune activation caused by live bacteria could be far different than that on responses induced by PAMPs or killed bacteria. Silver NPs are perhaps the best studied particles in this regard. Ag NPs have a potent bactericidal activity mainly due to the release of toxic Ag ions that negatively impact membrane permeability and respiration [87,88]. Moreover, Ag NPs have also been found to enter bacterial cells, where the high reactivity of silver may interfere with processes involving sulphur-containing compounds (abundant on cell membranes) or phosphorus-containing compounds (abundant in molecules such as DNA) [87,88]. More recently, Ag NPs have been demonstrated to be effective antibiofilm agents, although it is unclear whether the particles act at the biofilm level or exclusively on bacteria [89]. Depending on the NP type, NPs could also contribute to biofilm formation [90].
The antimicrobial activity of other NP types, including ZnO, iron oxide, and mesoporous silica, is extensively reviewed elsewhere [91][92][93]. The use of NPs as antimicrobial agents is of great interest, in particular in the case of antibiotic-resistant infections [93]. Inhibition of bacterial proliferation and subsequent killing of pathogenic bacteria is the objective of treatment against bacterial infection, and NP-based treatments are already in use in this regard [94]. However, in scenarios such as bacterially driven sepsis, bacteriolysis following certain antibiotic treatments is known to liberate membrane-bound bacterial components such as LPS leading to excessive pathological inflammation due to uncontrolled PRR activation [95]. The same could be a pitfall for NP-caused bacteriolysis. Direct comparison between PAMP-induced responses and responses to live bacteria is not easy, since the two responses are different. In any case, it is clear that NPs can impact both types of reaction, and that the effect of NPs on PAMP-induced innate activation is not necessarily predictive of their impact on the response to live bacteria. The possibility of using unfunctionalized NPs for modulating innate reactions to bacteria in a beneficial direction is however promising and opens the way to future applications. Table 2 summarizes some recent findings on the subject. The Long-Term Effects of NP Exposure In addition to the possibility that NPs can directly modify immune responses toward bacteria, several indirect and potentially long-term immune consequences of the NP-bacterial interaction should be considered. These include the possibility of improving the immune response in vaccination, the possibility of modulating resident microbiota towards improving human health, and the possibility of inducing/modulating innate memory towards increased resistance to infections. NPs and Vaccines Vaccines are among the most significant innovations in modern medicine. Successful vaccination involves two components: (i) presentation of antigen by professional antigen-presenting cells (APCs, typically dendritic cells) to induce adaptive immunity and long-term memory and (ii) innate immune activation, which drives crosstalk between innate and adaptive immune cells, facilitating and amplifying induction of adaptive immunity and memory. NPs can be applied for both functions, i.e., as carriers for improving antigen delivery, and as adjuvants that amplify immune responses and subsequent memory establishment [26,[101][102][103]. NPs can be used to transport and deliver diverse types of antigens such as nucleic acids, proteins, peptides, and immune stimulating agents (adjuvants). Specific NP types have been reported to improve antigen processing and uptake, and prevent premature proteolytic degradation of protein antigens [104,105]. Vaccine antigen can be delivered to the target cells by either encapsulation within NPs or by antigen adsorption onto the particle surface. Encapsulation can prevent premature antigen degradation and achieve sustained release, whereas surface adsorption can both stabilize the antigen and facilitate the uptake by APCs through surface receptor-mediated mechanisms [20]. Importantly, intracellular delivery can be tuned so as to achieve presentation of the same antigen both in class I and in class II major histocompatibility complexes, in order to achieve a more complete protective immunity [106]. 
In addition, the particulate nature of NPs endows them with adjuvant capacity, i.e., the capacity of promoting a localized innate reaction while antigen presentation takes place. All these desirable properties can improve the vaccine delivery and efficacy compared to the other conventional delivery and adjuvant systems [26]. NP types such as inorganic NPs and polymeric NPs have been shown to be efficient antigen carriers. Immunity against Mycobacterium tuberculosis could be enhanced in mice using chitosan NP coated in lipid antigen, enhancing the delivery of antigen to the APCs [107]. Likewise, conjugation of N-terminal domains of flagellin onto Au NPs elicited higher titers of antigen-specific antibodies in mice compared to the carrier-free antigen [76]. Au NPs have also been used as carriers to enhance immunogenicity of antibacterial vaccines against Yersinia pestis, and S. pneumoniae [108][109][110]. Vetro et al. demonstrated that Au NPs coated with synthetic oligosaccharides corresponding to the repeating units of S. pneumoniae triggered better response to the oligosaccharide epitope (usually poorly immunogenic) and showed similar antibody production in vivo in the mouse as the human PCV13 pneumococcal vaccine (in which diphtheria toxoid is used as a carrier of the pneumococcal polysaccharides) [108]. Furthermore, NPs can be used either directly as adjuvants, or as adjuvant carriers concurrent with antigen. A cationic liposome-based adjuvant stabilized with a synthetic glycolipid (CAF01) could induce a strong and persistent Th1 response to tuberculosis in humans [111] and persistent protective immunity in mice [112]. In the mouse, the adjuvant properties of CAF01 was linked to protracted uptake and activation by dendritic cells [113]. PLGA particles in particular show promise for delivery of multiple molecules simultaneously. By loading PLGA nanoparticles with both TLR4 and TLR7 agonists, Kasturi et al. could demonstrate a synergistic adjuvant effect that greatly enhanced IgG antibody titers against ovalbumin compared to delivery of a single adjuvant [114]. This underlines the potential of using NPs to modulate innate immune responses for the optimal adjuvant effect, and indicates the possibility of fine tuning vaccination strategies against bacterial infection. In summary, NP-based vaccine formulations are a promising strategy that can induce long-lasting protective memory, as NPs can function as carriers to modulate antigen delivery and improve immunogenicity, and they can also act as adjuvants that amplify the induction of protective immunity by a controlled amplification of innate/inflammatory reactions. Developing nanovaccines with optimal safety and efficacy will be of great importance, especially now, in the era of antibiotic resistance. NPs and Microbiota Perhaps the most likely avenue for NP interactions with bacteria in a human context occurs within the several locations inhabited by microfloral populations. Figure 3 depicts some potential implications of such NP-bacterial interactions on human innate immunity. The gastro-intestinal tract is colonized by a large and highly interactive community of microbes that play a pivotal role in host health by providing essential nutrients and aiding in digestion. It is now increasingly recognized that these commensal microbiota also contribute to the host immune defense, for example, by providing resistance against invading pathogens and by training and stimulation of the host immune system, as reviewed by [73,115,116]. 
Owing to these critical functions provided by the intestinal microbiota, their disruption (dysbiosis) by, for example, a dietary change or a chemical exposure may adversely affect the health of the host [117][118][119] and in humans is associated with numerous diseases, including obesity, inflammatory bowel disease, and diabetes [120]. Given the central role that microbiota play in host immunity and host health, there is a need to incorporate commensal microbiota in the health risk assessment of NP applications, both from a biomedical standpoint and in the context of occupational, consumptive, or inadvertent NP exposure [121]. An increasing body of literature now indicates that NPs have the potential to disrupt not only the gastro-intestinal, but also the respiratory microbiota of animals, as reviewed by [74,118,[122][123][124][125]. Some studies have indicated that in rodents Ag NPs can negatively affect Lactobacillus and other core intestinal Firmicutes [126,127]. These results are, however, contrasted by other findings that show no or a positive impact of Ag NPs on the relative abundance of these bacterial taxa in the intestinal microbiota of rodents [128][129][130]. Exposure to micro- and nanoplastics has also been proposed as a potential cause of gut dysbiosis [131,132]. Further investigation will help reveal the extent to which NPs can disrupt the commensal microbiota and the consequent impact of NPs on gastro-intestinal conditions. Further, it remains unclear which implications dysbiosis induced by NPs could have on host health [122,125]. Some studies have shown that microbiota modulations under NP exposure coincide with changes in the expression of host immune markers [126,127,133], which may reflect the inherent link between microbiota and host immunity. In contrast, other studies have found no relation between microbiota changes and host immune status under NP exposure [134]. A modulation of the expression of immune markers by NPs does not necessarily indicate that the host is being immunocompromised. However, a chronic stimulation of intestinal epithelial cells consequent to synergistic NP-microbial effects and leading to a chronic inflammatory milieu in the gut may have critical consequences on barrier integrity and long-term tissue homeostasis. To study whether NPs actually have an adverse effect on the host immune status through dysbiosis, we need animal models in which immunity to infections or other challenges is assessed [134,135]. To date, the possible impact of NPs on the interactions between host immunity, microbiota, and host health status remains largely unexplored, but given the widespread exposure to NPs this clearly requires further investigation.
NPs, Bacteria, and Innate Immune Memory

It is now evident that innate immune cells can develop memory of past challenges, which makes them able to react to subsequent stimulations in a more efficient and more protective fashion. Some stimuli, most notably LPS, drive a tolerance-type memory, in which production of inflammatory effectors is less potent in response to a second challenge, in order to achieve protection without risking the significant tissue damage that a strong reaction to LPS would cause [136,137]. Conversely, in response to other agents (e.g., fungal β-glucan, oxidized low-density lipoproteins, and BCG) a different type of memory develops, which leads to an increase in cellular reactivity upon a second challenge, again aiming at improving health protection [138][139][140]. This elevated response has been termed "trained immunity" or "potentiation" [141,142]. Thus, innate immune memory is a protective mechanism, with tolerance shielding the host from excessive inflammation and consequent tissue damage [140], and potentiation enhancing the host defense against unrelated pathogens [143]. Moreover, it appears that innate immune memory may be conferred either locally, for instance within resident macrophages of the lung [144], or systemically via monocyte progenitors in the bone marrow [145,146]. In both scenarios, innate immune memory is a long-lasting phenomenon (life-long in invertebrates, at least a few months in humans), despite the shorter lifespan of memory monocytes and macrophages.
Most importantly, at variance with immunological memory in adaptive immunity, innate memory in mammals (including human beings) is largely non-specific, meaning that priming (first stimulation) with an agent such as BCG may induce a more powerful secondary response to an unrelated stimulus. Thus, innate memory is apparently one of the mechanisms at the basis of the non-specific protection induced by vaccination/priming that enhances resistance to vaccine-unrelated diseases [147][148][149]. Since engineered NPs are foreign particles that the innate immune system may recognize as potentially dangerous, it is well possible that they could act as innate memory inducers and/or interfere with memory induction by bacteria and other stimuli. Two scenarios should be considered: (i) that unintentional exposure to NPs or microorganisms together with NPs may erroneously prime the innate immune system for inadequate response to future challenges (e.g., by provoking excessive and destructive inflammation to a subsequent infection) and (ii) that NPs could be used to deliberately induce/modulate innate memory in order to prime the innate immune system towards a more efficient response to future infections, in a sort of non-specific innate vaccination. In 2017, the first evidence that NPs can induce innate memory was published, which showed a memory effect induced by Au NPs on human primary monocytes and that also hypothesized that the NP-dependent epigenetic reprogramming capacity could be at the basis of innate memory induction [150]. In 2020 several independent reports showed that different NP types could induce innate memory. In the marine bivalve Mytilus galloprovincialis, previous exposure to nanoplastics modulated the hemocyte subpopulations and immune-related genes, resulting in an increased bactericidal capacity upon subsequent exposure to nanopolystyrene [133]. This is the first indication that NPs, similar to bacteria and PAMPs, could induce an innate memory resulting in increased resistance to subsequent infections. Another study demonstrated that pristine graphene (unable to induce cell activation capacity per se) could prime mouse bone marrow-derived macrophages to react to a subsequent challenge with different PAMPs (LPS, CpG, and R848) with an increased production of the inflammatory cytokines IL-6 and TNFα, in parallel to a decreased production of the anti-inflammatory cytokine IL-10, indicating graphene-induced memory reprogrammed macrophages in the direction of immune potentiation, despite the apparent inert nature of the nanomaterial [151]. In addition to the capacity of some NPs to induce innate memory, an interesting possibility is that the presence of NPs may modulate the priming/memory-inducing capacity of other agents. Preliminary results indeed show that the memory response of human primary monocytes primed with live BCG and challenged with LPS, in terms of production of inflammatory TNFα and IL-6 and anti-inflammatory IL-1Ra and IL-10 cytokines was significantly reduced if Au NPs were present with BCG during priming [59]. These findings open the possibility of novel approaches of immunomodulation, in which NPs can be used for improving vaccine efficacy and general resistance to infections, and also for modulating and rebalancing the altered immune responses in a range of immune-related diseases, such as chronic inflammatory, degenerative, and autoimmune diseases [152]. 
Conclusions and Future Perspectives Human exposure to NPs has become increasingly frequent in modern society, both from an occupational and consumer standpoint, and in biomedical applications. Here we addressed the impact that NPs may have on the innate immune response, in particular within the context of the protective response to live bacteria. NPs can interfere with bacterial life, motility, growth, and biofilm formation, and can decrease phagocytosis of bacteria by human immune cells. Since the immune reaction to live bacteria is much more complex than that to PAMPs, generally used in vitro as a bacterial proxy, reliable assays for assessing the capacity of NPs to affect the innate immune response should make use of live bacteria. It also becomes apparent that the NP impact on innate responses to bacterial infection is not limited to the immediate reaction. In vaccination strategies, NPs can facilitate the development of long-lasting specific immunity against pathogenic bacteria at multiple levels: by improving antigen delivery to antigen-presenting cells, by acting as adjuvants that amplify adaptive responses through a controlled inflammatory reaction at the site of antigen presentation, by activating antigen-presenting cells for more efficient antigen presentation, and through modulation of innate memory, which could prime innate immune cells towards a more efficient response to the vaccine. Moreover, NPs may alter the composition of the host microbial community, potentially altering synergism between the host and microflora during immune responses. In each case, in order to understand the full potential impact of NP exposure on human health and innate immunity we need to examine their interactions with live bacteria in suitable in vitro and in vivo models. Overall, based on the most recent data, the possibility of using NPs for modulating innate immune responses towards increased resistance to infections is a promising and realistically feasible possibility.
Neuroprotection or Neurotoxicity of Illicit Drugs on Parkinson's Disease Parkinson's Disease (PD) is currently the most rapidly growing neurodegenerative disease and, over the past generation, its global burden has more than doubled. The onset of PD can arise from environmental, sporadic or genetic factors. Nevertheless, most PD cases have an unknown etiology. Chemicals, such as the anthropogenic pollutant 1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine (MPTP) and amphetamine-type stimulants, have been associated with the onset of PD. Conversely, cannabinoids have been associated with the treatment of its symptoms. PD and medical cannabis are currently under the spotlight, and research into the benefits of cannabis for PD is ongoing worldwide. However, the described clinical applications and safety of pharmacotherapy with cannabis products are yet to be fully supported by scientific evidence. Furthermore, novel psychoactive substances are currently a popular alternative to classical drugs of abuse, representing an unknown health hazard for young adults who may develop PD later in their lifetime. This review addresses the neurotoxic and neuroprotective impact of illicit substance consumption in PD, presenting clinical evidence and the molecular and cellular mechanisms underlying this association. This research area is of great importance for contemporary society, since the legalization of illicit drugs is under discussion, which may have consequences both for the onset of PD and for the treatment of its symptoms. Introduction Neurodegenerative diseases are progressive, incapacitating conditions involving the loss of function of nerve cells in the brain or peripheral nervous system. These diseases affect millions of people worldwide, who may suffer an early death. The burden of neurodegenerative diseases has increased significantly worldwide over the past 25 years, mostly due to population growth, population ageing and the environmental stress associated with contemporary societies [1][2][3]. These diseases have become the leading cause of disability and death in developed countries and, in spite of decades of research, there is no way to cure them or slow down their progression [4]. Neurodegenerative diseases include Alzheimer's, amyotrophic lateral sclerosis, Huntington's, Lewy body disease and Parkinson's, among others. The two most prevalent neurodegenerative diseases are Alzheimer's (AD) and Parkinson's disease (PD). About 5% of people over the age of 85 suffer from PD. The two isoforms CB2A and CB2B are predominantly expressed in the testis and in the spleen [70]. Although several studies have shown the presence of CB2 in the brain, the role of CB2 in endocannabinoid-mediated synaptic transmission is still largely elusive [71,72]. However, it was reported that in medial prefrontal cortical pyramidal neurons, intracellular CB2 reduces neuronal firing through the opening of Ca2+-activated chloride channels, suggesting its involvement in the regulation of neuronal activity [73]. Moreover, CB2 receptors are involved in neuroinflammation by modulating microglia activation and migration [74] and are consequently associated with neurodegenerative disorders [75]. Accordingly, the CB2 receptor is seen as a potential therapeutic target for PD, since it modulates the inflammatory process and does not trigger undesirable psychoactive effects [76,77].
Furthermore, several lines of evidence suggest that cannabinoid compounds may interact with other receptors besides CB1 and CB2, namely the transient receptor potential cation channel subfamily V member 1 (TRPV1), the peroxisome proliferator-activated receptors (PPARs) and G-protein-coupled receptors (GPCRs) [78,79]. The CB1 receptor structure has a classical 7TM fold (Figure 1a) similar to other rhodopsin family class A GPCRs [80]. Endocannabinoids bind to the CB1 hydrophobic binding pocket, activating the receptor. However, the position of the ligand-binding pocket of CB1 is different from the previously described binding sites of other class A GPCRs. Ligands lie low in the binding pocket of CB1, immediately above the conserved W356 [81]. Conformational changes in the surroundings of the residue W356 have been proposed as triggering CB1 activation [81]. CB2 has a high degree of homology with CB1, sharing 44% sequence identity [82]. The architecture of CB2 also comprises a 7TM fold (Figure 1b). The structural data indicate a critical role for the residue W258 as the toggle switch for CB2 activation [83]. Overall, the structural data point out differences in the binding pockets of CB1 and CB2 (in Figure 1, the N-terminal loop, shown in red, occupies the polar zone of the binding pockets; the CB1 receptor is shown bound to taranabant and the CB2 receptor bound to AM10257). The endocannabinoid system plays an important role in central nervous system development and synaptic plasticity. This system comprises the endogenous cannabinoids, the cannabinoid receptors and the enzymes responsible for the synthesis and degradation of the endocannabinoids. N-arachidonoyl-ethanolamine, or anandamide (AEA), was the first endocannabinoid found, in the early 90s, by Raphael Mechoulam and his laboratory team [84]. Later, the same research group found 2-arachidonoylglycerol (2-AG), docosatetraenoyl ethanolamide (DEA) and noladin ether (2-AGE) [85,86]. Currently, about 15 endocannabinoids have been identified [87]. Endocannabinoids act on CB1, CB2, TRPV1, PPARs and several orphan receptors [87,88] and can be full agonists, partial agonists and/or antagonists, depending on the endocannabinoid and the receptor. For example, AEA is a high-affinity, partial agonist of CB1 and almost inactive at CB2, whereas 2-AG acts as a full agonist at both CBs with moderate-to-low affinity [89]. In the CNS, endocannabinoids are produced in post-synaptic neurons and act on presynaptic CB1, inhibiting Ca2+ channels and decreasing neurotransmitter release by the vesicular release machinery [90,91]. 2-AG and anandamide are seen as 'circuit breakers' protecting glutamatergic neurons from excessive excitatory neurotransmission. Recently, it was suggested that the CB2 receptor mediates the reduction of excitability in the mouse ventral tegmental area [92]. In addition, several lines of evidence support endocannabinoid-mediated communication between neurons, astroglia and microglia [90,93,94].
In most cases, endocannabinoid-mediated retrograde signaling starts with the production of 2-AG, in response to increased intracellular Ca2+ concentration and/or activated Gq/11-coupled receptors [95,96]. 2-AG is then released into the extracellular space, via a mechanism not yet fully elucidated, and arrives at the presynaptic terminal, where it binds to CB1. Activated CB1 suppresses the release of neurotransmitter in two ways: first, by inhibiting voltage-gated Ca2+ channels, which reduces presynaptic Ca2+ influx; second, by inhibiting adenylyl cyclase (AC) and the subsequent cAMP/PKA pathway [95,96]. Signal termination requires the degradation of 2-AG by monoacylglycerol lipase (MAGL), which is expressed in selective synaptic terminals and glial cells [95,96]. The differential recruitment of 2-AG and AEA by several types of presynaptic activity has been described in the extended amygdala [97]. Moreover, AEA negatively regulates 2-AG metabolism in the striatum, an effect which can be mimicked by the activation of TRPV1 [98]. Endocannabinoids play an important role in regulating basal ganglia physiology and motor function. Moreover, the modifications occurring in endocannabinoid signaling after dopamine depletion, observed both in experimental models of PD and in patients with this condition, provide strong evidence for the involvement of the endocannabinoid system in PD. An abnormally high level of AEA was found in 16 untreated patients who were diagnosed with PD [99]. It was suggested that the increase of AEA might be the result of a compensatory mechanism occurring in the striatum of PD patients, aimed at normalizing chronic dopamine depletion, thus extending to humans, for the first time, previous data on animal models of PD [99]. This can be explained since anandamide may inhibit dopamine transporter function by a receptor-independent mechanism [100]. In untreated MPTP-lesioned primates, high levels of endocannabinoids were also found, namely AEA and 2-AG in the striatum and 2-AG in the substantia nigra [101]. Furthermore, anandamide can protect neurons from toxic insults such as glutamatergic excitotoxicity, nutrient deprivation, hypoxia, ischemia and apoptosis [102][103][104][105]. These protective effects of anandamide have been reported to be mediated by CB1 and CB2 cannabinoid receptors, whereas activation of TRPV1 has been suggested to mediate anandamide-induced apoptosis [106]. Several studies showed that treatment with anandamide lowered motor activity and produced hypothermia and analgesia in mice, and increased inactivity time and markedly decreased ambulation and the frequency of spontaneous non-ambulatory activities in rats [107,108]. The hypokinetic actions of AEA were boosted when co-administered with the selective inhibitor of endocannabinoid uptake N-(3-furylmethyl) eicosa-5,8,11,14-tetraenamide (UCM707) [109]. A contradictory study showed that intravenous administration of anandamide increased extracellular dopamine levels in the nucleus accumbens shell of awake, freely moving rats, a characteristic effect of most drugs of abuse in humans [110]. Levels and activities of AEA and 2-AG can be manipulated by inhibition of the fatty acid amide hydrolase (FAAH) enzyme, the action of which is reduced in experimental models of PD [111,112]. However, FAAH inhibition has been shown to markedly increase AEA tissue levels while reducing 2-AG levels [113].
The systemic administration of N-(4-hydroxyphenyl)-arachidonamide (AM404) enhances anandamide (AEA) availability in the biophase and exerts antiparkinsonian effects in 6-hydroxydopamine-lesioned rats. This is due to a reduction of D2 dopamine receptor function together with a positive modulation of 5-HT1B serotonin receptor function [114]. TRPV1 receptors also seem to play an important role in the development and expression of dyskinesias in PD. The systemic administration of oleoylethanolamide, an agonist of PPARα and antagonist of TRPV1 receptors, reduces the development of dyskinesias through a TRPV1-dependent pathway in a mouse model of PD, not involving PPARα receptors [115]. The intake of this compound induced the reduction of striatal FosB protein overexpression and of the phosphoacetylation of histone 3, which are molecular markers of L-DOPA-induced dyskinesias. Actually, FosB overexpression was previously associated with L-DOPA-induced dyskinesia in nitric oxide synthase-positive striatal interneurons in hemiparkinsonian mice [116]. This observation was correlated with the activation of ERK1/2 due to increased phosphorylation of its regulatory kinases [116]. It was suggested that GPR55 modulates anti-neuroinflammatory responses and movement control. These observations point to this receptor as a therapeutic target for the non-dopaminergic symptomatic treatment of PD [78,117]. The anti-inflammatory, antioxidant and proneurogenic properties of the endocannabinoid system make it a potential target to reduce the symptoms of a number of neurodegenerative conditions [65,[118][119][120]. Nonetheless, there are challenges in the development of drugs lacking psychoactive side effects and, to that end, targeting the anti-inflammatory, non-psychotropic CB2 receptor is particularly promising [119][120][121]. There are several lines of evidence supporting the involvement of the endocannabinoid system in PD. Magnetic resonance imaging studies have shown regional differences in CB1 receptor availability in PD patients' brains. According to one of these studies, CB1 availability was increased in mesolimbic and mesocortical regions of the brain, which are usually dopamine depleted in PD, and decreased in the substantia nigra [122]. Two other studies showed active involvement of CB1 in the regulation of L-DOPA action during PD therapy, preventing motor fluctuation through modulation of the striatonigral and striatopallidal pathways [123,124]. Regarding CB2, this receptor was found at significantly lower levels in tyrosine hydroxylase-containing neurons from the substantia nigra of PD patients [125]. In contrast, studies in glial elements from post-mortem tissues of PD patients showed an increase in CB2 availability, quantified either by immunochemistry or by gene expression. These observations were then corroborated by studies in animal models [125][126][127][128]. The authors suggest that up-regulation of CB2 in glial cells is an indicator of the involvement of this receptor in neuroprotection. Clinical Observations on Phytocannabinoids Use in Parkinson's Disease In countries where cannabis is legal, marijuana is used recreationally to self-medicate symptoms of disorders such as PD, multiple sclerosis, amyotrophic lateral sclerosis and schizophrenia [129]. About 44% of the population with PD is currently using marijuana [130]. In addition to cannabis, three cannabis-derived products are also in use: dronabinol, nabiximols and nabilone [129].
Despite the lack of solid scientific evidence, PD patients using cannabis mention a positive impact on mood, memory, fatigue, obesity, sleep, pain, tremor, rigidity and bradykinesia after its consumption. A study with 85 PD patients combined half a teaspoon of cannabis leaves with their prescribed pharmacotherapy for PD. About 46% of these individuals reported relief of PD symptoms on average 1.7 months after the first use of marijuana, suggesting chronic use of marijuana may be required for improvement in symptoms [131,132]. Overall, patients using cannabis have been reporting a lower level of disability after the intake of phytocannabinoids [130,[133][134][135]. On the other hand, Carroll et al. have conducted a clinical trial with an orally administered cannabis extract which resulted in no objective or subjective improvement in dyskinesias or parkinsonism, showing that results for clinical cannabis in PD still seem to be inconsistent [136]. Therefore, in determining if medical marijuana is beneficial as a PD therapeutic, some factors, such as chemical constituents, dose, delivery system and clinical outcomes, must be carefully controlled [137]. Cannabis is a complex plant with two main subspecies, namely Cannabis sativa and Cannabis indica, which can be differentiated by C. indica having a higher cannabidiol content and C. sativa having a higher ∆9-THC content [138]. In addition, there are differences in ∆9-THC and CBD amounts from strain to strain. Moreover, with the rise of the marijuana business, each sample might have different levels of ∆9-THC and CBD [139]. Furthermore, of the 538 natural compounds identified in C. sativa, more than 100 are phytocannabinoids. Therefore, the use of therapeutic cannabis is surely a complex issue from the composition point of view [140,141]. Regarding the two most abundant phytocannabinoids found in C. sativa, CBD has shown to be the most promising, relieving some PD symptoms [142]. The first clinical studies with CBD pointed to a decrease in psychotic symptoms [93] and significant improvements in measures of functioning and well-being of PD patients with no psychiatric comorbidities [94]. Overall, CBD shows significant therapeutic effects in reducing tremor, dyskinesia, rigidity and some non-motor symptoms, such as psychosis and rapid eye movement sleep behavior disorder, and in improving daily activities and the stigma linked to relational and communication problems in PD [68,[142][143][144]. However, larger-scale studies and randomized double-blind controlled studies are still needed to confirm these observations, since several reports mention negative effects [143]. A major concern with phytocannabinoid use is the inherent risk of PD patients developing psychosis and cognitive impairment. This aspect makes them more susceptible to psychotomimetic CB1 agonists, such as ∆9-THC or nabilone [145,146]. In fact, it is well known that nabilone may induce psychosis, even in patients without a psychiatric history [146]. Thus, patients with dementia should not be treated with agonists of CB1, to avoid further aggravation of neuropsychiatric symptoms [68]. Moreover, the PD patient's personality must be taken into account to avoid the development of addictive behavior [147]. Overall, the evidence on the therapeutic use of medical marijuana and cannabinoid derivatives in patients with PD is heterogeneous and of poor quality [146].
Consequently, there is an urgent need for further scientific studies and for educating caretakers on the pharmacology, known risks and known benefits of cannabis [148]. Studies on the Molecular and Cellular Mechanisms Underlying Clinical Observations The positive clinical evidence observed in PD patients using medical marijuana and cannabinoid derivatives is leading researchers to address the cellular and molecular mechanisms underlying such outcomes. Therefore, a couple of studies suggest that molecules which bind to CB1 and/or CB2 receptors might be beneficial, since pharmacological modulation of the endocannabinoid system has been shown to reduce chronic activation of the neuroinflammatory response, reduce mitochondrial dysfunction and maintain calcium homeostasis, resulting in a decrease of oxidative stress, which prevents the proapoptotic cascade and promotes neurotrophic support [59,122]. Studies in human differentiated neuroblastomas, which have dopamine beta-hydroxylase activity, show that ∆9-THC seems to be neuroprotective by up-regulating the expression of the gene encoding CB1, suggesting a direct neuronal protective effect of ∆9-THC mediated via PPARγ and not involving CB2 [55]. The activity of dopamine beta-hydroxylase modulates the levels of dopamine [149]. Busquets-Garcia et al. (2016) observed that normal circulating adrenaline and noradrenaline levels are sustained after stress by AM6545 pre-treatment, a full agonist of CB1 [150]. This supports the clinical observation that the increased CB1 availability in mesolimbic and mesocortical regions of the brain seems to be neuroprotective [122]. Moreover, the involvement of PPARγ activation in the neuroprotective effect of ∆9-THC is also suggested, as it induces the transcription of proteins involved in oxidative stress defense and mitochondrial biogenesis, promoting normal mitochondrial function in PD [151]. In addition, the reduction of oxidative stress was linked to the restoration of peroxisome proliferator-activated receptor-gamma coactivator (PGC-1α) levels, which regulates energetic metabolism [151]. In fact, low basal levels of PGC-1α are expected to be associated with enhanced glycolytic metabolism, low oxygen consumption and elevated reactive oxygen species (ROS) levels [152]. In addition, the observed ∆9-THC-induced mitochondrial biogenesis may be linked to its ability to induce mitochondrial transcription factor A (TFAM) expression and to restore mitochondrial DNA levels, leading to increased cytochrome c oxidase subunit 4 (COX4) [151], the terminal enzyme complex of the respiratory chain, which is linked to PD [153] (Figure 2).
A high concentration of glutamate induces deregulation of intracellular Ca2+ levels, which results in mitochondrial Ca2+ overload and membrane depolarization, triggering the mechanism of cell death [94]. ∆9-THC also seems to play a neuroprotective role against glutamate-induced neurotoxicity in neural primary cells, by restoring the mitochondrial membrane potential, which produces an anti-apoptotic effect. In the same study, a decrease in the levels of glutamate was observed, which in turn decreases caspase-3 levels, one of the critical enzymes of apoptosis. Overall, CB1 activation by ∆9-THC seems to slow down the degenerative processes in PD associated with the overflow of glutamate [154]. Cannabidiol also presents neuroprotective activity against MPP+, a neurotoxin which triggers PD, through activation of the nerve growth factor receptor (NGF), also known as tropomyosin receptor kinase A (TRKA), and an increment in the expression of axonal and synaptogenic proteins [155]. Other compounds found in Cannabis sativa, such as β-caryophyllene and ∆9-tetrahydrocannabivarin (∆9-THCV), showed the potential to prevent the onset of PD. β-caryophyllene activates CB2, leading to a decrease of oxidative/nitrosative stress, a decrease of pro-inflammatory cytokine release and an inhibition of gliosis, which reduces neuroinflammation and nigrostriatal degeneration [156,157]. ∆9-THCV is a potent CB2 receptor partial agonist in vitro and antagonizes cannabinoid receptor agonists in CB1-expressing tissues. However, in vivo ∆9-THCV behaves either as an antagonist or, at higher doses, as an agonist of CB1 [58]. It has been shown that acute administration of this phytocannabinoid attenuated the motor inhibition caused by changes in glutamatergic transmission, and that chronic administration of ∆9-THCV reduced the loss of tyrosine hydroxylase-positive neurons caused by 6-hydroxydopamine in the substantia nigra [158] (Figure 2). In general, the effects of some phytocannabinoids on PD appear to be protective either by binding to the CB1 receptor, inhibiting dopamine beta-hydroxylase activity and decreasing glutamate levels, or by binding to CB2, reducing neuroinflammation. Is There Enough Data Supporting a Protective or Therapeutic Role of Cannabinoids on PD? Overall, clinical observations and research outcomes support the endocannabinoid system as a target to alleviate the symptoms of PD. Actually, patients using cannabis have been reporting a lower level of disability after the intake of phytocannabinoids [130,[133][134][135]. However, the evidence on the therapeutic use of cannabinoids in patients with PD is heterogeneous and of poor quality [146].
At the molecular and cellular levels, the evidence is promising for the use of phytocannabinoids in PD. Phytocannabinoids reduce the neuroinflammatory response, mitochondrial dysfunction and oxidative stress [59,122]. Additionally, ∆9-THC exerts a neuroprotective effect against glutamate-induced neurotoxicity in neural primary cells, slowing down neuron degeneration due to the overflow of glutamate [154]. Epidemiological studies are also encouraging. A retrospective survey found an improvement of PD symptoms with medical cannabis in the initial stages of treatment, with no evidence of major adverse effects [49]. Another epidemiological study pointed to the possible effect of cannabidiol in improving the quality of life of PD patients without psychiatric comorbidities. However, the authors found no statistically significant differences concerning the motor symptoms of PD [159]. Despite the evidence suggesting that the consumption of cannabinoids can reduce PD symptoms, some authors argue that there are not enough studies for such a conclusion [50][51][52][53][54]. Stampanoni Bassi et al. (2017) concluded that results from available clinical studies are controversial and inconclusive due to several limitations, including small sample size, lack of standardized outcome measures and expectancy bias [54,160]. They propose studies involving a larger sample of patients, appropriate molecular targets, objective biological measures (i.e., cannabinoid blood levels) and specific clinical outcome measures to clarify the effectiveness of cannabinoid-based therapies [54]. Moreover, most of the studies investigating the therapeutic potential of cannabinoids in PD have been conducted in animal models, and an insufficient number of clinical trials has been carried out. Furthermore, the therapeutic benefits demonstrated in animal models will require further study in humans, avoiding direct extrapolation between them, since animal models may not properly induce or recapitulate PD pathology [51,52,119,148,[161][162][163][164]. Thus, at present, the studies investigating the role of phytocannabinoids are too few and too limited to establish their beneficial effects. The improvements needed for further successful research in this area are (i) larger sample sizes; (ii) well-designed studies testing cannabis in the PD patient population to establish evidence-based data on the scope of pharmacological benefits and adverse effects; (iii) long-term evaluation of disease progression; (iv) identification of the precise formulation for each type of pathology and each subset of patients for achieving a neuroprotective effect [51,52,119,148,[162][163][164]. Overall, there is a clear need for further studies in humans. Amphetamine-Type Stimulants and Parkinson's Disease According to the World Health Organization, amphetamine-type stimulants are a group of drugs of abuse whose principal members include amphetamine and methamphetamine [165]. Amphetamine was first synthesized in 1887 in Germany as phenylisopropylamine by the Romanian chemist Lazăr Edeleanu [166]. Methamphetamine was synthesized in 1893 by Nagayoshi from ephedrine [167], an alkaloid present in the plant Ephedra, isolated for the first time in 1885 by G. Yamanashi and named by Nagai in 1887 [168]. Amphetamine-type stimulants have been used for recreational purposes and to improve physical and mental performance in fatigued subjects.
During World War II, amphetamine and methamphetamine were used extensively by Allied and Axis forces for their stimulant and performance-enhancing effects [169]. As the addictive properties of the drugs became known, governments began to place strict controls on their sale. As a result of the United Nations 1971 Convention on Psychotropic Substances, amphetamine became a schedule II-controlled substance, as defined in the treaty, ratified by all 183 state members at the time. Despite strict government controls, amphetamine and methamphetamine are used for recreational purposes and, according to the European Monitoring Centre for Drugs and Drug Addiction (EMCDDA), amphetamines are associated with a large number of health emergencies in the north and east of Europe [170]. The monitoring centre estimates that 1.2 million young Europeans aged 15-34 have consumed amphetamine-type stimulants in the last year, and that 12.4 million Europeans have consumed them at some point during their lifetime [170]. Amphetamine-type stimulants share structural features with the catecholamine neurotransmitters, such as noradrenaline and dopamine, and act on the monoamine reuptake transporters, which have twelve transmembrane (TM) helices arranged in a barrel-like bundle [171]. Amphetamine-type stimulants have an aromatic ring and a nitrogen on the aryl side-chain, which is a prerequisite for competitive binding to the monoamine reuptake transporters: the noradrenaline transporter (NET), the dopamine transporter (DAT) and the 5-HT transporter (SERT) [171]. All three transporters are membrane-embedded proteins (Figure 3) expressed in the presynaptic neuronal terminals. Monoamine reuptake transporters mediate the uptake of neurotransmitters from the synaptic cleft into the pre-synaptic neuronal terminals using the energy gradient produced by the Na+/K+ ATPase. DAT and NET translocation of dopamine and norepinephrine involves co-transport of two Na+ and one Cl− ion along with one molecule of substrate. SERT co-transports one 5-HT molecule with one Na+ and one Cl−, along with one K+ ion in the opposite direction. Since amphetamine competes with endogenous monoamines for transport into the nerve terminals via these transporters, the higher the concentration of amphetamine present in the synapse, the fewer molecules of endogenous catecholamines are taken up, due to competitive inhibition of DAT by amphetamine. Consequently, there is a greater stimulation effect on postsynaptic receptors by dopamine [171].
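As an illustrative aside not drawn from the cited studies, this competition can be sketched with the standard competitive-inhibition form of Michaelis-Menten transport kinetics, treating amphetamine as a competitive inhibitor of dopamine uptake by DAT; the symbols used here (v, Vmax, Km, Ki, [DA], [AMPH]) are generic kinetic parameters, not values reported in reference [171]:

\[ v_{\mathrm{uptake}} = \frac{V_{\max}\,[\mathrm{DA}]}{K_m\left(1 + \dfrac{[\mathrm{AMPH}]}{K_i}\right) + [\mathrm{DA}]} \]

At a given synaptic dopamine concentration, increasing [AMPH] raises the apparent Km by the factor (1 + [AMPH]/Ki) and therefore lowers the uptake rate, leaving more dopamine in the synaptic cleft to stimulate postsynaptic receptors, which is the qualitative relationship stated above.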
Amphetamine also has an affinity for the vesicular monoamine transporter 2, preventing the translocation of monoamines into the intraneuronal storage vesicles and reversing the direction of the reuptake transporter; it therefore pumps neurotransmitters out of neurons into the synapse [94,172]. In addition, amphetamine also increases synaptic monoamine concentrations by inhibiting monoamine oxidase, which catalyzes the breakdown of monoamine neurotransmitters in the CNS. The abuse of amphetamine-type stimulants has been largely described as affecting dopaminergic transmission and function, inducing dopamine depletion, raising extracellular dopamine levels and prolonging dopamine receptor signaling in the striatum. The consequences of amphetamine-type stimulant intake have suggested a relationship between their consumption and the onset of PD [173,174]. Studies performed with amphetamine and methamphetamine show that these two substances have similar pharmacokinetic profiles and that their dopamine responses in the striatum are equivalent [175]. Some studies have tried to verify the effects of amphetamine-like stimulants in the treatment of PD symptoms; however, no significant improvement has been found [176]. Clinical Observations of Amphetamine-Type Stimulants Use in Parkinson's Disease The effects of amphetamine-type stimulants include euphoria, mood elevation, a sense of wellbeing, energy, wakefulness, decreased fatigue, and increased focus and alertness [177][178][179]. Repeated administration of amphetamine-type stimulants leads to neuroadaptation and impaired basal functioning, which can result in a depressed mood, cognitive impairment and leakage of the blood-brain barrier by hypoperfusion in the striatum, causing hypoxia and dopamine reduction [180][181][182]. Chronic methamphetamine use causes neurotoxicity, damaging the dopamine neurons in the nigrostriatal pathway due to a rise in α-syn levels in the substantia nigra, which may increase the risk of developing PD in later life [180,[183][184][185]. In addition, it has been observed that the intake of these substances during adolescence in mice may later increase their vulnerability to neuroinflammation and cell death by toxins, such as 1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine (MPTP) [186]. Over the years, these damaged cells may die precociously, depleting the reserve of neural cells necessary for normal neurological function and, when a critical number of cells is lost, parkinsonism starts developing [183]. In addition, neurotoxic doses of methamphetamine cause depletion of the dopamine content of striatal tissue. This depletion should also be considered a clinical consequence for the brain, regardless of the absence of neuronal loss or physiological nerve changes [187].
Thereby, amphetamine-type stimulants make dopamine pathways, involved in motor function and limbic-motor integration, vulnerable to progressive degeneration, increasing the predisposition to PD [188]. Studies on the Molecular and Cellular Mechanisms Underlying Clinical Observations Protein misfolding and aggregation processes are involved in several neurodegenerative diseases and are a consequence of conformational changes in the amyloid protein precursors. In PD, α-syn aggregates form amyloid deposits in the brain called Lewy bodies, which are associated with the loss of dopaminergic neurons in the substantia nigra. Thus, some researchers are studying the relationship between α-syn and amphetamine-type stimulants, including conformational changes, post-translational modifications and increased protein expression [189][190][191]. Amphetamine and methamphetamine bind tightly to the N-terminus of intrinsically unstructured α-syn, inducing a folded conformation. Such a folded conformation increases the likelihood of misfolding and aggregation. Consequently, the authors suggest that this mechanism may increase the incidence of PD amongst amphetamine and methamphetamine users [189,190]. Additionally, Wang et al. (2014) suggested an increment of α-syn levels due to methamphetamine-induced excessive heat. Actually, the temperature in the mid-brain region can exceed 41 °C upon ingestion of this stimulant [191]. Repeated bouts of excessive heat increase α-syn expression to prevent cells from heat damage by inhibition of stress signaling. Consequently, this causes an accumulation of α-syn promoting its aggregation, which in turn damages neurons [191]. Moreover, post-translational modifications of α-syn, such as phosphorylation, nitration, acetylation and ubiquitination, have also been pointed out as risk or beneficial factors for PD [192][193][194][195]. However, only protein nitration has been linked to the use of methamphetamine; it is pointed out as a risk because the increased post-translational modification of α-syn seems to mediate neurotoxicity, as judged by studies in human neuronal lines and mouse brain cells [192]. Methamphetamine influences gene expression of normal dopaminergic innervation in the striatum via stimulation of dopamine and glutamate receptors [196][197][198]. Low doses of methamphetamine were also found to induce the expression of a different set of genes in the lesioned, denervated striatum completely lacking dopamine. These observations implicate an alternative gene expression activation, independent of dopamine, in the presence of methamphetamine [196]. In addition, the authors suggest that the absence of dopamine might cause plastic changes that render the striatum differentially responsive to the effects of methamphetamine [196]. Another study observed the neurotoxic effects of methamphetamine in rodent models using epigenetic assays, showing that the consumption of this substance decreased cytosine methylation in the SNCA promoter region and consequently upregulated α-syn in the substantia nigra, contributing to Parkinson's-like behavior [199]. Regarding the cellular mechanisms underlying the neurotoxic effects of amphetamine-type stimulants, these substances activate nicotinic alpha-7 receptors, which increase intra-synaptosomal calcium, nitric oxide synthase and protein kinase C, leading to the production of high levels of nitric oxide and to dopamine oxidation, which promotes neurodegeneration [200].
The increase of nitric oxide synthase may modulate fundamental functions, since nitric oxide is involved in almost all vital functions, from platelet aggregation to neurotransmission [201]. Cells treated for 24 h with methamphetamine significantly increased their nitric oxide synthase levels, causing a rise in nitric oxide and α-syn levels that consequently promoted the aggregation of α-syn [202]. Another study has also suggested that tyrosine hydroxylase, the dopamine transporter, the vesicular monoamine transporter 2, nitric oxide synthase and reactive oxygen species may be involved in α-syn-mediated, methamphetamine-induced neuronal toxicity [203] (Figure 4). Oxidative stress induced by amphetamine-type stimulants is also linked to PD, since it increases the vulnerability of dopamine neurons. A study in pregnant primates exposed to methamphetamine showed that high levels of oxidative stress in pregnancy can compromise the population of nigrostriatal dopamine neurons and potentially elevate the risk of PD in the child's later life [204]. It was proposed that the higher levels of oxidative stress induced by amphetamine-like stimulants are a consequence of dopamine autoxidation, which increases excitotoxicity [205]. However, there is also evidence that exposure to low levels of methamphetamine induces a certain degree of cellular stress that can reduce the vulnerability of dopamine neurons to insults. The activation of a small stress response can be used to protect neurons against neurodegeneration and might be exploited pharmacologically [196].
The cellular mechanisms underlying stress-induced protection are associated with (i) a decrease of basal ERK1/2 and kinase B levels, involved in multiple cellular processes such as apoptosis; (ii) reduced activity of protein phosphatase 2, a protein phosphatase implicated in ERK1/2 dephosphorylation, inhibiting it; and (iii) upregulation of the pro-survival protein BCL-2, which plays an anti-apoptotic role [196]. Is There Enough Data Supporting a Neurotoxic Role of Amphetamine-Type Stimulants on PD? Overall, clinical observations point to amphetamine-type stimulants as neurotoxic. These substances damage dopaminergic neurons, involved in motor function and limbic-motor integration, increasing the predisposition to PD [188]. Molecular studies show that amphetamine upregulates α-syn in the substantia nigra, which accumulates and aggregates, in turn damaging neurons [191] and contributing to Parkinson's-like behavior [199]. Conversely, there is evidence that exposure to low levels of methamphetamine may reduce the vulnerability of dopamine neurons to insults. Epidemiological studies suggest an increased risk of PD for amphetamine-type stimulant users, independently of lifestyle [45,[206][207][208]. In fact, a nearly 3-fold increased risk of PD in amphetamine-type stimulant users vs. non-consumers has been described [209]. Moreover, a retrospective case-control study revealed that prolonged use of amphetamines is associated with an 8-fold increased risk of PD, with an average of 27 years between amphetamine exposure and the onset of disease signs [183]. Despite this epidemiological evidence, some studies suggest that there is not enough data to indicate that amphetamine-type stimulant exposure causes loss of dopamine neurons in humans, and consequently the appearance of PD [187,210]. In some consumers, the exposure to methamphetamine resulted in dopamine loss, more marked in the caudate than in the putamen, whereas in PD the putamen is distinctly more affected [187,210]. However, striatal dopamine deficiency is evident in methamphetamine consumers, which is explained by a loss of dopamine in intact neurons and/or loss of dopaminergic neurons. According to the authors, this can be partially resolved by dopamine substitution medication in some individuals [210]. Other studies agree that these drugs may not directly evoke PD, but might predispose the central nervous system to Parkinson-like syndromes after long-term exposure [174,211]. Perfeito et al. (2013) showed evidence of neurotoxic events linked to dopamine-induced oxidative stress and decreased protein quality control, and Volkow et al. (2015) showed an acceleration of the age-related loss of dopamine neuronal function [174,211]. Therefore, the use of amphetamine-type stimulants may be an initiating event in the development of PD and parkinsonism, in conjunction with other risk factors that a given individual may carry [212]. Corroborating that the interplay of genetic and environmental risk factors increases the susceptibility to sporadic PD, a recent study found a significantly higher allele and genotype frequency of the CYP2D6*4 variant in 174 sporadic PD patients when compared to 200 controls [27], supporting the hypothesis that poor metabolizer status may increase the risk of developing PD, especially in populations exposed to environmental toxins [27]. Cocaine Cocaine is extracted from leaves of two distinct species of the genus Erythroxylum (family Erythroxylaceae): Erythroxylum coca Lam.
and Erythroxylum novogranatense (Morris) Hieron [213]. Coca leaf chewing has been part of the Andean lifestyle for thousands of years. At the end of the 19th century, pharmaceutical and food products with coca leaf extracts were introduced in the market, achieving high popularity. Later, the active principle present in coca leaves was purified and used in medicine both as a stimulant for psychoanalysis and as an anesthetic. Simultaneously, the use of pure cocaine for recreational purposes also started. Nowadays, the cocaine market is the second-largest illicit drug market in the EU, after cannabis [214]. According to the EMCDDA 2019 drug report, about 4 million people in the EU used cocaine in 2018 [214]. Cocaine acts on presynaptic monoamine reuptake transporters, inhibiting monoamine neurotransmitter reuptake, which increases their levels in the synaptic cleft [215,216]. The atomic structure of the dopamine transporter of Drosophila melanogaster bound to cocaine was obtained in 2015 [217]. This structure was used as a template, in combination with computational tools, to study the binding and modulation of human dopamine transporter function by dopamine and cocaine. This study showed that cocaine binds the dopamine transporter competitively. However, the binding affinity depends on the conformational state of the dopamine transporter [216]. Nonetheless, cocaine binds competitively to the dopamine transporter, inhibiting dopamine reuptake [218]. Clinical Observations of Cocaine Use in Parkinson's Disease Cocaine exposure may have neurotoxic effects on dopaminergic neurons, since the total number of melanized dopamine cells in the anterior midbrain is reduced in cocaine users [219]. In addition, chronic cocaine use leads to down-regulation of post-synaptic dopamine receptors, which results in putamen hypertrophy as a compensatory process to produce more dopamine and maintain dopaminergic transmission [220,221]. However, in PD brains, both caudate and putamen volumes were smaller when compared to controls [222,223]. On the other hand, there is a published case report of a young adult who developed early parkinsonism after chronic cocaine use [224]. To date, the effect of cocaine use in PD is still controversial. Studies on the Molecular and Cellular Mechanisms In a similar way to amphetamine, cocaine binds tightly to the N-terminus of intrinsically unstructured α-syn, inducing a folded conformation, which increases the likelihood of misfolding, possibly leading to an increased incidence of PD amongst drug users [189]. Moreover, cocaine has been shown to increase the levels of α-synuclein [225][226][227]. A recent genetic study links cocaine abuse to secondary parkinsonism as a consequence of a potential gene-environment interaction, namely a detected leucine-rich repeat kinase 2 (LRRK2) risk variant [224]. Is There Enough Data Supporting a Neurotoxic Role of Cocaine on PD? Despite the scarcity of clinical and bench research data on cocaine and PD, several studies have shown that cocaine is not a risk factor for PD onset [45,[228][229][230]. There is a consensus that high levels of cytosolic dopamine are neurotoxic. It was observed that, after cocaine administration, the cytosolic levels of dopamine remained unchanged, suggesting that cocaine administration may not be considered a risk factor in terms of dopamine-induced neurodegeneration [228]. This is reinforced by the observation that cocaine enhances dopamine levels in the dorsal, but not in the ventral, striatum [229].
Finally, a three-day administration of cocaine hydrochloride via implanted minipumps did not produce axonal degeneration in the frontal agranular cortex or neostriatum [230]. Interestingly, cocaine has been shown to alleviate the symptoms of PD in monkeys [231]. Opiates and Parkinson's Disease Opiates comprise the naturally occurring alkaloids found in the opium poppy Papaver somniferum, such as morphine and codeine, and also their semi-synthetic derivatives, such as heroin, hydrocodone, oxycodone and buprenorphine, among others [232]. Most pharmaceutical opioids are controlled under the Single Convention on Narcotic Drugs of 1961, with some exceptions, such as buprenorphine, which are controlled under the Convention on Psychotropic Substances of 1971. The prevalence of opiate consumption in Europe in 2017 was estimated at 0.7% of the adult population, representing nearly 3.8 million opioid users. In Western and Central Europe, where there are an estimated 2 million opioid users (0.6% of the adult population), the use of opioids is dominated by heroin [233]. In addition to heroin, the most common opioids are opium, morphine, methadone, buprenorphine, tramadol and various fentanyl analogues. Opioids bind to G protein-coupled (Gi and/or Go) receptors [234,235]. The opioid receptors are present in the CNS and are classified into four types: µ, κ, δ and nociceptin [236]. µ-receptors mediate natural rewards, initiating addictive behaviors [237], whereas δ- and κ-receptor activity appears to play a role in improving mood states [238,239]. These receptors bind endogenous and exogenous opioids structurally related to the natural plant alkaloids found in opium, but also small opioid peptides. The nociceptin receptor binds medium-sized endogenous opioid peptides such as nociceptin and orphanin. These receptors exhibit seven transmembrane helices, typical of GPCR structures. The atomic structures of opioid receptors revealed common features for opioid recognition, as predicted previously [240][241][242]. Opioid receptor binding sites contain an anionic aspartic acid residue that forms an ionic bond with the amino group of opioid ligands (Figure 5). The hydrophobic binding pocket accommodates the aliphatic substituents on the amino group, and the phenolic group of morphine engages an extended hydrogen-bonding network between two water molecules and a conserved histidine residue in transmembrane helix 6 (TM6) [243]. The activation of the receptor is due to a conformational change displacing TM6 by 10 Å and, to a lesser extent, TM5 and TM7. These movements open a large pocket on the intracellular side of the receptor, allowing coupling of the heterotrimeric G proteins [243]. After binding to these receptors, opioids inhibit voltage-dependent Ca2+ channels or activate inwardly rectifying potassium channels, thereby diminishing neuronal excitability [234]. Opioids also inhibit the cyclic adenosine monophosphate pathway and activate mitogen-activated protein kinase cascades, both of which affect cytoplasmic events and the transcriptional activity of the cell [234]. Overall, opioids inhibit neurons by decreasing either neuronal firing, through postsynaptic receptors, or neurotransmitter release, through presynaptically located receptors. Finally, since opioid receptors are expressed on both excitatory and inhibitory neurons, they can exert activation or inhibition of the neural circuits [234].
In addition to opioids, opioid peptides, which share a common N-terminal Tyr-Gly-Gly-Phe signature sequence, also interact with opioid receptors, namely β-endorphin, enkephalins and dynorphins, which bind to µ, δ and κ, respectively [234]. In the rat model of PD, studies suggest the involvement of opioid pathways in the mechanisms modulating nociceptive thresholds [244]. Another study observed an increase in the survival rate of dopaminergic neurons treated with a δ-opioid peptide when exposed to the neurotoxin 6-OHDA, both in vitro and in vivo [245]. These results suggest that δ-opioid receptors may be protective in PD [245]. Moreover, further studies performed in rat and primate models of PD indicate that δ-opioid receptors reduce dyskinesia induced by levodopa [246,247]. Regarding endogenous opioids, it is now accepted that endogenous morphine, structurally similar to the plant morphine alkaloid, is synthesized by mammalian cells from dopamine [248]. It binds to the µ-opioid receptor and induces antinociceptive effects. In PD patients, the levels of endogenous morphine and its metabolites were increased [249]. This increment may be associated with the fatigue, depression and pain symptoms experienced by PD patients [249]. Opioids affect locomotion and reward behavior mediated by the basal ganglia [250,251]. Since the striatum is rich in both µ- and δ-opioid receptors, these substances can act as modulators of dopamine, gamma-aminobutyric acid (GABA) and glutamate neurotransmission [250,252]. An increase in opioid transmission in the two main striatal outputs has been observed in monkeys and humans with dyskinesias induced by levodopa, which may indicate that the endogenous opioid system is involved in mitigating the effect of abnormal dopaminergic stimuli. This knowledge can help to find therapeutic strategies for the treatment and prevention of motor complications in PD [253]. On the other hand, prolonged treatment with oxycodone-naloxone seems to affect only specific subgroups of PD patients with pain, which suggests that successful clinical improvements require a careful identification and characterization of PD patients [254]. Cellular studies showed that δ-receptor activation attenuates α-synuclein expression and aggregation, reducing cytotoxicity in an in vitro PD model exposed to MPP+ stress [255]. δ-receptor activation can largely attenuate α-synuclein expression via DJ-1 upregulation under both genetic (wild-type or A53T-mutant α-syn) and environmental (hypoxic) conditions. Moreover, the δ-receptor action involves transducer of regulated CREB1 (TORC1)/salt-inducible kinase 1 (SIK1) downregulation in the former condition and cAMP response element-binding protein (CREB) phosphorylation in the latter condition [256]. The activation of the δ-receptor also seems to be cytoprotective against both hypoxia and MPP+ through the regulation of PTEN-induced kinase 1 (PINK1) and caspase-3 pathways [257]. Although activation of δ-receptors has an anti-parkinsonian effect, adverse effects of opioids have also been observed. Long-term exposure to tramadol is known to induce tremor, muscular rigidity and tardive dyskinesia [258]. These symptoms are possibly related to: (i) serotonin's inhibitory effect on dopamine neurotransmission within the basal ganglia system, which may result in altered function in the striatum [259]; and (ii) the inhibition of serotonin reuptake [260].
Morphine and Parkinson's Disease Morphine is a partial agonist of µ-opioid receptors and acts as a weak agonist of δ-opioid receptors. However, morphine does not seem to act through κ-opioid receptors [236]. It was suggested that morphine raises dopamine levels in the brain by stimulating µ-opioid receptors, which inhibit GABA release and consequently enhance dopamine release [261,262]. Therefore, µ-opioid receptors are a potential therapeutic target for PD symptom relief. This is reinforced by clinical observations showing that morphine alleviates tremor significantly [263]. However, the levels of α-syn protein in mice withdrawn from morphine for 48 h were significantly increased in the ventral striatum, namely the nucleus accumbens, and two weeks after treatment cessation the protein levels were still high [264]. According to Fan et al. (2019), morphine increases cell viability in PC12 cells after MPP+ exposure. MPP+ reduces cell viability and tyrosine hydroxylase expression, but this effect was reversed in the presence of morphine, which acts on the PI3K/Akt pathway [265]. Moreover, it was shown that morphine has neuroprotective effects against 6-OHDA-induced SH-SY5Y dopaminergic cell damage and neurodegeneration [266,267]. Morphine also contributes to Ca2+ homeostasis and to a decrease in ROS production in 6-OHDA-treated SH-SY5Y cells [266]. Morphine potentially changes the expression of PD-associated genes. Mantione (2014) reported that PARK2 was up-regulated and PINK1 was down-regulated [268]. These two genes are associated with juvenile PD. Mutations in PARK2 are the cause of nearly 50% of autosomal recessive juvenile parkinsonism [269]. PINK1 overexpression activates Parkin's E3 ubiquitin ligase activity and recruits Parkin, triggering selective autophagy [270,271]. Heroin Heroin is a semisynthetic product obtained by acetylation of morphine, which occurs as a natural product in opium, the dried latex of certain poppy species (e.g., Papaver somniferum L.). Heroin is a narcotic analgesic used in the treatment of severe pain. This substance crosses the blood-brain barrier within twenty seconds, and almost 70% of the dose reaches the brain after injection. Heroin is 2-3-fold more potent than morphine. It is difficult to detect in blood, since it is rapidly hydrolyzed to 6-monoacetylmorphine and slowly converted to morphine, the main active metabolite. Heroin is the most common opioid on the European Union drug market and, in 2017, it was second in the ranking of drugs responsible for emergency attendance in hospitals [170]. Like other opioids, this drug of abuse interacts with opioid receptors localized in the peripheral and central nervous system. Heroin is usually injected or smoked. The inhalation of the vapor resulting from heroin heated on aluminum foil ('chasing the dragon') has been associated with spongiform encephalopathy [272][273][274][275][276][277][278][279][280] and parkinsonism [47,48]. However, in a case report describing temporary parkinsonism in a patient who inhaled heroin vapor, a reversible deficiency of tetrahydrobiopterin was found to underlie the altered dopamine metabolism [48]. Another case report describes a patient who, twenty-four hours after snorting heroin, exhibited a generalized dyskinetic syndrome and impaired vision, as well as severe parkinsonian symptoms which worsened over the following two weeks [281].
More cases were reported, however, in all these cases the heroin was analyzed and contaminants were found, namely MPTP, 1-methyl-4-phenyl-4-propionoxypiperidine (MPPP) (1-methyl-4-phenyl-4-propionoxypiperidine) and 1-methyl-4-phenylpyridinium ion (MPP+) [14,282,283]. The relationship between these contaminants and PD has already been described. MPPP, a heroin analogue, is quickly converted in its metabolite MPP + , which promotes a syndrome indistinguishable from Parkinsonism, after cell uptake by the dopamine transporter of dopaminergic neurons inhibiting the activity of the mitochondrial nicotinamide adenine dinucleotide hydride (NADH)-Q dehydrogenase complex (EC 1.6.5.3) [284][285][286][287]. Moreover, MPTP also destroys dopamine-making cells in substantia nigra [283]. In addition to the contaminants present in heroin preparation, this drug is normally combined with other substances of abuse, namely mephedrone [282]. In this context, there are no molecular evidence that heroin is associated with PD onset and the case reports are unclear, as heroin is mostly consumed with contaminants. Future Issues: Novel Psychoactive Substances-Protective or Neurotoxic? The emergence of NPS over the last decade has been challenging drug policies [288]. An NPS is defined as "a new narcotic or psychotropic drug, in pure form or in preparation, that is not controlled by the United Nations drug conventions, but which may pose a public health threat comparable to the substances of abuse listed in these conventions" [289]. NPS have been emerging in the street market and on the Internet on a regular basis. Their properties change regularly, due to structural modification to circumvent legislation. This practice makes it almost impossible to characterize its toxicological profiles on an acceptable time scale, mostly due to the time-consuming experiments that must be held in animal models or human cells by standard methods [290]. NPS are associated with deaths and acute intoxications in Europe, as well as changes to current drug policy models [170]. NPS are classified in several groups, of which the most consumed are synthetic cannabinoids and synthetic cathinones [170]. Synthetic Cannabinoids Cannabinoid receptor agonists have been developed with therapeutic purposes. However, most of these substances were not approved by medicine regulatory agencies. These substances were hijacked and misused for recreational purposes [291]. Synthetic cannabinoids are a group of NPS with similar properties to ∆ 9 -THC that appeared in the drug market in 2004 as a herbal blend [288]. Since then, the composition of these cannabinoid containing herbal products has substantially changed to include more potent new psychoactive compounds and circumvent the law. Synthetic cannabinoids are divided into several classes which JWH series are among the oldest. These synthetic cannabinoids are representative of scientific research which the illegal market turned them into potent, dangerous recreational drugs that strongly activate the brain reward pathway and expose the user to a higher risk of developing addiction and other severe illnesses, including psychosis [292]. In the presence of JWH-018, the cells' growth rate is higher due to an enhanced glycolytic flux at expenses of a decrease in pentose phosphate pathway (PPP) [294]. PPP generates NADPH, which is critically important since its production provides reducing power to deal with the oxidative stress [295]. 
In the brain, reactive oxygen species come mainly from dopamine metabolism, mitochondrial dysfunction and neuroinflammation, and the resulting oxidative stress contributes to PD [296]. Consequently, a prolonged reduction of NADPH production might allow reactive oxygen species to proliferate, thus increasing the probability of developing PD in JWH-018 consumers. However, further studies must be performed to understand such implications in PD models. For synthetic cannabinoids, the affinity for CB1 or CB2 seems to be essential in understanding whether a compound might be neurotoxic or protective in PD, respectively.
Synthetic Cathinones
Synthetic cathinones appeared in drug markets in the mid-2000s [288]. They are derived from cathinone, which is the principal active ingredient in the leaves of the khat plant (Catha edulis). Since cathinones are structurally similar to amphetamine, it has been hypothesized that these substances also share the same mechanisms of action. Synthetic cathinones are divided into two major groups according to their effects. One group includes cathinones with amphetamine-related pharmacological activities, lacking the methylenedioxy ring, and the other group comprises those with effects similar to "ecstasy", bearing the methylenedioxy ring [297]. Mephedrone is a synthetic cathinone causing Parkinson-type symptomatology (in the form of spasms and 'wobbling') [282]. This powerful stimulant seems to work as a monoamine reuptake inhibitor, increasing serotonin, norepinephrine and dopamine levels at neuronal synapses, which leads to dangerous neurological complications, such as reversible encephalopathy [298][299][300]. Mephedrone is normally combined with other drugs of abuse, and the literature indicates that it significantly enhances the neurotoxicity to dopamine nerve endings of the striatum caused by methamphetamine, amphetamine and 3,4-methylenedioxymethamphetamine (MDMA), showing that the consumption of these mixtures may represent an even greater risk [301]. Moreover, a recent study conducted in human differentiated neuronal cells revealed that amphetamine does not generate significant amounts of reactive oxygen species compared to the negative control, whereas the cells showed significant levels of reactive oxygen species in the presence of 3,4-dimethylmethcathinone (3,4-DMMC), methcathinone and pentedrone [302], meaning that these new compounds could be more dangerous than the natural cathinone. In addition, the production of reactive oxygen species by cells in the presence of these compounds might be a risk factor for developing PD. Furthermore, neurodegenerative effects have also been observed: a recent study with methylenedioxypyrovalerone (MDPV) showed that long-term use increases the risk of impaired cognitive function and neurodegeneration in the prefrontal cortex or hippocampus [303]. In sum, synthetic cathinones appear to induce neurocognitive dysfunction and cytotoxicity, which are dependent on drug type, dose, frequency and time following exposure [304].
Synthesis of the Available Data on Illicit Drugs and Parkinson's Disease
The study of illicit substances' effects on PD combines data from basic, clinical and epidemiological research. Table 1 summarizes all the scientific data presented above, aiming to clarify whether drugs of abuse are neuroprotective or neurotoxic regarding PD. Phytocannabinoids are used legally or illegally by PD patients worldwide, even though their use has not been approved by the EMA and FDA for PD.
In general, the effects of phytocannabinoids on PD appear to be protective either by binding to the CB1 receptor or by CB2. As an example, the bind of ∆ 9 -THC to CB1 restores membrane potential; decrease ROS; increase CB1 protein level which will increase the amount of synaptic vesicles inhibition. Moreover, after a decrease of the mitochondrial potential membrane by MPP+ exposition, the activation of CB2 might be neuroprotective by inhibiting the apoptosis by Trka/NGF. The effects of amphetamine-type stimulants consumption in PD is fairly scientifically documented. The effects of amphetamines seem to be neurotoxic and the several molecular and cellular interaction of amphetamines-type substances seems to result in the aggregation or in the increment of ROS by dysregulated cellular Ca 2+ which activate nitric oxide synthetase or dopamine oxidation. Cocaine is also a stimulant, but studies do not seem to point a protective or neurotoxic effect of cocaine. However, there are few studies suggesting that cocaine might be neurotoxic promoting PD by increasing α-syn levels and consequently its aggregation. Morphine is vastly used in a medical context, mostly as an analgesic. It is suggested that morphine is neuroprotective in PD by increasing the brain dopamine levels, stabilizing Ca 2+ homeostasis, decreasing ROS production and altering the expression of PD-associates genes, which counteract the neurodegeneration triggers. Neuroprotector [55] induce the transcription of proteins involved in oxidative stress defense and mitochondrial biogenesis, promoting mitochondrial normal function [151] expresses mitochondria transcription factors (TFAM) and restore mitochondrial DNA levels leading to increased cytochrome c oxidase subunit 4 (COX4) [151] effective against glutamate-induced neurotoxicity restoring mitochondrial membrane potential which produces an anti-apoptotic effect. [154] cannabidiol effective against MPP+ neurotoxin by the activation of NGF/TRKA receptors and the increment in expression of axonal and synaptogenic proteins Neuroprotector [155] β-caryophyllene decreases oxidative/nitrosative stress, decrease pro-inflammatory cytokines release and to an inhibition of gliosis Neuroprotector [156,157] ∆ 9 -THCV acute administration changes glutamatergic transmission, and the chronic administration was shown to reduce the loss of tyrosine hydroxylase-positive neurons caused by 6-hydroxydopamine in the substantia nigra Stimulants Amphetamine and methamphetamine bind tightly to N-terminus of intrinsically unstructured α-syn adopting a folded conformation, increasing the likelihood of misfolding Neurotoxic [189,190] Amphetamine and methamphetamine involvement of tyrosine hydroxylase, dopamine transporter and vesicular monoamine transporter 2 in the decrease of dopamine levels Neurotoxic [203] methamphetamine increments α-syn levels induced by excessive heat Neurotoxic [191] causes post-translational modification of α-syn by nitration increase expression of nT39 α-syn. [192] decreases cytosine methylation in SNCA promoter region, and consequently upregulates α-syn in the in substantia nigra [199] activates nicotinic alpha-7 receptors, which increase intra-synaptosomal calcium, nitric oxide synthase and protein kinase C, leading to the production of unjustified nitric oxide and dopamine oxidation [200] induces higher levels of oxidative stress as a consequence of dopamine autoxidation and increasing excitotoxicity as a result of perturbations in energy metabolism. 
[205] low doses induce the expression of a different set of genes in lesioned denervated striatum, completely lacking dopamine (i) decreases basal ERK 1/2 and kinase b levels, involved in multiple cellular processes such as apoptosis; (ii) reduces the activity of protein phosphatase 2, a protein phosphatase implicated in ERK1/2 dephosphorylation, inhibiting it; and (iii) upregulates the pro-survival protein BCL-2, which plays an anti-apoptotic role Neuroprotector [196] Cocaine binds tightly to N-terminus of intrinsically unstructured α-syn adopting a folded conformation, increasing the likelihood of misfolding Neurotoxic [189] increments α-syn levels [225][226][227] Opioids Morphine elevates brain dopamine levels by stimulating µ opioid receptor, which inhibits GABA release and consequently enhances dopamine release Neuroprotector [261,262] reverses MPP+ toxicity through activating P13K/Akt pathway [265] stabilizes Ca 2+ homeostasis and decreases ROS production and cytochrome c in 6-OHDA-treated cells. [266,267] alters PD-associated genes expression, whereas PARK2 is up-regulated and PINK1 is down-regulated. [268] Synthetic cannabinoid JWH-018 enhances glycolytic flux at expenses of a decrease in pentose phosphate pathway Neurotoxic [294] JWH-133 suppresses blood-brain barrier damage, astroglial myeloperoxidase expression, infiltration of peripheral immune cells and production of inducible nitric oxide synthase, proinflammatory cytokines and chemokines by activated microglia Neuroprotector [127] Synthetic cathinone mephedrone monoamine reuptake inhibitor, increasing serotonin, norepinephrine and dopamine levels at neuronal synapses Neurotoxic [298][299][300] 3,4-DMMC, methcathinone and pentedrone Increases the levels of reactive oxygen species Neurotoxic [302] Conclusions If drugs of abuse are neuroprotective or neurotoxic regarding PD onset is highly dependents on the type of substance and is still a matter of debate. However, the evidence obtained by different research groups have been pointing to similar effects for the same class of illicit substances. Concerning cannabinoids, evidence from basic and clinical research along with epidemiological studies suggest that phytocannabinoids have the potential to prevent and alleviate PD. Notwithstanding, some results are inconsistent. Discrepancies are partially explained by differences in research methodologies and by the translation of data gathered in animal models to humans. Actually, animal models do not recapitulate the timeline of PD pathology in humans. The effects of amphetamine-type stimulants consumption in PD seem to be better scientifically documented from basic to clinical research and epidemiological evidence. In general, these studies associate amphetamine stimulants consumption to PD onset. Regarding opioids, the basic, clinical and epidemiological studies suggest that they are neuroprotective in PD. Opioids regulate levels of dopamine, calcium and ROS, which counteract the neurodegeneration triggers. Although heroin was associated with PD clinical observations, the scientific studies do not support this association, and the evidence point out heroin cutting agents as the cause of PD. The impact of NPS consumption in PD is yet to be revealed since it is a recent trend. However, the few scientific results in the literature point to neurotoxicity and the hypothesis of these substances being implicated in PD onset cannot be discarded. 
To sum up, phytocannabinoids and morphine seem to be neuroprotective, while amphetamine-type stimulants seem to be neurotoxic. In addition, there are not enough data to support the involvement of cocaine and heroin in PD. Overall, this review gathers current knowledge on the relationship between illicit drugs and PD, which is highly relevant for contemporary society, as illicit drug legalization is under discussion in many countries worldwide and the health consequences must be weighed.
Evidence on the effectiveness of policies promoting price transparency - A systematic review Highlights • Research on policies promoting price transparency is scarce and (partially) inconclusive.• Public disclosure of medicine prices could be effective in reducing prices.• More robust evidence is needed to confirm the effects of these policies.• Future research should also focus on unintended effects of these policies. Introduction In recent years, improved price transparency of pharmaceuticals has emerged as an important yet highly debated approach to manage medicine prices. This approach is believed to contribute to expanded access to medicines through the reduction of medicine prices [1]. In both the 2019 Fair Pricing Forum and the 72 nd World Health Assembly (WHA) the need for reliable information on medicine prices was emphasized, leading to the WHA's adoption of a resolution on advancing the transparency of markets for pharmaceuticals (WHA 72.8) [2,3]. The importance of promoting price transparency has also been reflected in various initiatives and regulations aiming to enhance transparent pricing. One such example is the Medicines Transparency Alliance (MeTA) initiative by the World Health Organization (WHO), which sought to develop national-level multistakeholder platforms to share data on the selection, procurement, quality, availability, pricing, promotion and use of medicines [4,5,6]. Another example is the European Union (EU) Transparency directive which requires the publication of the list prices of all reimbursable medicines in Europe [7]. The underlying rationale for promoting price transparency is that it may improve economic efficiency, as conventional economic theory indicates; assist policymakers and researchers through reliable price information; empower buyers to negotiate more strategically; increase accountability of manufacturers and governments for prices; and promote cost-effective decision-making by prescribers and patients [8,9]. Conversely, a lack of price transparency may give rise to corruption as confidential agreements may compromise accountability, especially in healthcare systems with weak governance [8,10]. These theories cut across four levels where transparency may occur: 1) the reporting of R&D and production costs, 2) the disclosure of net transaction prices to stakeholders as an input to price benchmarking, 3) the disclosure and control of prices along the supply chain, and 4) the communication of prices to prescribers or patients [8]. At the same time, there are concerns that improving transparency may lead to an increase in prices for lower-income countries, as manufacturers might abandon differential pricing schemes and apply uniform pricing for all countries to refrain from the appearance of unfair pricing [11]. Other harmful effects suggested are discouraged entry in poorer markets, reduced competition and lessened incentives for investments [11,12]. Despite the different claims that have been made, the impact of transparency measures on medicine accessibility remains largely theoretical thus far. It is, however, essential that governments and policy-makers implement measures that have proven to be effective. The 2015 WHO Guideline on Country Pharmaceutical Pricing Policies, which aimed to assist countries in evidence-based policy-making, did not include guidance on policies promoting price transparency [13]. 
The update of the 2020 Guideline therefore called for identification and assessment of the available evidence on price transparency measures, among nine other pricing policies [14]. Hence, the purpose of this systematic review is to determine whether policies promoting price transparency are effective in managing the prices of pharmaceutical products, with consideration to their impact on the volume, availability and affordability of these products. This review also aims to elucidate what contextual factors and implementation strategies may influence the effects of such policies. Methods This systematic review was undertaken according to the principles of systematic reviewing embodied in the Cochrane Handbook and guidance document published by the Centre for Reviews and Dissemination (CRD) [15,16]. The methodology and detailed search strategies have been described in detail previously [17,18]. As part of a wider review on ten pharmaceutical pricing policies, this paper only addresses policies promoting price transparency as a pricing approach. Search strategy and selection criteria An extensive literature search was performed between September 5 and October 10, 2019, for relevant articles published from 2004 to the search date in a large number of databases including but not limited to Ovid MEDLINE (Ovid), Embase (Ovid), Social Science Citation Index, EconLit, and the NHS Economic Evaluations Database (NHS EED). A variety of grey literature sources were also searched. The main structure of the search strategy comprised concepts pertaining to 1) non-specific pharmaceutical pricing policies (e.g. terminology related to pricing/ prices combined with terms for medicines) or to 2) pharmaceuticals and one of ten specific pricing policies, among which were policies promoting price transparency (e.g. terminology related to pricing/prices combined with terms for transparency, including related terms such as disclosure, rebates, sharing, and accountability). Supplementary search approaches included reference-list checking and contacting experts. Selection criteria This systematic review only included studies that used robust experimental or observational study designs comparing policies promoting price transparency (see Fig. 1) to at least one comparator or counterfactual [15]. Study designs including randomized trials and non-randomized or quasi-experimental studies (including interrupted time-series (ITS), repeated measures (RM), panel data analyses, and controlled before-after studies) were considered strong designs. Single policies, or combinations of policies, were considered eligible. Studies reporting at least one of the primary outcomes of interest, i.e. price (or expenditure as a proxy), volume, availability or affordability, were eligible for inclusion. Medicine prices reported at all levels of the supply chain (e.g. ex-factory price, wholesale price, retail price, or patient price) were considered eligible. Outcomes in both public, private and mixed public-private settings were of interest. Study selection A single researcher assessed all titles and abstracts identified from the database searches and removed the obviously irrelevant records based on titles and abstracts. Two reviewers independently screened the titles and abstracts of potentially eligible records, with disagreements adjudicated by a third reviewer. 
The full texts of studies identified as potentially relevant were then subject to an eligibility check by two members of the review team independently (TB and CL or IRJ and HAvdH) before data extraction. Disagreements about study selection were resolved by discussion, and if consensus could not be reached, a third reviewer (DT or AKM-T) was consulted. Data extraction and quality assessment Data from included studies was extracted by one member of the review team (IRJ or LT) using a standardized data extraction form, including information on study design, setting and subjects, interventions including implementation strategies, outcomes, and results including contextual factors. Extracted data was verified by a second reviewer (HAvdH or DT) for accuracy. The risk of bias in each included study was assessed by the extracting reviewer and checked by a second reviewer. Any disagreements were resolved by discussion until a consensus was reached. The assessment was done according to the EPOC guidelines, in which bias assessment criteria were adapted to study design [19]. Randomized and non-randomized trials and controlled before-after studies were assessed on nine criteria; ITS and RM studies were assessed on 8 criteria; and a set of four assessment criteria applied to all other study types. An explanation of the bias criteria is presented in Appendix 1. The quality of the evidence was assessed by use of the GRADE methodology [20]. GRADE evidence levels were determined by considering the body of evidence available for each (sub-)intervention. Domains of scoring were the risk of bias, inconsistency of results, indirectness of evidence, imprecision of results, and 'other'. Studies were upgraded in the 'other' domain if strong observational study designs were used (ITS, RM, panel data/regression analysis), according to precedent in literature [21]. The resultant certainty of the evidence was expressed as high, moderate, low or very low. Data analysis Substantial expected differences in the characteristics and contexts of included studies meant we did not aim to undertake a meta-analysis. Instead, we provided a narrative summary describing the quality of the studies, the relationship between interventions and patterns discerned in the data. Results Electronic database and grey literature searches identified 43,693 records for all ten pricing policies combined. The review of relevant reference lists and other sources yielded a further 2,345 records. After removal of duplicates, 32,011 articles were screened by title and abstract, resulting in 1,000 potential articles to be included in the wider review. Nine of these articles were specific to policies promoting price transparency at first sight. After full-text screening, three scientific articles covering two policy measures were included in this section of the systematic review (Fig. 2). Specifically, two articles (Moodley 2019a, Moodley 2019b) are part of the same study, one addressing originator pharmaceuticals while the other addresses both originator and generic pharmaceuticals [22,23]. These references are considered to be one study in this review, according to Cochrane guidelines. Six studies were excluded, because of a lack of a historical control [24][25][26], primary outcomes of interest were not reported [27], theoretical effects were studied [28] and one study was considered off-topic after reading the full text [29]. 
Both studies identified had an interrupted time series design, one examining data from the United Kingdom [30] and one being set in the private sector in South Africa [22,23] (Table 1). The results of the risk of bias assessment are presented in Table 2. The study by Langley et al. was associated with a low risk of bias across all domains, and overall certainty of evidence was assessed to be moderate. The study by Moodley et al. was associated with an unclear risk of bias across three of eight domains. None of these potential biases were considered to be of major influence on the results, and the overall risk of bias was thus considered to be low in this study. In detail, the first reference [22] was assessed to have an unclear risk of bias across three domains due to the source of data not being described in detail (intervention to affect data collection), possible bias due to missing data (incomplete outcome data) and the outcome measure not being described in detail (other bias). The second reference [23] was similar to the first, except that the analysis method was not reported. However, the two references are by the same authors, using the same dataset and methodology. As the analysis is appropriately reported in one of the studies (low risk of bias) but with less detail in the other (unclear risk of bias), it is reasonable to assume both studies are of equal quality. Overall, the risk of bias is considered to be low for the two publications collectively. The certainty of evidence was assessed to be low for measures promoting public price disclosure due to serious indirectness. Detailed results of the overall quality assessment (GRADE) are provided in Appendix 3.
Communicating prices to prescribers or patients
Langley et al. examined the impact of cost-feedback to prescribers in a hospital setting [30]. Clinicians were provided with extra information on the costs of drugs during prescribing, with the simple aim of informing them of the costs of their decision without intending to direct their prescribing behavior. The intervention was implemented in November 2014 in the hospital's electronic prescribing system, which permitted the costs of the medicine of choice to be added to the display that the prescribing clinician sees immediately prior to selecting the drug. The study reported expenditure outcomes for antibiotics and inhaled corticosteroids (Table 3). For antibiotics, a decrease of GBP 3.75 (p=0.008) in weekly costs per patient was observed immediately after implementation of the intervention, whereas the trend increased slightly, by GBP 0.10 (p=0.015), over a twelve-month period. For inhaled corticosteroids, a small change in trend was seen in weekly costs per patient of GBP -0.03 (p=0.11), but no other changes were observed. The authors were not able to explain the contradictory results between the two drug classes. There was no evidence on the impact of this intervention subtype for the outcomes volume, availability or affordability, because these outcomes were not included in the study.
Disclosure and control of prices along the supply chain
Moodley et al. examined the impact of a national measure that introduced a transparent pricing system in the private market, in the context of the South African Single Exit Price (SEP) policy [22,23]. In an attempt to reduce medicine costs, the 2004 SEP ensures that there is a fixed price for all private sector prescription medicines sold by the manufacturers to distributors and dispensers in the country.
The SEP must be publicly disclosed and is composed of the weighted average of the sales price, the logistics fee and value-added taxes, and determined by the manufacturers themselves. Simultaneously, all bonuses, discounts and sampling of medicines were removed. This was complemented with a regulated maximum percentage annual increase and regulation of dispensing fees at retail level. The disclosed prices are made available on the South African Medicine Price Registry website. The study included 50 medicines within three samples (a 'global core' for international comparison, a 'regional core' for items important in the region, and a 'supplementary list' for products of local importance) as per the WHO/HAI (i.e. Health Action International) methodology. It reports on the prices of medicines paid for by the patient, obtained from dispensing files and claims data. Medicine prices in retail pharmacies across all three samples were reduced immediately following the SEP policy, for both originator and generic medicines (Table 4). Mean reduction was greater for generics. Global core percentage price reduction ranged from 2.45% to 9.12% for originator medicines and 18.50% to 91.52% for generics; regional core reduction was 1.77% to 42.17% for originators and -0.70% to 78.03% for generics; supplementary list price reduction was 11.68% to 55.86% for originators and 9.78% to 78.49% for generics. A (significant) negative change in trend implying continued benefit on patient prices was observed in 26 out of 50 originator medicines and 23 out of 73 generic medicines. The impact of this intervention subtype on the outcomes volume, availability or affordability was not studied. Discussion Following extensive searches, we found only two studies assessing an intervention promoting price transparency in a manner sufficiently robust for inclusion in this review. It is worth noting that the SEP, while introducing transparency in the private market, also included aspects of price control other than price transparency. With that, evidence on measures that exclusively promote price transparency is even more limited. Nevertheless, the results show that the majority of patient prices of both originator and generic medicines were reduced following a national measure that introduced transparency on the level of the manufacturer. Not only did this policy achieve the intended price reduction, it has also improved accountability of manufacturers through mandatory price disclosure. Findings on the impact of cost-feedback approaches to prescribers are considered inconclusive, due to inconsistent results for different therapeutics. Information about the effects on volume, availability or affordability is currently missing for all transparency initiatives. The 2020 WHO Guideline on Country Pharmaceutical Pricing Policies suggests that countries improve the transparency of pricing and prices, informed by the limited research evidence and additional qualitative information that was considered [14]. These considerations include the notion that transparent pricing or prices could serve multiple purposes, including increased citizen engagement and facilitating other pricing policies such as external reference pricing. The 2015 WHO Guideline on Country Pharmaceutical Pricing Policies did not yet include policies promoting price transparency in its scope [13]. 
Despite considerable attention to price transparency measures on the international stage since then, this is not reflected in the amount of robust evidence currently available. Similarly, a recent scoping review [9] on countries' price transparency initiatives, with a somewhat broader setup that included other study designs as well, confirms that there is limited robust evidence on price transparency policies. This scoping review identified 12 studies, none of which would have been considered eligible for our systematic review. A WHO Technical Report on the pricing of cancer medicines [8] again confirmed that the amount of strong evidence is limited. The small number of studies reporting on the effectiveness of price transparency measures may be due to the complexity inherent in performing this research [31]. At the same time, price transparency measures are currently not common practice, which further contributes to the lack of studies of their real-world effectiveness. The WHA's resolution to advance the transparency of markets for pharmaceuticals was considered controversial [3,32]. Although the large majority of WHO Member States considered price transparency measures to be key in achieving better access to price data and universal health coverage (UHC), the resolution was strongly contested by several countries. These countries argued that the assessment of potential negative implications of price transparency measures had been insufficient. Specifically, concerns were expressed about the impact of the resolution on developing countries, as improved transparency may threaten differential pricing arrangements [32]. The controversy that the resolution triggered reflects the paradoxical situation of price transparency measures. Without compelling evidence on the impacts of price transparency measures, countries may be cautious about conforming to the resolution and implementing transparency initiatives. However, without the implementation of novel transparency measures to inform new research, the opportunities for high-quality research on the effectiveness of transparency interventions are limited. This 'Catch-22' appears to be borne out in the volume of literature identified in this systematic review. Despite this paradox, the resolution may inspire novel policies promoting price transparency to be implemented, which may present new opportunities for research. The strengths of our systematic review include the use of a rigorous methodological approach, following a pre-defined protocol [17]. We used a sensitive search strategy containing a wide range of terms, designed to retrieve both records that referred to non-specific pharmaceutical pricing policies as well as to price transparency measures specifically. Furthermore, we performed an extensive database search and searched the grey literature, as well as used supplementary search approaches such as checking relevant reference lists and contacting experts. This search strategy reduced the risk of missing potentially relevant studies. Risk of bias and strength of the evidence base were assessed using a validated guideline [19,20] and were determined in duplicate to minimize bias and error.
[Table 3 caption, displaced in extraction: Summary of findings of cost-feedback approaches to prescribers; comparison: cost-feedback to prescribers versus no cost-feedback approach; medicines: antibiotics and inhaled corticosteroids.]
Our study has several limitations. As noted before, the search resources included grey literature sources.
Although important to include such resources, many of the databases have very limited search and exporting functionalities. For those resources, we had to use a more limited range of search terms. This pragmatic search approach is a limitation of the search methods, but should be seen within the wide range of search approaches described above. Another limitation might arise from the heterogeneity of price transparency measures, which may often be interwoven with other pricing policies or which may not be described as a transparency measure by the authors of the study. To minimize this limitation, a standard systematic review approach of using a highly sensitive search approach was used with a broad definition of price transparency policies and search terms, which would identify all types of transparency measures were used. Nevertheless, there is always the chance of missing relevant studies. However, we note that experts in the field were consulted to mitigate this risk. Additionally, the scoping review on transparency measures mentioned earlier, did not identify any studies that we had missed [9]. Finally, due to the nature of policy research, no randomized controlled trials were available to inform on the effectiveness of price transparency measures. Therefore, the certainty of the evidence is lessened due to the use of strong yet quasi-experimental study designs. The evidence identified on price transparency measures may be limited in applicability, despite its broad relevance in both high-and low-income economies [33,34,35]. The study by Langley et al. [30] focused on two groups of therapeutics only, one of which being antibiotics. The prescription of antibiotics in a high-income setting is expected to be highly regulated and guided by antibiotic susceptibility, so results may not be applicable to other therapeutics. As price transparency initiatives are believed to be promising in a broad range of medicine groups including innovative, anticancer, and other high-priced medicines [35,36], future research should examine the effects of transparency measures in other medicine classes before extrapolating these results. Furthermore, this study was set in a high-income economy that generally requires no co-payment by patients. While these results may be applicable to similar settings, generalizability to healthcare systems in which patients' ability to pay could influence physician's prescribing behavior is challenging. Similarly, the SEP introduced uniformity in the private market through a transparent pricing system with fixed prices, with the final goal of reducing pharmaceutical expenditures. These results may inform design of policies with similar objectives, but do not immediately apply to other price transparency challenges such as the disclosure of R&D costs. Finally, the overall evidence was limited to only two studies and may not reflect the broad scope that price transparency measures could include. The generalizability of our results to other healthcare systems should therefore be viewed with consideration to context, until such a time when more evidence has been produced. Despite these limitations, this first systematic review is a first step in informing national and regional governments and other policy-makers such as hospital boards or insurance providers on effective policies for managing the prices of pharmaceuticals using transparency measures. 
Although opportunities for research on transparency measures seem to be limited due to a 'Catch-22' dilemma, it is crucial that, when such opportunities do present themselves, the efforts of policy-makers and researchers are coordinated. This will help to ensure the collection of data for adequate monitoring of these policies. The conduct of small pilots may help to increase opportunities for evidence generation on the one hand and overcome the reluctance of policy-makers on the other. These future studies should focus on all levels at which transparency measures may occur, and not only on medicine prices or expenditures but likewise on outcomes such as the volume, availability and affordability of medicines. There should also be a particular focus on unintended and potentially harmful effects of these policies, in both high- and low-income settings. Additionally, the limited amount of evidence currently available is insufficient to elucidate what contextual factors and implementation strategies may influence the effects of such policies, and this should be the object of further study.
Conclusion
In conclusion, the lack of quantitative and comparative evidence assessing the impact of policies promoting price transparency is a clear call for further research. Collaborative pilots involving both national governments and researchers could help to align their interests and overcome the current inertia in evidence development. Additional evidence is needed to confirm the impact of a wide range of transparency measures on the management of medicine prices in countries all over the world. The evidence that is currently available, although from a single study, indicates that a national measure introducing price transparency along the supply chain may be effective in managing medicine prices.
[Study selection flow chart caption, displaced in extraction: The numbers of articles identified through database searching and screening by title and abstract, shown in grey, apply to the overall search; as per protocol, the database search included search terms for all ten specific pricing policies, among which policies promoting price transparency was one. The lower part of the flow chart, shown in white, is specific to the selection of studies on policies promoting price transparency. WoS = Web of Science. *Two articles are part of the same study but were published separately; these references are considered one study.]
Role of the funding source
This systematic review was commissioned and funded by the World Health Organization [HQ/EMP/2019/002, 2019] under a grant from the United Kingdom Department for International Development (DFID). The review was part of the process for developing the 2020 WHO Guideline on Country Pharmaceutical Pricing Policies. The WHO secretariat and its advisors provided technical support for formulating the review protocol, performing specific searches on government websites, and translating several non-English publications. WHO, its advisors and DFID had no role in data analysis or interpretation. The views and opinions expressed in this article are those of the authors and do not necessarily reflect the official policy or position of the WHO or DFID.
Declaration of Competing Interest
This systematic review was commissioned and funded by the World Health Organization.
A Constraint Method for Supersaturation in a Variational Data Assimilation System
The reproducibility of precipitation in the early stages of forecasts, often called a spindown or spinup problem, has been a significant issue in numerical weather prediction. This problem is caused by moisture imbalance in the analysis data, and in the case of the Japan Meteorological Agency's (JMA's) mesoscale data assimilation system, JNoVA, we found that the imbalance stems from the existence of unrealistic supersaturated states in the minimal solution of the cost function in JNoVA. Based on the theory of constrained optimization problems, we implemented an exterior penalty function method for the mixing ratio within JNoVA to suppress unrealistic supersaturated states. The advantage of this method is the simplicity of its theory and implementation. The results of twin data assimilation cycle experiments conducted for the heavy rain event of July 2018 over Japan show that—with the new method—unrealistic supersaturated states are reduced successfully, negative temperature bias to the observations is alleviated, and a sharper distribution of the mixing ratio is obtained. These changes help to initiate the development of convection at the proper location and improve the fractions skill score (FSS) of precipitation in the early stages of the forecast. From these results, we conclude that the initial shock caused by moisture imbalance is mitigated by implementing the penalty function method, and the new moisture balance has a positive impact on the reproducibility of precipitation in the early stages of forecasts.
Introduction
The reproducibility of precipitation in the early stages of forecasts, especially for the spindown problem (spurious excessive precipitation), has been an issue in numerical weather prediction since the resolution of forecast models reached the convective scale, i.e., 1-10 km. Weak reproducibility is thought to be caused by moisture imbalance among the variables in the initial data (Hólm et al. 2002; Dixon et al. 2009). The imbalance in the initial data results from incompleteness of the assimilation procedure, which infers the plausible model state from observations and previous forecasts. This issue is analogous to that of spurious gravity waves caused by the imbalance between the mass and flow fields in the initial data, the resolution of which is relatively large, i.e., 20-100 km (Daley 1991). For the mass-wind imbalance issue, by introducing a process called "initialization" before the forecast to damp the spurious gravity waves (Daley 1991), and by incorporating this initialization and the nonlinear mass balance related to hydrostatic equilibrium into the assimilation procedure (Courtier and Talagrand 1990; Gauthier and Thépaut 2001), this imbalance issue has almost been resolved. However, the moisture imbalance issue remains unresolved. It is thought that the difficulty with moisture imbalance is due to the following: 1) proper moisture processes in the forecast model at the current convective resolution are unknown, 2) simplified (or absent) moisture processes in the assimilation step do not have sufficient accuracy, and 3) an assimilation window of several hours is too long, considering the nonlinearity of the moisture processes (Stiller and Ballard 2009). In addition, the inhomogeneity and local variability of the moisture field make the problem difficult.
The distribution of the relative humidity-which has upper and lower bounds, as well as a maximum near the upper bound in the lower and middle troposphere-requires a non-Gaussian error distribution of the moisture variable in the assimilation step. The assimilation procedure is usually constructed within the framework of a Gaussian error distribution, and while a background error covariance based on a Gaussian distribution is suitable for describing the smooth large-scale balance, which is well known and well modeled, it sometimes fails to describe the convective-scale balance. The major method for handling the moisture imbalance problem is to construct proper control variables (parameters). This involves selecting valid moisture variables that are convenient for considering the balance with other variables at the convective scale (depending on the numerical models or the atmospheric states) and/or applying nonlinear transformations to obtain a Gaussian-like error distribution (Hólm et al. 2002;Gustafsson et al. 2018;Ingleby et al. 2013). To improve the quality of the analysis through effective use of satellite data, introducing the condensate water-in addition to water vapor-as a control variable in the moisture processes of the assimilation procedure has been attempted, and the effect has been discussed by Gong and Hólm (2011). Nudging techniques and incremental analysis update (IAU) are also useful tools for taking in moisture-related information (Jones and Macpherson 1997;Golding et al. 2014;Bloom et al. 1996). Recent progress and challenges in the assimilation of atmospheric water, including these methods, have been comprehensively reviewed by Bannister et al. (2019). In the JMA mesoscale data analysis system called JNoVA (Honda et al. 2005), the existence of unrealistic supersaturated and negative moisture states in the minimal solution of the cost function (which differs from the analysis data in that it has lower resolution and limited atmospheric variables) is the main cause of moisture imbalance. Supersaturated states are not totally unrealistic in the atmosphere, but the supersaturation rarely exceeds 1% (i.e., the relative humidity is less than 101%) because of the existence of aerosols or condensation nuclei (Rogers and Yau 1989). In the minimal solution given by JNoVA, however, there are regions where the supersaturation sometimes exceeds 20% (see section 3b). Such states are considered to be unnatural, but they are inevitable in a general variational assimilation framework because of the features of the moisture distribution and the simplifications in the assimilation procedure mentioned above. Even in the simplest one-dimensional optimal interpolation (OI) data assimilation system for the relative humidity, supersaturated states can occur due to the correlation of the background error covariance matrix, if there is no way to take into account the local variability of the moisture-related variables. In addition, supersaturated states are not allowed in the cloud physics scheme of the JNoVA forecast model-supersaturated moisture is converted into rain or cloud water and latent heat is released at each time step of the evolution (saturation adjustment). In practice, from the minimal solution, the analysis data are obtained by increasing the resolution and supplementing with unanalyzed atmospheric variables from the first guess. 
Nonphysical supersaturated and negative moisture states in the minimal solution are inherited by the analysis data, but they are removed by preprocessing when the initial data are made from the analysis data. This preprocessing is intended to avoid forecast crashes due to numerical instability caused by the excessive moisture. However, preprocessing cannot take moisture balance into account, and it sometimes causes another moisture imbalance that results in a spinup problem (underestimation of precipitation in the early stages of the forecast) rather than a spindown problem. Thus, in this paper we aim to obtain a minimal solution that has a more realistic moisture balance and that suits our forecast model by better suppressing the unrealistic moisture states in the minimization process of the variational method. For this purpose, we propose implementing an exterior penalty function method for the constrained optimization problem. This method is similar to that discussed in Courtier and Talagrand (1990) and Gauthier and Thépaut (2001), which uses a weak constraint in order to reduce the mass/wind imbalance, but this penalty function method treats the boundedness of the moisture-related variable directly as a strict constraint. The rest of this paper is organized as follows. In section 2, the minimization process in the assimilation procedure is reformulated as a constrained optimization problem, and the exterior penalty function method is introduced. The results of twin experiments to examine this procedure are presented in section 3, and our conclusions are given in section 4.
Penalty function method
The analysis is performed statistically by using an assimilation procedure based on the background and the observation error covariances. Thus, it is necessary to provide proper error covariances, which can describe the appropriate moisture balance at the convective scale (with a proper choice of control variables), in the assimilation procedure. However, this is extremely difficult, as pointed out by Bannister et al. (2019). In JNoVA the main cause of moisture imbalance in the analysis data is the existence of supersaturated and negative moisture states in the minimal solution, so a method that can avoid generating such unrealistic moisture states in the assimilation procedure without breaking the moisture balance or entering into the problem of reconfiguring the error covariances is very attractive. This can be done by confining the minimal solution to the subset of the control variable space where the relative humidity is in the range from 0% to 100% in the minimization process within the variational assimilation procedure. This kind of problem can be formulated as a constrained optimization problem:

    minimize f(x) subject to g(x) ≤ 0, x ∈ X,   (1)

where f(x) is the objective function (or cost function), g(x) is the (nonlinear) constraint function, and X is the control space. The region F = {x ∈ X | g(x) ≤ 0} is called the feasible region, and the region {x ∈ X | g(x) = 0} is the effective region. To deal with this problem mathematically, we have to check a suitable constraint qualification. This is very hard for our present problem, so we assume that the functions f and g have desirable properties such as continuity, differentiability, and convexity in order to proceed. Numerous algorithms have been developed to solve problem (1); see Bertsekas (1982) and Bazaraa et al. (2006). Among them, the exterior penalty function method is easy to implement within the existing minimization process in the variational assimilation procedure.
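As a toy illustration of the exterior penalty idea before it is formalized in the next paragraph, the following Python sketch (not part of JNoVA; the one-dimensional objective and constraint are invented for illustration) compares the exact penalty with exponent α = 1 against the quadratic penalty with α = 2: with α = 1 a finite penalty weight already reproduces the constrained minimizer, whereas with α = 2 the minimizer only approaches it as the weight grows.

```python
import numpy as np

# Toy problem (illustrative only, not from the paper):
# minimize f(x) = (x - 2)^2  subject to  g(x) = x - 1 <= 0.
# Unconstrained minimizer: x = 2.  Constrained minimizer: x = 1.
f = lambda x: (x - 2.0) ** 2
g = lambda x: x - 1.0

def exterior_penalty(x, lam, alpha):
    """Auxiliary function f(x) + lam * max(0, g(x))**alpha."""
    return f(x) + lam * np.maximum(0.0, g(x)) ** alpha

xs = np.linspace(-1.0, 4.0, 100001)   # a dense grid stands in for a minimizer
for alpha in (1.0, 2.0):
    for lam in (1.0, 10.0, 100.0):
        x_star = xs[np.argmin(exterior_penalty(xs, lam, alpha))]
        print(f"alpha={alpha:.0f}  lam={lam:6.1f}  argmin ~ {x_star:.3f}")

# With alpha = 1 the penalty is exact: once lam exceeds |f'(1)| = 2, the argmin
# sits at the constrained solution x = 1 (lam = 1 still gives ~1.5).
# With alpha = 2 the argmin only approaches 1 as lam grows
# (~1.5 for lam = 1, ~1.09 for lam = 10, ~1.01 for lam = 100).
```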
In this method, the original constrained problem (1) is converted to an unconstrained problem by introducing an auxiliary function:

    minimize Φ(x), x ∈ X,   (2)

    Φ(x) = f(x) + λ [max{0, g(x)}]^α,   (3)

where λ > 0 and α ≥ 1 are the penalty parameters. The second term of Eq. (3) is sometimes called the "penalty term" or "penalty function." If there is more than one constraint, one additional penalty term is added for each constraint. The concept of the exterior penalty function method is to add some penalty value if x is outside the feasible region F but not if x is inside F. The penalty parameter α controls the effect of putting the solution back into the feasible region, depending on the degree of violation max{0, g(x)}. It is popular to set α = 2 to maintain continuity at the effective region, but in this case we have to solve a sequential minimization problem repeatedly while increasing λ to infinity, since the effect of the penalty term reduces quadratically as the state approaches the feasible region. A penalty function that yields the same solution as the original problem without altering the value of the parameter λ is called an exact penalty function. When α = 1, we can get an exact penalty function by choosing a sufficiently large value for λ. In practice, a suitable value for λ depends on the gradients of f and g, and it is known that too large a value of λ sometimes destroys a minimization process based on gradient methods. In our problem, there is arbitrariness in constructing the constraint function g on the moisture-related variable. Here we set

    g_1i(x) = qv_i − qvs_i,   g_2i(x) = −qv_i,   (4)

where qv_i and qvs_i are the water vapor mixing ratio (QV) and the saturation mixing ratio (QVS), respectively, at the ith grid point. Here, g_1i ≤ 0 is the constraint for supersaturation at the ith grid point, and g_2i ≤ 0 is that for the negative mixing ratio. Since the mixing ratio has large values in the lower troposphere, this construction is intended for the effective modification of the moisture field in the lower troposphere, which is closely related to the initiation and development of deep convection and the generation of precipitation.
Twin experiments
a. System
We employed an experimental system based on the JMA mesoscale forecast system to examine the effects of the penalty function method. This system is composed of a forecast component that employs a nonhydrostatic model called the mesoscale model (MSM), which is based on the Japan Meteorological Agency Nonhydrostatic Model (JMA-NHM: Saito et al. 2006, 2007; Saito 2012), and an assimilation component that contains the four-dimensional variational data assimilation system called JNoVA (Honda et al. 2005; Gustafsson et al. 2018), which was operational until March 2020. The MSM has 817 × 661 horizontal grid points with 5-km resolution and 76 vertical layers extending up to 21.8 km in height, and it covers Japan and the surrounding area. The JNoVA component adopts an incremental approach (Courtier et al. 1994); its inner loop model has 271 × 221 horizontal grid points with 15-km resolution and 38 vertical layers covering the same region as the MSM. The assimilation window is 3 h, and the control variables are the eastward wind, northward wind, surface pressure, potential temperature, and a pseudo relative humidity defined by scaling the mixing ratio by the background saturation mixing ratio. Background error cross correlations between the moisture variable and the other variables are not considered.
Hydrometeors are not included among the control variables, and perturbations of the hydrometeors are ignored in the inner forward model. The list of observations used in JNoVA is shown in Table 1. We have many observations related to the temperature field but not so many observations related to the moisture field, which were mainly obtained by radiosonde observations. The inner forward model of JNoVA is a simplified nonlinear JMA-NHM with a large-scale condensation scheme and the Kain-Fritsch (KF) convective scheme (the KF scheme is not considered in the adjoint step). The outer model of JNoVA, which has 5-km resolution, is the full JMA-NHM (with a cloud-physics scheme based on a 6-class 3-ice bulk scheme and a modified KF scheme), which is the same as in the MSM. Because of differences in the moist physics, the forecasts of the inner and outer models differ from each other, but there is not as much difference between the two as the difference caused by the existence of supersaturated states. This has been checked by conducting experiments with and without preprocessing, as discussed in section 3c. For this reason, we do not consider further the effects that result from differences in the moist physics. Supersaturated states in the analysis data are converted into just 100% relative humidity states, and negative moisture states are converted into zero-moisture states by preprocessing before running the outer model to generate the initial conditions for the MSM. Preprocessing is employed to avoid forecast crashes due to numerical instability. However, preprocessing merely cuts (or adds) extra (or missing) moisture, so it may cause another moisture imbalance. The implementation of the exterior penalty function method in the variational assimilation system is very simple, and the auxiliary function, or the cost function in the unconstrained form, is given in JNoVA by

    J(x_0) = (1/2)(x_0 − x_0^b)^T B^{-1} (x_0 − x_0^b) + (1/2) Σ_n [H_n(M_{0→n}(x_0)) − y_n^o]^T R^{-1} [H_n(M_{0→n}(x_0)) − y_n^o] + J_dfi + J_qv,   (5)

where

    J_qv = λ Σ_i ( [max{0, g_1i(x_0)}]^α + [max{0, g_2i(x_0)}]^α ).   (6)

Here, x_0^b is the background state at the initial time; y_n^o is the observation at the nth time step; B and R are the forecast and observation error covariance matrices, respectively; H_n is the observation operator; M_{0→n} is the time development operator; and J_dfi is the penalty term for the incremental digital filter initialization (DFI) used to control spurious gravity waves. Actually, because JNoVA adopts an incremental approach, the equations above need slight changes, but we leave them unchanged for simplicity. In the forward step of JNoVA, the arguments of the penalty function QV and QVS [see Eqs. (4) and (6)] are obtained by recalculating the basic fields of the pressure and temperature right after updating the basic fields of the control variables in each iteration of the minimization loop. In the adjoint step, the corresponding adjoint variables are calculated by dividing them into cases (supersaturated, negative moisture, otherwise) that are determined by the states of the basic fields. The constraint defined by Eq. (4) has the effect of reducing the mixing ratio and increasing the saturation mixing ratio for the supersaturated state at each grid point. Since the saturation mixing ratio is a monotonically increasing function of the temperature, this constraint has the effect of increasing the temperature. These effects are spread and developed through the minimization process by the covariance matrices B and R and by the nonlinear inner model, and they produce a new moisture balance in the minimal solution (and then the analysis data).
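As a concrete illustration of how the moisture penalty and its gradient can be evaluated on a gridded mixing-ratio field, the following Python sketch (not taken from JNoVA; the array layout, the value of λ, and the restriction to α = 1 and to the derivative with respect to QV alone are illustrative assumptions) computes J_qv from QV and QVS fields following Eqs. (4) and (6) as reconstructed above, together with the subgradient that an adjoint step would propagate back to the moisture control variable.

```python
import numpy as np

def jqv_penalty(qv, qvs, lam=100.0):
    """Illustrative evaluation of the moisture penalty J_qv (alpha = 1) and its
    subgradient with respect to qv.

    qv, qvs : arrays of the same shape holding the mixing ratio and the
              saturation mixing ratio on the model grid (kg/kg).
    lam     : penalty parameter (lambda).
    Note: the dependence of qvs on temperature is ignored here; in a full
    adjoint step that dependence would contribute a temperature gradient too.
    """
    g1 = qv - qvs          # > 0 where supersaturated
    g2 = -qv               # > 0 where the mixing ratio is negative
    jqv = lam * np.sum(np.maximum(0.0, g1) + np.maximum(0.0, g2))

    # Subgradient of J_qv with respect to qv: +lam at supersaturated points,
    # -lam at negative-moisture points, 0 elsewhere.
    djqv_dqv = lam * ((g1 > 0.0).astype(float) - (g2 > 0.0).astype(float))
    return jqv, djqv_dqv

# Tiny usage example with made-up values (kg/kg):
qv  = np.array([0.012, 0.021, -0.001, 0.008])
qvs = np.array([0.015, 0.018,  0.010, 0.009])
j, grad = jqv_penalty(qv, qvs, lam=100.0)
print(j)      # 100 * (0.003 + 0.001) = 0.4
print(grad)   # [   0.  100. -100.    0.]
```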
To investigate the impact of the penalty function method on the analysis and forecast, we conducted twin data assimilation cycle experiments. In the following, the experiment that employs the usual JNoVA system is called "Ctrl," and those utilizing the new JNoVA system equipped with the penalty term J_qv are called "tests" (the penalty term J_dfi is included in both Ctrl and the tests). In the tests, we set α = 1 and λ = 100, 200, 500, and 1000, which are labeled "L100," "L200," and so on, in order to determine the effects of the parameter λ. We use preprocessing in both Ctrl and the tests in order to remove the supersaturated or negative moisture states completely and to align the conditions when we make the initial data from the analysis data (see section 3b). The twin experiments extend from 28 June to 8 July 2018, which coincides with the period of the "heavy rain event of July 2018" that caused tremendous disasters in Japan. Clearly, it would be very useful to prevent or mitigate such disasters, which might be possible if we could catch the initiation of such severe events earlier. b. Analysis Here we discuss the impacts of the penalty function method on the assimilation procedure and the analysis data at the first cycle (0900 UTC 28 June). The behaviors of the cost values in the minimization process are shown in Fig. 1. The behavior of the total cost seems not to be greatly affected by the new, added penalty term, but the convergence becomes slightly slower as the value of λ increases. Since the global minimum of the constrained problem is never less than that of the original problem with no constraint, the fact that the final total costs of L100 and L200 are smaller than that of Ctrl implies either that each final state may be a local minimum of the nonlinear, nonconvex cost function or that the number of iterations may not be sufficient. However, the final total costs of Ctrl and the tests are not very different (the differences lie within a range of 0.2%), so we think that the effect of the imperfectness of the minimization process is quite small. [Fig. 5 caption (fragment): the first guess (Ges), and (remaining rows) the forecasts from Ctrl and the tests (L100, L200, L500, and L1000); in the RA panel, the region for which analyzed precipitation data are not available is shaded gray; the forecast from the Ctrl analysis data but without the preprocessing to remove supersaturated states is labeled "Ctrl_nochkqv"; we focus on the reproducibility of precipitation indicated by the black oval in the RA panel.] Figure 1b shows that most of the total cost comes from the observation term and that, compared with Ctrl, the tests are slightly closer to the background than to the observations in the minimization process. The values of the penalty terms for DFI and QV are considerably smaller than the other terms. Figure 1c shows the costs of several elements, i.e., the temperature, the relative humidity, and the penalty for the nonphysical mixing ratio (J_qv). In the first iteration (it = 0), the value of J_qv is always 0, since the mixing ratio qv_i is within the range [0, qvs_i] at every grid point in the first guess. The values of J_qv, which are given by the violation multiplied by λ, grow to a peak and then decrease as the minimization proceeds. The supersaturated states are more strongly suppressed in the tests that have large λ.
By definition, the costs of the temperature and the relative humidity are thought to be strongly related to the constraint in the tests, but there are no obvious differences, even though L200 shows a relatively good fit to the observations. This is because the costs shown in Fig. 1 are evaluated over the whole domain of the MSM, while the region affected by the penalty method is only a part of the MSM domain, as discussed later. Too large a value of λ may destroy the minimization process based on the gradient method, but in the range λ ≤ 1000, the minimization process seems to work well. In an additional test with λ = 10 000, the behavior of the cost is very different from those of Ctrl and the other tests, and the final cost remains large (not shown). [Fig. 7 caption: As in Fig. 6, but for the mixing ratio (kg kg⁻¹) at 850 hPa; the positive increment near the sea west of Kyushu in Ctrl at FT = 0 is indicated by the black oval.] Thus, although it is difficult to determine the limit of λ, at least we can say that 10 000 is too large for our system. Figure 2 shows the impact of the penalty function J_qv on the mixing ratio modification in the minimal solutions. The violation points, which are in supersaturated or negative moisture states and have nonzero J_qv, are substantially reduced by the penalty function method as the value of λ becomes large. While the violation ratio for Ctrl exceeds 5% in the lower troposphere, those of the tests are less than 0.2%. In the upper troposphere and lower stratosphere, more than 1% of the violation points remain in the tests, in agreement with the discussion about the construction of the constraint function. Figure 2b shows the horizontal summations of the violations, given by Σ_i max{0, g_1i(x_0), g_2i(x_0)}, as a measure of the net impact. The violation is quite effectively suppressed in all the tests, even in L100. In addition, we can see that both the violation rate and the horizontal summation of the violations for negative moisture states are much smaller than those for supersaturated states. Consequently, we hereafter ignore the effect caused by negative moisture. The relative humidity distributions in the minimal solutions of Ctrl and the tests presented in Fig. 3 show that the spurious supersaturated regions in Ctrl, sometimes exceeding the relative humidity of 120%, are removed effectively in the tests, especially in those with larger values of λ. Of course, the supersaturated regions are not completely eliminated over the whole MSM area even in L1000, as indicated in Fig. 2, and there are slight differences in the size or location of the supersaturated region among the tests. However, the distributions of relative humidity in the tests are very similar to each other on the whole; in particular, it is hard to find a difference between L500 and L1000. This may mean that the values of the parameter λ = 500 and 1000 are large enough for this atmospheric field. Note that modifications of the atmospheric fields due to the use of the penalty function method do not occur only in the supersaturated regions of the tests shown in Fig. 3, since those regions occur just in the final state of the minimization process. The constraints affect the atmospheric fields, while changing the sizes and locations of the supersaturated regions during the minimization process.
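The violation diagnostics summarized in Fig. 2 can be reproduced schematically along the following lines; the array shapes and values here are invented for illustration, and this is not the code used to produce the figure.

import numpy as np

def violation_diagnostics(qv, qvs):
    # qv, qvs: 3-D arrays (level, y, x) of mixing ratio and saturation mixing ratio.
    # Returns, per model level, the fraction of violating grid points and the
    # horizontal sum of violations sum_i max{0, g1_i, g2_i} (cf. Fig. 2b).
    g1 = qv - qvs
    g2 = -qv
    worst = np.maximum(0.0, np.maximum(g1, g2))
    nlev = qv.shape[0]
    ratio = (worst > 0.0).reshape(nlev, -1).mean(axis=1)
    total = worst.reshape(nlev, -1).sum(axis=1)
    return ratio, total

rng = np.random.default_rng(0)
qvs = rng.uniform(0.005, 0.02, size=(3, 8, 8))
qv = qvs * rng.uniform(0.8, 1.1, size=(3, 8, 8))   # some points supersaturated
print(violation_diagnostics(qv, qvs))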
The supersaturated regions of the tests are actually generated and spread near the centers of the supersaturated regions of Ctrl in the early stages of the minimization process and then dissipate gradually. The peaks of the penalty term J_qv in Fig. 1c are affected by this behavior. Figure 4 shows the effects of the constraint on the atmospheric fields in the minimal solution. Compared with Ctrl, the effects of reducing the mixing ratio and increasing the temperature in L200 tend to become clearer near the centers of the supersaturated regions of Ctrl, rather than in the analogous regions of L200. These effects coincide with those expected from the construction of the constraint function and from the previous discussion. We also note that there are small but clear regions with increased mixing ratio, as shown in the right panel of Fig. 4a, without exceeding the saturation level because of the increased temperature. These features of the atmospheric fields in the minimal solution are inherited by the analysis data. Analyzing a sharper distribution with increased mixing ratio at appropriate locations in the lower troposphere may have a significant impact on the initiation and development of deep convection and on the generation of precipitation. c. Forecast Here we consider the effect of the penalty function method on forecasts at the first cycle. Figure 5 shows the 3-h accumulated precipitation in the twin experiments from the initial time of 0900 UTC 28 June 2018. In this event, we see clear, strong rainfall along the baiu front in the sea northwest of Kyushu in the radar/rain gauge analyzed precipitation data (RA, treated as the observations). This strong rainfall is not seen in the first guess, and it is not adequately reproduced in the forecast from Ctrl. We also see that the weak reproducibility in Ctrl is caused to some extent by the above-mentioned preprocessing. In the forecasts from the tests, the reproducibility (including the location, distribution, and amount) of precipitation is improved. The distributions of precipitation in the tests (L200, L500, and L1000) are quite similar to each other, and the maximum amounts of precipitation in L500 and L1000 are almost comparable to that of the RA. Although it seems that in this case L500 and L1000 reproduce the precipitation better than L100 or L200, the differences from Ctrl are all quite large. Consequently, we hereafter focus conservatively on the L200 test (we consider the effect of the value of λ later). In the previous discussion, we checked the differences between Ctrl and the tests in the analysis. However, those differences (the warmer temperatures and lower mixing ratios around the supersaturated region shown in Fig. 4) cannot directly explain the differences in precipitation shown in Fig. 5. Thus, we have to investigate carefully the time evolution of the atmospheric fields in Ctrl and the tests. Figure 6 shows the time evolution of the temperature increment at 700 hPa. There is a large, obvious, negative increment at the initial time in Ctrl (indicated by the black arrow), while the corresponding increment in L200 is more moderate. The strong, cool increment of Ctrl vanishes rapidly in the time evolution from FT = 0 to FT = 1 (where FT indicates the forecast range in hours), and the time evolution behaviors of Ctrl and L200 are relatively similar to each other in the range from FT = 1 to FT = 3.
Thus it is possible that the heating effect produced by the constraint may help to mitigate the initial imbalance. The time evolution of the mixing ratio increment at 850 hPa is shown in Fig. 7. Note that the supersaturated states are removed by the preprocessing, so the differences between Ctrl and L200 at the initial time of the forecast are smaller than those in the analysis data. In Ctrl, the initial positive increment near the sea west of Kyushu (indicated by the black oval) is rapidly reduced by FT = 1, and it moves along with the southwesterly flow. On the other hand, in L200 the corresponding initial positive increment moves along with the flow without changing rapidly, and advection of air with a high mixing ratio from the southwest over the sea west of Kyushu results in the strong precipitation at FT = 2 and 3 shown in Fig. 5. This high mixing ratio region is supported by advection from the lower layers, as discussed later. In addition, compared with Ctrl, the high mixing ratio region of L200 in the sea west of Kyushu seems to trigger the convection. [Fig. 11 caption: (a) Evolution of the cycle-mean increments of Ctrl (lines) and of L200 (dashed lines) in the assimilation window; the increments in (left) the mixing ratio (kg kg⁻¹) and (right) the temperature (K) at every 1-h forecast period are shown. (b) The 1-h trends in the forecast of (left) the mixing ratio (kg kg⁻¹ h⁻¹) and (right) the temperature (K h⁻¹) for Ctrl (lines) and for L200 (dashed lines) in the assimilation window.] These features in the mixing ratio also imply that L200 reduces the initial imbalance and gathers the moisture to organize precipitation at the proper location. The evolution of the 1-h accumulated precipitation is shown in Fig. 8. In Ctrl, although a region of weak precipitation is widely distributed at FT = 1, heavy precipitation like that seen in RA does not appear until FT = 3. In L200, there is also no heavy precipitation at FT = 1, and the precipitation area of L200 is smaller than that of Ctrl. However, the precipitation begins to strengthen at FT = 2, and heavy precipitation occurs at FT = 3, at a location very close to that of the RA. The organization of the precipitation can be seen in Fig. 9, which shows a vertical cross section along the line segment AB in Fig. 8. Although there is no area with clear upward vertical velocities in Ctrl, L200 displays an area of apparent upward vertical velocity that is supported by the advection of air with a high equivalent potential temperature from the lower troposphere, which forms the convection related to the precipitation. d. Cycle experiments Here, we examine the effect of our new method using cycle experiments. We conducted cycle experiments for Ctrl and for the tests (L100, L200, L500, L1000) from 1200 UTC 28 June to 1200 UTC 8 July. The analyses were performed every 3 h (we obtained 81 analyses), and then we performed 12-h forecasts from these analyses every 12 h (21 forecasts). Figure 10a shows that the cycle-mean values of the violation in the tests are effectively reduced compared to Ctrl, as in the case of the first-cycle analysis shown in Fig. 2b. Figure 10b shows that the cycle-mean value of the mixing ratio in the minimal solutions is reduced mainly in the middle troposphere by the new method, while the cycle-mean temperature is increased mainly in the lower troposphere. These results are consistent with the theory described in section 2 and with the results of the first cycle. In addition, the profiles of the tests shown in Fig. 10b are similar to each other, and the differences among the tests are not very large compared to the daily variations of the mixing ratio or temperature.
This implies that the effect of the constraint is introduced adequately even in L100 through cycle assimilation and that the differences in the remaining violation are sufficiently small. The evolution of the cycle-mean increments and of the 1-h forecast trends of the mixing ratio and the temperature for Ctrl and L200 is shown in Fig. 11. The positive mean increments for the mixing ratio at the beginning of the assimilation window (FT = 0) are reduced in L200, and the profile of the increment for L200 at FT = 0 approaches the profiles at the other times, while the negative trend of Ctrl at FT = 0 is mitigated in L200, except near the surface. For the temperature, there are relatively large positive increments in the lower troposphere (below 850 hPa) at FT = 0 compared to Ctrl, but the trend of L200 at FT = 0 is smaller than that of Ctrl, except near the surface and in the upper troposphere. These behaviors imply that the initial shock is reduced in L200 mainly in the lower and middle troposphere, and that the new method provides analysis data that are consistent with the forecast model. In addition, there is a possibility of improving the first guess through cycle assimilation. We also performed verifications of the 12-h precipitation forecasts from Ctrl and the tests. The forecasts are initiated from the end of the assimilation window every 12 h (0000 and 1200 UTC) during the cycle period. The fractions skill scores (FSSs) with a 10-km verification grid and thresholds of 1.0 and 10.0 mm h⁻¹ are shown in Fig. 12. We can see clear improvements in the FSSs of the tests at both thresholds in the early stages of the forecast. For other cases with different verification grid sizes and thresholds, we confirmed that in general the tests are superior to Ctrl. Although L1000 seems to be the best among the tests in Fig. 12, the scores vary with the threshold and the atmospheric state, and the dependence of the FSS on the value of λ is not straightforward. Therefore, we cannot determine which value of λ gives the best improvement in the FSS, but we can say that the improvement is robust for the values of λ we considered. One possibility is that the value λ = 100 may be sufficient in the case of cycle assimilation and that the differences among the tests are small compared to the differences between Ctrl and the tests. This is consistent with the discussion of the results shown in Fig. 10. e. Validation The penalty function method appears to bring a new moisture balance into the analysis, providing a positive impact on the reproducibility of the precipitation. However, the change in the temperature field is not small, so it is necessary to check whether this change is consistent with the observations. Figure 13a shows the locations of the temperature observations obtained from 0900 to 1200 UTC 28 June, which are used in the data assimilation at the first cycle in Ctrl and the tests. Note that these data are mostly airborne measurements and radiosonde observations, and the number of observations in the supersaturated region of Ctrl shown in Fig. 3 is quite small. Figure 13b shows histograms of the residual temperature errors (observation − analysis data: O − A) over the whole region shown in Fig. 13a and for the limited area around the supersaturated region outlined by the gray box in Fig. 13a.
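Stepping back to the precipitation verification discussed above, the fractions skill score can be computed as in the sketch below, which follows the standard neighborhood-based FSS definition; the toy fields, the 3-gridpoint window, and the 1.0 mm h⁻¹ threshold are placeholders rather than the actual verification configuration used here.

import numpy as np
from scipy.ndimage import uniform_filter

def fractions_skill_score(fcst, obs, threshold, window):
    # Compare neighborhood exceedance fractions of forecast and observation.
    pf = uniform_filter((fcst >= threshold).astype(float), size=window, mode="constant")
    po = uniform_filter((obs >= threshold).astype(float), size=window, mode="constant")
    mse = np.mean((pf - po) ** 2)
    ref = np.mean(pf ** 2) + np.mean(po ** 2)
    return 1.0 - mse / ref if ref > 0.0 else np.nan

rng = np.random.default_rng(1)
obs = rng.gamma(shape=0.3, scale=4.0, size=(100, 100))      # toy 1-h precipitation field
fcst = np.clip(obs + rng.normal(0.0, 1.0, size=(100, 100)), 0.0, None)
print(fractions_skill_score(fcst, obs, threshold=1.0, window=3))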
The temperature from the Ctrl analysis data tends to be cooler than the observations, and the deviation (bias) is large around the supersaturated region. The deviations in both regions are slightly reduced in L200, but there is no significant difference in the modifications between the two regions. Because of the evolution of the atmospheric field, the effects of the penalty function J_qv spread, as shown in Figs. 6 and 7, and the modification does not appear to be limited solely to the supersaturated region at the initial time. Figure 14 shows the residual temperature errors compared to the radiosonde temperatures observed at 1200 UTC 28 June. Examination of the residual errors at the Fukuoka radiosonde site shows that the low temperature bias in the 400-600-hPa range that occurs in the first guess and in the Ctrl analysis data is reduced in the tests. The vertical profiles of the tests are similar to each other, but the differences among them are large around 700 hPa. Because this comparison is done at the end of the assimilation window, it is thought that those differences may reflect a slight misalignment of the high temperature region due to the time development. In the comparison at the Matsue radiosonde site, the differences from Ctrl occur mainly between 500 and 300 hPa, and the profiles of the tests are much more similar to each other. At this location, the lower-level temperature differences due to the supersaturated region at the initial time flow away, and air that originates from the nonsupersaturated region comes in. In the comparison at Kagoshima, there are no large differences among the tests or even with Ctrl, since the location of the radiosonde station is far from the supersaturated region. These results indicate that the effect of the penalty function method is mainly limited to the vicinity of the supersaturated region and the region downstream of it, as expected theoretically, and that this method reduces the residual errors statistically. However, there may sometimes be disagreements with the radiosonde data because of local variations in the atmospheric fields. Figure 15 shows the profiles of the means and the root-mean-square errors of the innovation (observation − first guess: O − B) for the temperature and the relative humidity during the cycle assimilation period. Except near the surface, the tests in general show a better fit to the observations for both the temperature and the relative humidity. We also see that the differences among the tests are small, especially for the temperature. These results imply that the changes in the temperature and relative humidity fields in the tests are valid and that the effect is robust for the values of λ we considered. Summary We have introduced an exterior penalty function method into the minimization process employed in variational assimilation in order to suppress nonphysical supersaturated or negative moisture states and to reduce the moisture imbalance in the analysis data that may cause a degradation of the precipitation reproducibility. The exterior penalty function method is a numerical algorithm used for solving constrained optimization problems, and the simplicity of implementing it within the assimilation procedure is a significant advantage. However, to save computational cost we have to choose a proper value of the parameter λ (taking into account the sizes of the gradients of the other terms in the cost function) when we use this method as an exact penalty function method.
The results of the twin experiments for the heavy rain event of July 2018 show that the penalty function method effectively suppresses nonphysical supersaturated states in the analysis data (with changes supported mainly by increasing the temperature and reducing the mixing ratio around the supersaturated region compared with Ctrl), as expected theoretically. The forecast initiated from the analysis data produced by this method shows that the initial shock in the temperature and mixing ratio is mitigated and that convection is produced at the proper location, and it also gives an improvement in the precipitation FSS. These improvements indicate that the penalty function method provides preferable analysis data, with a new moisture balance mainly affecting the mixing ratio and temperature near the supersaturated area. Comparison with the observations shows that this method reduces the departures (O − B and O − A) statistically and gives better agreement with the observed data, except near the surface. The choice of the value of the parameter λ is directly related to the violation of the constraint. In the first cycle, the value λ = 100 seemed inadequate, because the supersaturated region remained to some extent and the improvement in precipitation reproducibility was small. However, during the cycle period, the differences among the different λ values are relatively small. We speculate that the effect of the constraint is introduced adequately even in the case λ = 100 through cycle assimilation, and that the differences in the remaining violations or atmospheric states are small enough compared to the effects caused by other sources of imperfection in the assimilation procedure. We also emphasize that the reproducibility of the precipitation depends strongly on the schemes and parameterizations in the physical processes of the forecast model. These are adjusted to the analysis data of Ctrl, and some kind of adjustment may also be needed to obtain the benefit of the penalty function method more effectively. We have investigated the effects of the penalty function method using cycle assimilation for a specific event and have demonstrated its contribution to reducing the moisture imbalance. In the future, we plan to investigate the effects on the analyses of and forecasts for other events under various atmospheric conditions, taking into consideration the relevance to the error distributions of the moisture-related variables statistically. We also plan to examine the effects of the new method on the new JMA operational mesoscale assimilation system called "asuca-Var," which is based on a more advanced nonhydrostatic forecast model than the JMA-NHM and includes various changes and improvements in the assimilation procedure.
Engineering Botulinum Neurotoxin C1 as a Molecular Vehicle for Intra-Neuronal Drug Delivery Botulinum neurotoxin (BoNT) binds to and internalizes its light chain into presynaptic compartments with exquisite specificity. While the native toxin is extremely lethal, bioengineering of BoNT has the potential to eliminate toxicity without disrupting neuron-specific targeting, thereby creating a molecular vehicle capable of delivering therapeutic cargo into the neuronal cytosol. Building upon previous work, we have developed an atoxic derivative (ad) of BoNT/C1 through rationally designed amino acid substitutions in the metalloprotease domain of wild type (wt) BoNT/C1. To test if BoNT/C1 ad retains neuron-specific targeting without concomitant toxic host responses, we evaluated the localization, activity, and toxicity of BoNT/C1 ad in vitro and in vivo. In neuronal cultures, BoNT/C1 ad light chain is rapidly internalized into presynaptic compartments, but does not cleave SNARE proteins nor impair spontaneous neurotransmitter release. In mice, systemic administration resulted in the specific co-localization of BoNT/C1 ad with diaphragmatic motor nerve terminals. The mouse LD50 of BoNT/C1 ad is 5 mg/kg, with transient neurological symptoms emerging at sub-lethal doses. Given the low toxicity and highly specific neuron-targeting properties of BoNT/C1 ad, these data suggest that BoNT/C1 ad can be useful as a molecular vehicle for drug delivery to the neuronal cytoplasm. and into the neuronal cytosol [2][3][4] . Intoxication with BoNT results in inhibition of pre-synaptic neurotransmitter release at the neuromuscular junction (NMJ). The toxic effect of BoNT is achieved through persistent cleavage of components of the Soluble NSF Attachment Protein REceptor (SNARE) complex required for exocytosis of neurotransmitters 3 . Although there are differences among the serotypes in host range, receptor binding and the precise proteolytic target, productive intoxication by any serotype results in neuromuscular paralysis due to exocytotic blockade. We have previously described genetic constructs and expression systems that enable facile design and production of atoxic recombinant derivatives of BoNT serotype A1 (BoNT/A1 ad) that retain the structural and trafficking properties of wt BoNT/A1 5,6 . One such derivative, BoNT/A1 ad was developed and described as a "Trojan horse" prototype molecular vehicle for delivering drugs to the neuronal cytoplasm. BoNT/A1 ad was rendered atoxic by introducing two amino acid substitutions in the active site of wt BoNT/A1 LC [5][6][7] . While BoNT/A1 ad was capable of delivering its LC to the pre-synaptic compartment of neurons at concentration of up to 1 nM, BoNT/A1 ad was only about 100,000 times less toxic than wt BoNT/A1, and therefore, BoNT/A1 ad suffered from a narrow therapeutic dosage window 7 . Here, we describe the design, expression, purification, and functional evaluation of a second-generation neuron-specific delivery vehicle composed of an atoxic BoNT derivative with reduced toxicity that circumvents the limitations of the BoNT/A1 ad. This new delivery vehicle is a derivative of BoNT serotype C1 (BoNT/C1 ad) that was rendered atoxic by three substitutions in the LC. The toxicity and neuron-targeting properties of BoNT/ C1 ad were evaluated in vitro and in vivo. BoNT/C1 ad LC was internalized into the neuronal cytosol, where it stably persisted for at least 8 days. 
BoNT/C1 ad trafficked to the presynaptic compartment where it co-localized with pre-synaptic proteins, with minor co-localization with endosomal and lysosomal markers. Treatment of neuronal cultures with BoNT/C1 ad did not result in detectable cleavage of SNARE proteins or cytotoxicity, even at concentrations that showed toxicity symptoms in vivo. In an in vivo mouse model, BoNT/C1 ad had significantly lower toxicity than BoNT/A1 ad, and rapidly accumulated at the NMJ in the diaphragm. The extremely low in vivo toxicity of BoNT/C1 ad and its neuron-targeting properties suggest that it will be useful as a molecular vehicle for drug delivery to the neuronal cytoplasm. Results Potency of wt BoNT/C1 batches used in this study. We found differences in potencies as high as 31-fold among the wt BoNT/C1 preparations used in our study. Therefore, in presenting our data, we elected to provide values for wt BoNT/C1 as both molar concentration (when we needed to accent the difference in concentrations between wt BoNT/C1 and BoNT/C1 ad) and LD 50 (expressed as mouse LD 50 units or mouse LD 50 units/mL). We also provide both values for BoNT/C1 ad. BoNT/C1 ad design, expression, and purification. The LC of BoNT/C1 ad differs from wt BoNT/ C1 as a result of three amino acid substitutions (E 238 > A; H 241 > G; Y 383 > A). These substitutions were designed to inactivate the light chain metalloprotease with minimal disruption to light chain/heavy chain interactions within the protein heterodimer. The three amino acid residues selected for mutation in BoNT/C1 ad are 100% conserved among seven different BoNT LC serotypes; these amino acids were selected based on similar mutations described in our previous work with wt BoNT/A1 5,7,8 . To increase the yield of expressed protein, the DNA sequence encoding the full-length BoNT/C1 ad was synthesized de novo and optimized for expression in Sf9 cells 5 . Major steps of protein expression and processing are shown in Fig. 1. BoNT/C1 ad was engineered with three peptide tags to facilitate purification and detection. The pro-peptide has a polyhistidine tag at the N-terminus, followed by a Tobacco Etch Virus (TEV) recognition sequence. A second TEV recognition sequence was incorporated between the C-terminus of the LC and the N-terminus of the HC. Finally, a hemagglutinin (HA) tag was placed at the C-terminus of the HC, followed by a third TEV recognition site and three Strep tag II repeats. The polyhistidine and Strep tag II repeats allow for two-step tandem affinity chromatography purification, and the HA tag enables specific immunodetection of the recombinant BoNT/C1 ad but not wt BoNT/C1. Following baculovirus-mediated expression in Sf9 cells, the BoNT/C1 ad pro-peptide was sequentially purified by tandem affinity chromatography on Ni 2+ -NTA and StrepTactin Sepharose fast flow resins. For generation of the BoNT/C1 ad heterodimer and removal of purification tags, the purified pro-peptide was cleaved by treatment with TEV protease. Pilot-scale production of BoNT/C1 ad yielded approximately 50 mg of sterile pyrogen-free heterodimer per L of culture. Proteomic characterization of BoNT/C1 ad heterodimer. Sequence fidelity of purified BoNT/C1 ad was confirmed by two methods: 1) Mass-spectrometry analysis (Fig. 2); and 2) Western blot using monoclonal antibodies against the HC, LC, and HA tag (Supplemental Figure S1). 
Mass spectrometry of the heterodimer by liquid chromatography and high-resolution tandem mass spectrometry resulted in 96% and 98% sequence coverage for the light and heavy chains, respectively. Importantly, light chain peptides containing the mutation sites were identified, and the point mutations were confirmed at E238>A, H241>G, and Y383>A (Fig. 2A). The recombinant linkers and spacers, the TEV protease recognition sequence, and the C-terminal HA tag were also identified. The missing sequence regions were less than 5 residues in length and were likely excluded from MS data acquisition as well as database searching because of their relative lack of specificity. BoNT/C1 ad murine median lethal dose. The standard method of defining the potency or toxicity of BoNTs is by determination of the median lethal dose (LD50) following intraperitoneal (ip) injection 9. To determine the LD50 of BoNT/C1 ad in mice, animals were administered doses of BoNT/C1 ad from 0.04 to 6.00 mg/kg (Table 1). The LD50 of BoNT/C1 ad was determined to be 5 mg/kg by two independent laboratories, while mice treated with 2 mg/kg or greater showed clinical symptoms consistent with neuromuscular impairment, including wasp-like waist, generalized body weakness, and altered respiratory pattern (Table 1). Mice that developed adverse effects and survived were clinically asymptomatic by 120 hours after injection. In comparison, the intraperitoneal LD50 of a "standard" potency batch of wt BoNT/C1 is reported to be approximately 1 ng/kg 10. These data suggest that the recombinant atoxic derivative is approximately 5 × 10^6-fold less toxic than its wild type precursor. BoNT/C1 ad is catalytically inactive. The cellular target of wt BoNT/C1 is the presynaptic compartment, where it associates with neuronal SNARE proteins and cleaves Synaptosomal-Associated Protein 25 (SNAP-25) and syntaxin-1 (Syx1). To confirm that the active site amino acid substitutions eliminated light chain enzymatic activity, mass spectrometry studies were performed using a metalloprotease activity-based assay developed at the CDC 11. This assay tests for the cleavage of a short wt BoNT/C1 substrate (SubC) using a highly sensitive mass spectrometry platform that can detect cleavage of substrate containing as little as 1 mouse LD50 of the toxin. Incubation of SubC with wt BoNT/C1 (18.4 pg, equivalent to 1 mouse LD50; 184 pg, equivalent to 10 mouse LD50; and 100 ng, equivalent to 5,435 mouse LD50) resulted in the proteolytic conversion of intact SubC (m/z = 2405.6) to peaks at m/z 1059.7 and 1363.8 (Fig. 3A, B, and D). In contrast, incubation of SubC with 10,000 ng of BoNT/C1 ad (equivalent to 6.7 mouse LD50) did not result in cleavage, indicating that BoNT/C1 ad is catalytically inactive (Fig. 3E). These data indicate that the residual catalytic activity of BoNT/C1 ad is reduced by at least 543,478-fold in comparison to wt BoNT/C1. [Figure 1 caption, panels B and C: (B) Conversion of the BoNT/C1 ad pro-peptide to heterodimer by proteolytic cleavage with TEV protease; lanes 1 and 12, protein MW ladder; lanes 2-6, samples reduced with β-mercaptoethanol; lanes 7-11, non-reduced samples; lanes 2 and 7, no TEV protease added; lanes 3 and 8, pro-peptide incubated with TEV protease for 1 hour at 25 °C; lanes 4 and 9, 6 hours; lanes 5 and 10, 24 hours; lanes 6 and 11, 48 hours. (C) Removal of TEV protease from the reaction mixture by Ni2+-NTA affinity chromatography; lanes 1 and 13, protein MW ladder; lane 2, loading material; lanes 3-5, sequential washes with 15 mM imidazole buffer; lanes 6-9, sequential washes with 45 mM imidazole buffer; lanes 10-12, sequential elutions with 250 mM imidazole buffer.]
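Returning to the murine LD50 determination above, dose-ranging data of the kind listed in Table 1 are commonly summarized by fitting a dose-mortality curve. The sketch below fits a two-parameter logistic curve to hypothetical mortality fractions bracketing the observed LD50 of about 5 mg/kg; it is purely illustrative and is not the bioassay analysis used in this study.

import numpy as np
from scipy.optimize import curve_fit

def mortality(log_dose, log_ld50, slope):
    # Two-parameter logistic dose-mortality curve on a log-dose axis.
    return 1.0 / (1.0 + np.exp(-slope * (log_dose - log_ld50)))

doses = np.array([0.04, 0.4, 2.0, 4.0, 5.0, 6.0])            # mg/kg
dead_fraction = np.array([0.0, 0.0, 0.0, 0.2, 0.5, 0.8])     # hypothetical outcomes
popt, _ = curve_fit(mortality, np.log(doses), dead_fraction, p0=[np.log(5.0), 2.0])
print("estimated LD50 (mg/kg):", round(float(np.exp(popt[0])), 2))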
Notably, no evidence of SubC cleavage was detected when BoNT/C1 ad was used at concentrations corresponding to 6.7 mouse LD50 doses, suggesting that the in vivo symptoms are not attributable to proteolytic activity. To confirm that BoNT/C1 ad was catalytically inactive in a cellular environment, we treated 14-day E19 rat cortical neuron cultures with 5, 25, and 100 nM BoNT/C1 ad (equivalent to 0.01, 0.05, and 0.2 mouse LD50 units/mL of culture medium, respectively) or with an amount of wt BoNT/C1 equivalent to 405 mouse LD50 units per mL for 96 h, and assessed the integrity of SNAP-25 and Syx1 by immunoblot analysis. Cultures treated with wt BoNT/C1 exhibited a loss of Syx1 immunoreactivity and displayed a lower molecular weight band indicative of cleaved SNAP-25 (Fig. 4). In contrast, cultures treated with up to 100 nM BoNT/C1 ad exhibited no detectable cleavage of SNAP-25 or Syx1. BoNT/C1 ad light chain persistence in primary neurons in vitro. Experiments were performed to determine the persistence of the BoNT/C1 ad light chain in the cytoplasmic compartment of cortical neuronal cultures. Primary cortical neurons (14 days after plating) were treated with 25 nM BoNT/C1 ad for 48 h, and SNAP-25 integrity was evaluated by immunoblot 1, 4, or 8 days later. No cleavage of SNAP-25 was observed at any time point (Fig. 5A). BoNT/C1 ad LC was detected throughout the 8 days of chase (Fig. 5B). No evidence of light chain degradation was observed, and the intracellular concentration of BoNT/C1 ad LC was estimated (Table 2). Notably, no cleavage of the wt BoNT/C1 substrate, SNAP-25, was detected, even with a chase time of up to 8 days in culture (Fig. 5A). This intracellular BoNT/C1 ad LC concentration exceeds the concentration of BoNT/C1 ad added to the medium (25 nM) by 28.2 ± 3.8 fold (range, 24.4- to 32-fold), which suggests that the toxin was enriched within cells through a specific mechanism of internalization. In our hands, and consistent with other reports 12, primary hippocampal neurons maintained in Neurobasal medium with B27 serum-free supplement started to exhibit a decline in neuronal survival after about 25 days in culture, and therefore, the total duration of these experiments was not extended beyond 24 days. These data indicate that BoNT/C1 ad LC accumulates within neurons and persists for at least 8 days with no detectable catalytic activity. Calculation of the EC50 for wt BoNT/C1 expressed as mouse LD50 units/mL. The EC50 (the concentration of toxin sufficient to cleave 50% of Syx1 and SNAP-25 in neurons treated for 24 hours) for wt BoNT/C1 was previously determined to be 14.24 pM 13. Since different batches of toxin have variable potencies, toxin is described in terms of mouse LD50 units/mL. One mouse LD50 unit for this batch of toxin was equal to 31.25 pg. The blood volume of an average 25 g mouse is approximately 1,675 μL 14; thus, the concentration of the protein after injection in vivo is 31.25 pg/1,675 μL, or 18.7 fg/μL, whereas 150 fg/μL represents a 1 pM toxin solution. Thus, 18.7 fg/μL represents a 125 fM solution. Therefore, a 14.24 pM solution is equal to 114 mouse LD50 units (14,240/125) per mL.
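The unit conversion spelled out above is simple arithmetic; the short sketch below reproduces it using the figures quoted in the text (31.25 pg per mouse LD50 unit for this batch, a blood volume of about 1,675 μL for a 25 g mouse, and a heterodimer of roughly 150 kDa, for which 1 pM corresponds to about 150 fg/μL).

LD50_PG = 31.25        # pg of wt BoNT/C1 per mouse LD50 unit (this batch)
BLOOD_UL = 1675.0      # approximate blood volume of a 25 g mouse, in microliters
MW_KDA = 150.0         # approximate molecular weight of the heterodimer in kDa

fg_per_ul = LD50_PG * 1000.0 / BLOOD_UL     # ~18.7 fg/uL reached in vivo per LD50 unit
pm_per_ld50 = fg_per_ul / MW_KDA            # ~0.125 pM (about 125 fM), as in the text
ec50_pm = 14.24                             # EC50 of wt BoNT/C1 from ref. 13
print(round(pm_per_ld50 * 1000.0, 1), "fM per LD50 unit in vivo")
print(round(ec50_pm / pm_per_ld50), "mouse LD50 units/mL, cf. ~114 in the text")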
We cannot calculate an EC50 for BoNT/C1 ad because, at all concentrations of BoNT/C1 ad used in vitro in this study, we could not detect cleavage of either syntaxin-1 or SNAP-25. However, the EC50 data suggested that the assay utilizing cleavage of cellular substrates, followed by Western blot detection, was not very sensitive. In contrast to EC50 measurements, IC50 measurements (50% inhibition of neurotransmitter release) are 10- to 100-fold more sensitive 15,16. We used a similar approach (IC80) in the current work (next paragraph). [Table 2 (fragment), evaluation of the intraneuronal concentration of BoNT/C1 ad light chain: step 1, the molecular weight of BoNT/C1 ad LC is 5.2 × 10^4; step 8, the total amount of BoNT/C1 ad LC associated with the total extracted protein (from step 6) was 540.] Effect of BoNT/C1 ad on synaptic transmission in mESNs. Previous work has demonstrated that intoxication by wt BoNT/A-G impairs spontaneous synaptic transmission in synaptically networked cultures of mouse embryonic stem cell-derived neurons (mESNs) 15,16. To determine if BoNT/C1 ad affects synaptic function in situ, synaptically active cultures of mESNs were treated with vehicle, wt BoNT/C1 (400 fM, equivalent to 2 mouse LD50 units/mL), or BoNT/C1 ad (100 nM, equivalent to 0.2 mouse LD50 units/mL) for 20 h, and spontaneous miniature excitatory post-synaptic current (mEPSC) frequencies were quantified from whole-cell voltage-clamp recordings (Fig. 6A, B). In cultures treated with wt BoNT/C1, mEPSC frequencies were reduced approximately 80% compared to vehicle-treated controls, indicating that wt BoNT/C1 treatment blocked spontaneous release of neurotransmitter. In contrast, treatment with a 250,000-fold higher concentration of BoNT/C1 ad did not affect mEPSC frequency, suggesting that BoNT/C1 ad did not impair neurotransmitter exocytosis from the pre-synaptic compartment. Furthermore, neither wt BoNT/C1 nor BoNT/C1 ad altered passive membrane properties such as cell membrane capacitance (Fig. 6C) or resting membrane potential (Fig. 6D). These findings suggest that prolonged incubation with 100 nM BoNT/C1 ad does not acutely interfere with action potential propagation or neurotransmitter release, indicating that BoNT/C1 ad is not cytotoxic, even when administered at relatively high concentrations. Absence of BoNT/C1 ad cytotoxicity in vitro. At high doses, wt BoNT/C1 causes acute degeneration of cultured neurons, which is believed to result from the dual cleavage of SNAP-25 and Syx1 17. To evaluate the cellular effects of intoxication by the catalytically inactive BoNT/C1 ad, 11-13-day cultures of E19 rat hippocampal, cortical, or ventricular zone neurons were exposed to vehicle, BoNT/C1 ad (100 nM, equivalent to 0.2 mouse LD50 units/mL), or wt BoNT/C1 (500 pM, equivalent to 2,508 mouse LD50 units/mL) for 48 h, and the integrity of axonal and dendritic processes was evaluated by immunocytochemistry. Cultures treated with BoNT/C1 ad retained morphologically normal axodendritic arbors, as demonstrated by contiguous neurofilament staining of axons and by the presence of dendrites (Fig. 7). In contrast, neurons treated with wt BoNT/C1 exhibited fragmented and degenerated axons while still retaining normal-appearing dendritic arbors, a pattern of degeneration that is consistent with anterograde toxicity 18. Collectively, these data indicate that BoNT/C1 ad is more than 200-fold less cytotoxic than the wt toxin. BoNT/C1 ad LC co-localizes with synaptic proteins.
The cellular target of wt BoNT/C1 is the presynaptic compartment, where the LC associates with and cleaves the neuronal SNARE proteins SNAP-25 and Syx1. To confirm that BoNT/C1 ad undergoes neuronal uptake and trafficking in a fashion similar to wt BoNT/C1, we evaluated its co-localization with presynaptic proteins in cultured neurons. Primary rat cortical or hippocampal neurons were exposed to 25 nM BoNT/C1 ad for 24 hours. [Figure 8 caption: Cultures of 14-DIV rat hippocampal neurons were exposed to 25 nM BoNT/C1 ad (probed with anti-human IgG 4C10.2) for 16 hours, prepared for immunocytochemistry, and analyzed using confocal microscopy for the light chain of BoNT/C1 ad (red staining) and for the pre-synaptic markers SNAP-25 (A) and VAMP-2 (B), the lysosomal marker LAMP-1 (C), and the early endosomal marker EEA-1 (D) (green staining); yellow in the merged images shows co-localization of BoNT/C1 ad LC and the specific marker; bar = 10 μm.] Immunocytochemical analysis showed that the light chain of BoNT/C1 ad co-localized with the pre-synaptic SNARE proteins SNAP-25 and VAMP-2 (Fig. 8A, B). To confirm that BoNT/C1 ad LC was not associated with endosomal vesicles and thus not destined for rapid degradation, we co-stained cell cultures exposed to BoNT/C1 ad with mAbs against EEA-1 (early endosome antigen 1) and the lysosomal marker LAMP-1 (lysosome-associated membrane protein 1). There were some minor instances of co-localization with EEA-1, consistent with the role of the early endosome as a transient reservoir associated with BoNT trafficking, but the overall distribution patterns were distinct, indicating that the majority of BoNT/C1 ad did not accumulate in early or late endosomes (Fig. 8C, D). Detection of BoNT/C1 ad at the NMJ in vivo. The physiologic target of circulating BoNTs is the pre-synaptic compartment of peripheral motor neurons, autonomic synapses, and preganglionic neurons. To determine if in vivo trafficking of BoNT/C1 ad following systemic exposure was similar to that of wt BoNT/C1, 6-week-old CD-1 female mice were injected ip with 0.4 mg/kg of BoNT/C1 ad, and the diaphragm was isolated for co-localization studies 24 h after injection. Processed diaphragms were stained with antibodies against the BoNT/C1 heavy chain (8DC1.2) and Syx1 (as a motor nerve terminal marker). The post-synaptic motor endplate was stained with alpha-bungarotoxin. Microscopic examination of the stained tissues indicated that BoNT/C1 ad staining closely matched the localization of both Syx1 and alpha-bungarotoxin staining, confirming that BoNT/C1 ad traffics to the NMJ in vivo after systemic administration (Fig. 9). BoNT/C1 ad is not associated with an autophagosome marker. To explore a possible relationship between the toxicity exhibited by BoNT/C1 ad in vivo and the recycling of internalized protein through the autophagosome as a compensatory mechanism for a potential transient overload of the endosomal/lysosomal pathway, we also examined the pattern of co-localization and intensity of autophagy-related protein 7 (APG-7), an autophagy marker, in neuronal cultures treated with 25 nM BoNT/C1 ad for various times of incubation and chase with fresh medium. Neuronal cultures were treated with 25 nM BoNT/C1 ad for 24 hours and chased with fresh 50% conditioned medium for 48 and 72 hours (Fig. 10). Neurons not treated with BoNT/C1 ad, or treated with BoNT/C1 ad for various time periods and either not chased or chased with fresh medium for various time periods, showed the same intensity and distribution pattern of APG-7.
No significant co-localization of the BoNT/C1 ad heavy chain with APG-7 was detected. Discussion The studies presented here are part of our ongoing effort to engineer recombinant derivatives of BoNTs capable of delivering a therapeutic agent to the neuronal cytoplasm by exploiting the trafficking pathways used by native wt BoNT. Here, we designed, expressed, purified, and evaluated a derivative of BoNT serotype C1 (BoNT/C1 ad) rendered atoxic through the amino acid substitutions E238>A, H241>G, and Y383>A. We confirmed that BoNT/C1 ad targets pre-synaptic neuronal terminals in vitro and in vivo. We found that BoNT/C1 ad maintained the structural integrity of wt BoNT/C1 and trafficked to the pre-synaptic compartment at the NMJ of the diaphragm after systemic administration. BoNT/C1 ad light chain co-localized with presynaptic SNARE proteins and failed to co-localize with endosomal, lysosomal, or autophagosomal markers, suggesting that it traffics to the presynaptic cytosol without being sequestered in a vesicular compartment or destined for rapid degradation. Using an in vivo mouse bioassay, we determined the LD50 of BoNT/C1 ad to be 5 mg/kg. No metalloprotease activity was detected for BoNT/C1 ad in multiple in vitro assays over a wide range of concentrations, even though in some of these assays the concentration of the protein exceeded the concentration that was associated with symptoms in vivo. It was also found that BoNT/C1 ad was approximately 5 × 10^6-fold less toxic than wt BoNT/C1 10. Extrapolations from the in vivo mouse bioassay data suggest that the highest dose that can be safely injected into an average (70 kg) human without causing side effects is approximately 56 milligrams. Collectively, these findings indicate that BoNT/C1 ad may be useful as a molecular vehicle for drug delivery to the cytosolic pre-synaptic compartment of neurons. The decision to switch from BoNT/A1 to BoNT/C1 as the precursor for development of a neuronal delivery vehicle was influenced by several factors. BoNT/A1 ad would be ineffective as a delivery vehicle for a therapeutic against wt BoNT/A1, which is the serotype responsible for nearly 50% of clinical cases of botulism in the United States from 2004-2014 19. This is because any anti-BoNT/A1 therapeutic moiety (small molecule inhibitors, peptides, or antibodies) intended to bind and inactivate the wt BoNT/A1 LC in the neuronal cytoplasm would also recognize the delivery vehicle itself, and likely would render the therapeutic moiety inactive. [Figure 10 caption: Immunostaining of neurons treated with BoNT/C1 ad and chased with medium. Cultures of rat hippocampal neurons, 14 days after plating, were exposed to 25 nM BoNT/C1 ad for 48 hours, chased with fresh medium for 48 or 72 hours, prepared for immunocytochemistry as described in Materials and Methods, and analyzed using confocal microscopy; cells were stained for the autophagy marker APG7 (red) and for the heavy chain of BoNT/C1 ad (green). Panel A: cells not treated with BoNT/C1 ad; Panel B: 48-hour chase after exposure to BoNT/C1 ad; Panel C: 72-hour chase after exposure to BoNT/C1 ad. Bar = 10 μm.] wt BoNT/C1 rarely causes human botulism 20, and therefore, there are no similar restrictions on use of the wt BoNT/C1-based vehicle for delivery of anti-BoNT therapies against major human BoNT pathogens, such as serotypes A, B, and E.
In contrast to BoNT/A1 ad, a BoNT/C1 ad-based delivery vehicle could deliver a therapeutic moiety against these serotypes into the neuronal cytoplasm without the danger of self-recognition and inactivation. Cellular entry of wt BoNT/C1 employs a mechanism that is different from the six other BoNT serotypes. As shown in several reports, the presence of Sia-5 and Sia-7 gangliosides on the neuronal plasma membrane appears to be sufficient for BoNT/C1 internalization 21 . Although attempts to identify a protein-based receptor for BoNT/ C1 have failed [22][23][24] , evidence of increased toxicity following potassium stimulation of neuronal cultures suggests that entry may be enhanced by cell surface components that are involved in activity-dependent synaptic endocytosis 21,23,25 . wt BoNT/C1 is able to enter neurons that were previously intoxicated with wt BoNT/A1, which is consistent with the hypothesis that these two serotypes use different mechanisms of internalization 26 , and suggests that BoNT/C1 ad can be used as a molecular vehicle to deliver wt BoNT/A1-inactivating therapeutic entities to the neuronal cytoplasm of wt BoNT/A1-intoxicated neurons, an important feature, because the therapeutic cargo will reside in the same subcellular compartment as the active toxin. The idea that BoNT can be used as a vehicle for delivery of therapeutic cargo to neurons has a long history. From the structural prospective, the best option for the placement of therapeutic cargo (small molecule, peptide or protein) can be achieved through the linkage of the cargo to the N-terminus of metalloprotease-inactivated LC of the toxin heterodimer (LC-fused cargo) 5 . In our design of BoNT/C1 ad, nine additional amino acids are placed at the N-terminus of the atoxic light chain. The primary purpose of these extra amino acids is spatial separation of the affinity tag (His tag) from the rest of the molecule. In our earlier experiments, we found that when this tag was directly linked to the first proline residue of the BoNT light chain, accessibility of the affinity tag was compromised (unpublished observation). These nine amino acids can be viewed as mini-cargo that we were able to deliver to the intraneuronal cytoplasm along with the rest of the light chain. The HC of BoNT does not translocate into the cytoplasm, and thus, placement of cargo in this location is unsatisfactory. Moreover, placement of large cargo at the C-terminus of the HC could interfere with the recognition of the cognate receptor, and could abrogate the ability of the molecule to deliver the cargo to the neuronal cytoplasm. Placement of large cargo at the C-terminus of the LC or N-terminus of the HC could interfere with the tight organization of the LC-HC macromolecule, and could disrupt the disulfide linkage of the native heterodimer. Thus, if the designed molecule is internalized and processed similar to wt BoNT, then LC-fused cargo should escape the early endosome and be translocated into the neuronal cytoplasm. Specific neuronal delivery of the therapeutic cargo through this wt BoNT mechanism is expected to circumvent problems associated with impermeability of the cellular plasma membrane, and toxicity due to drug reaching off-target cells. BoNT/C ad LC is delivered to the pre-synaptic compartment of neurons (Fig. 8). 
This is in contrast to reports in which recombinant BoNT derivatives delivered to neurons have become entrapped in endosomes with little or no co-localization with SNARE proteins, suggesting that the BoNTs did not gain complete access to the cytosolic compartment [27][28][29][30][31] . There is, however, at least one report in which the authors convincingly show delivery of large protein molecules to the neuronal cytoplasm as a fusion with full-length recombinant BoNT/D 1 . BoNT/C1 ad has an improved therapeutic margin, compared to BoNT/A ad 7 . Mice injected with high but sub-lethal (2 mg/kg) doses of BoNT/C1 ad ( Table 1) exhibited symptoms of neuromuscular disruption (reduced mobility, decrease in respiratory patterns, and wasp-like waist) that resolved within 72 hours after challenge. One possible explanation for the observed symptoms may be due to the ability of the light chain of BoNT/C1 ad to accumulate in the presynaptic compartment and to bind to its native endogenous substrates-SNAP-25 and Syx1. Even though cleavage of the substrates is disabled, binding of BoNT/C1 ad to the substrates may interfere with their assembly into the SNARE protein fusion complex necessary for synaptic exocytosis. However, there is also considerable experimental evidence against this explanation: 1) data from unaltered spontaneous neurotransmitter release presented here suggest that SNARE sequestration is not a major factor for in vivo toxicity of BoNT/C1 ad; 2) the transient characteristic of the toxicity did not correlate with the stability of the light chain of BoNT/C1 ad in neuronal cytoplasm; 3) the comparative molar ratio of BoNT/C1 ad light chain to SNAP-25, approximately 1 molecule of the BoNT/C1 ad light chain to over 100 molecules of SNAP-25, makes it unlikely that BoNT/C1 ad light chain could be responsible for the mechanical sequestration of SNARE proteins in the absence of cleavage. BoNT/C1 ad light chain has a long lasting persistence in neuronal cytoplasm. In pulse-chase experiments, BoNT/C1 ad light chain was detected in whole cell lysates for as long as eight days after exposure, which is similar to our observation for the stability of the light chain of BoNT/A1 ad 7 . Unlike the slow decline of the intracellular concentration of BoNT/A ad light chain starting at day 7 of chase 7 , we did not detect significant changes in intracellular concentration for the light chain of BoNT/C1 ad between days 1 and 8 of chase (Fig. 5). Alternatively, the observed in vivo toxicity of BoNT/C1 ad may result from overload of intracellular protein homeostasis systems. However, the data presented here seem to rule out this possibility as well, in that the in vivo toxicity of BoNT/C1 ad was not a consequence of transient overload of the endosomal/lysosomal pathway leading to recycling of the protein via an alternative stress-response system such as autophagy (Fig. 10). Ganglioside-mediated cellular entry of BoNT/C1 may provide yet another clue for the transient toxicity of BoNT/C1 ad in vivo. Gangliosides have a natural predisposition to lateral segregation within membranes, and represent an important integral component of membrane microdomains (or lipid rafts) that are enriched with cholesterol 32 . Sphingolipids and gangliosides are important modulators of membrane receptors, ion channels, and downstream signaling pathways. Regulation occurs by different mechanisms, some of which are rather general and others that are receptor and/or ganglioside-specific 32 . 
Binding of the BoNT/C1 ad heavy chain to gangliosides may contribute to their sequestration from interaction with their endogenous partners and thereby change the physical properties of the plasma membrane. This sequestration may also lead to transcriptional activation, resulting in increased endogenous de novo synthesis of gangliosides as a compensatory mechanism 33,34. The time frame (24-72 hours) required for disappearance of BoNT/C1 ad toxicity in vivo is consistent with this hypothesis. However, the exact nature of the residual in vivo BoNT/C1 ad toxicity needs further investigation, not only for understanding the molecular mechanisms, but also from the practical perspective of developing neuronal delivery vehicles with an expanded therapeutic window. Development of tools aimed at delivering therapeutic moieties to the cytoplasm of neurons is challenging and plagued by numerous side effects, such as the cytotoxicity of lipophilic micelle-forming agents and cell-permeabilizing peptides 35,36. Existing viral delivery vectors are taken up by a majority of cell lines, reflecting the lack of a specific entry mechanism 37, and even replication-incompetent viruses pose a long-term risk related to their abilities to 1) integrate into genomic DNA, 2) contribute to the synthesis of proteins other than the intended therapeutic cargo, with the potential deleterious side effect of unintended expression, and 3) cause cytotoxicity via internalization of their integral envelope/nucleocapsid proteins. Use of viral delivery vectors also poses regulatory challenges, especially from the perspective of defining the proper dosage of a therapeutic delivered in the form of DNA that needs to be expressed. In this report we have proposed a practical alternative: the use of a relatively simple, multi-domain BoNT molecule as a molecular delivery vehicle. This can have multiple advantages, including simplicity of internalization, absence of cytotoxicity at the high concentrations used (up to 500 nM), high specificity, and functional flexibility (for example, through the introduction of features to modulate the intracellular stability and duration of the therapeutic cargo 38). Although the potential advantages of delivery vectors such as BoNT/C1 ad are clear, expression of recombinant BoNT/C1 proteins in a properly folded, physiologically active di-chain form has proven technically challenging. In this regard, an early publication 39 reporting the expression of a BoNT/C1 atoxic protein in E. coli, with a design similar to ours, provided no evidence of satisfactory yield, of successful processing of the expressed pro-peptide, or of whether the expressed protein mimicked the trafficking pattern of wt BoNT/C1. In summary, the data presented here suggest that BoNT/C1 ad provides a promising molecular vehicle for drug delivery to therapeutic targets in the pre-synaptic compartment of neurons. BoNT/C1 ad is a neuron-specific non-viral vector capable of targeting the cytoplasm of neurons in vivo. Relevant intra-neuronal targets are of interest for multiple neurological diseases, including reversing intoxication with BoNT itself. The generalizability of this approach is under active investigation. Animals. Eight-week-old CD-1 female mice (Charles River Laboratories) were housed five per cage in a barrier facility and were maintained on a 12-hour light/dark cycle (7 AM to 7 PM) with ad libitum access to food and water.
The average weight of mice used for determination of BoNT/C1 ad toxicity in vivo was 23.2 grams. The average weight of mice used for determination of wt BoNT/C1 potency in vivo was 25.8 grams.

Reagents/Supplies. Reagents included ammonium bicarbonate (Acros).

Expression and processing of BoNT/C1 ad. The gene for BoNT/C1 ad was engineered with tRNA bias typical for the Sf9 cell translational machinery and synthesized de novo. The full-length BoNT/C1 ad DNA was incorporated into recombinant baculovirus as described 5, and the protein was expressed as a secreted pro-peptide, in accordance with CDC regulations (the GenBank accession number for BoNT/C1 ad is KX496548). Protein was expressed and purified using the same baculovirus methodology described for BoNT/A1 ad 5. SF 900II medium supernatants containing secreted BoNT/C1 ad pro-peptide were collected. The BoNT/C1 ad was sequentially purified by tandem affinity chromatography on Ni²⁺-NTA and StrepTactin Sepharose fast flow resins. Protein eluted from the Ni²⁺-NTA resin with buffer containing 250 mM imidazole was loaded on the StrepTactin sorbent and, after several washes, was eluted with 5 mM D-desthiobiotin. Aliquots representing loading material, flow through, washes, and eluates from both sorbents during the purification process were loaded on reducing SDS PAGE, separated, and stained with Coomassie Brilliant Blue R250 (Fig. 1A). A homogeneous pro-peptide eluted from the StrepTactin resin (lane 13, Fig. 1A) was concentrated, dialyzed against 40% glycerol/PBS and stored at −80 °C. For generation of the BoNT/C1 ad heterodimer and removal of the affinity tags, the purified pro-peptide was treated with 6-His-TEV protease (1 mg 6-His-TEV per 5 mg of BoNT/C1 ad pro-peptide) for 48 hours at 25 °C in the presence of 3 mM reduced/0.3 mM oxidized glutathione, to provide the reducing potential necessary for TEV activity without contributing to the reduction of the disulfide bridge between the LC and HC of the BoNT/C1 ad heterodimer. 6-His-TEV protease was expressed in the laboratory using methods adapted from 40. Briefly, the expression plasmid pRK793, encoding 6-His-TEV protease, was transformed into BL21 (DE3) CodonPlus-RIL cells and grown in 1.5 L cultures producing 5 mL cell paste per L. The yield of purified protease was about 4.5 mg/L of culture. Analysis of TEV cleavage after 48 hours of incubation with the pro-peptide showed that more than 95% of the BoNT/C1 ad was cleaved into heterodimer (Fig. 1B). The removal of 6-His-TEV was performed by another round of Ni²⁺-NTA affinity chromatography; the BoNT/C1 ad heterodimer remained in the flow-through fraction and initial washes with low imidazole (Fig. 1C). After concentration of the fractions containing BoNT/C1 ad heterodimer, the protein was subjected to final chromatography on a HiLoad 26/600 Superdex 200 pg gel filtration column for removal of aggregates and low molecular weight contaminants. The protein from the major peak (Supplemental Figure S2) was concentrated, dialyzed against 40% glycerol/PBS, sterile filtered through a 0.22 micron filter, and the concentration was determined by BCA assay. The final concentration of the heterodimer was normalized to 10 mg/mL, and the protein was aliquoted and stored at −80 °C.

Proteomic characterization of BoNT/C1 ad heterodimer. PAGE separation. A 25 μg protein sample in 5 μL of a solution consisting of 40% glycerol, 100 mM NaCl, 25 mM sodium phosphate, pH 7.5 was mixed with 7 μL water and 4 μL 4x SDS PAGE sample loading buffer, and heated at 90 °C for 10 min.
The sample was split into equal aliquots and loaded in triplicate into three wells of a 4-12% Bis-Tris NuPAGE gel. Gel electrophoresis was performed at a constant voltage of 180 V for 20 min in NuPAGE MOPS SDS running buffer. The gel was rinsed three times with 100 mL water for 5 min and stained with 20 mL of SimplyBlue SafeStain staining solution for 1 h at room temperature, followed by washing four times with 100 mL water for 10 min.

In-gel proteolytic digestion. The gel band containing BoNT/C1 ad from each of the sample wells was excised in 1 mm cubic pieces and pooled in a 1.5 mL Eppendorf tube. Each sample was reduced and alkylated by sequential incubation in 150 μL of 20 mM DL-dithiothreitol at 60 °C for 1 h, followed by incubation in 150 μL of 55 mM iodoacetamide in the dark at room temperature for 45 min; reagent solutions were removed after each step. Gel pieces were de-stained twice in 300 μL of 1:1 (v:v) acetonitrile/100 mM ammonium acetate solution for 1 min on a shaker, and dehydrated in 150 μL acetonitrile for 10 min. After acetonitrile removal, the gel was consecutively digested overnight at 37 °C in 150 μL of freshly prepared protease solutions made up in 50 mM ammonium bicarbonate: trypsin, chymotrypsin and flavastacin (Asp-N), each at a protease concentration of 0.02 μg/μL. The supernatant from each digestion was collected in a separate tube and vacuum-centrifuged to dryness. Tryptic, chymotryptic, and Asp-N digests were resuspended in 80, 60, and 40 μL, respectively, of 5/95/0.1 acetonitrile/water/formic acid (v:v:v) for nanoLC-MSMS analysis.

nanoLC-HRMSMS analysis. The digests were analyzed in triplicate on a nanoLC-MSMS system composed of a Thermo nLC EASY1000 interfaced with a Thermo Fisher Scientific Fusion Orbitrap mass spectrometer. There was a range of peptide concentrations in these digests; trypsin digests were the most concentrated, and Asp-N digests were the least concentrated. Trypsin (2 μL), chymotrypsin (3 μL), and Asp-N (4 μL) digest samples were loaded on a 100 μm id × 20 mm × 5 μm 200 Å AQC18 trap column. Trapped peptides were eluted and separated on a 75 μm id × 18 cm × 5 μm 100 Å AQC18 analytical column using a 120 min nanoUPLC run. Reversed phase chromatography was performed using 0.1% formic acid in water and 0.1% formic acid in acetonitrile as mobile phases A and B, respectively. Mass spectrometric data were collected using a 3 sec top-speed data-dependent acquisition (DDA) method in which full MS scans were collected on an Orbitrap detector at a resolving power of 120,000 over an m/z 350-1500 range. Peptide precursors were isolated for collision-induced dissociation (CID), and MSMS spectra from the ion trap were collected using a rapid scan rate.

Data processing. The acquired data were processed with MaxQuant software version 1.5.3.30 and searched against a sequence database for the BoNT/C1 ad light and heavy chains to identify peptides with fully specific trypsin cleavage or semi-specific chymotrypsin and Asp-N cleavages. Using full scan spectra, peptide matches were performed employing a mass tolerance of 10 ppm, and a 0.5 Da window was used to isolate product ions in MSMS scans. Peptides were identified with a false discovery rate (FDR) of 0.01%.

Western Blot Analysis. Preparation and maintenance of E19 rat cortical and hippocampal neurons were performed as previously described 7. Neurons were exposed to BoNT/C1 ad or wt BoNT/C1 kindly provided by Dr.
Eric Johnson (University of Wisconsin at Madison; the potency of the batch was equal to 3.23 × 10⁶ mouse LD50/mg of protein), as described in Results and figure legends. Neurons were harvested and solubilized; protein was separated and transferred to nitrocellulose membranes as previously described 7. Membranes were incubated with primary antibodies overnight at 4 °C, and with secondary antibodies for 45 minutes at room temperature. Super Signal West Pico chemiluminescent substrate was used for visualization by autoradiography.

Mouse Lethality.

Analysis of BoNT/C1 ad LC stability, concentration, and residual SNAP-25 cleavage in neurons. Approximately 3 × 10⁶ primary rat fetal E19 neurons were isolated and plated according to a described procedure 7. Neurons were incubated in 50% conditioned medium for 14 days after plating. Incubation was continued for 48 hours either in 50% conditioned medium (control) or in 50% conditioned medium in the presence of 25 nM BoNT/C1 ad (equivalent to 0.05 mouse LD50 units in the volume used). Medium was then aspirated and cells were washed and chased with 50% conditioned medium without BoNT/C1 ad for 24, 96, or 192 hours (192 hours only for controls). After incubation, cells were harvested and solubilized on ice in 500 μL lysis buffer containing 0.5% Triton X-100 and protease inhibitors, and total protein concentration was measured and normalized. Approximately 55 micrograms of total protein were loaded per lane for each sample, separated on reducing SDS PAGE, and transferred to a 0.2 μm nitrocellulose membrane. Following transfer, membranes were blocked in 10% fat-free milk supplemented with 5% Normal Goat Serum in TBST (150 mM NaCl, 10 mM Tris-HCl, pH 8.0, 0.1% Tween 20) at room temperature for 1 hour. Membranes were incubated with primary antibodies overnight at 4 °C and with secondary antibodies for 30 minutes at room temperature. Primary antibodies used were anti-BoNT/C1 ad LC (4C10.2, provided by Dr. James Marks, UCSF), anti-SNAP-25, and anti-β-actin. Following incubations, blots were washed with TBST 3 times for 5 minutes. Super Signal West Pico chemiluminescent substrate was used for visualization. A Li-Cor Odyssey Fc imaging system was used to obtain Western blot images, and Li-Cor Image Studio software (version 5.2.5) was used to analyze band signal intensities.

Electrophysiology. R1 embryonic stem cell lines were obtained from ATCC (Manassas, VA, USA) and differentiated into mouse embryonic stem cell-derived neurons (mESNs) as previously described 13. Briefly, mESNs were plated at 150,000 cells/cm² in 6 cm dishes coated with polyethylenimine and maintained at 5% CO2, 37 °C, and 95% humidity in Neurobasal medium with B27 supplement. Experiments were performed 24 to 26 days after plating. For intoxication, wt BoNT/C1 (purchased from Metabiologics, Madison, WI; batch potency equivalent to 2 × 10⁷ mouse LD50/mg of protein) and BoNT/C1 ad proteins were prepared at 100× final concentration in fresh medium and diluted into mESN cultures in a tissue culture glove box (Coy Labs, Grass Lake, MI, USA). Whole-cell patch-clamp electrophysiology was performed to record miniature excitatory post-synaptic currents (mEPSCs) as previously described 15. mEPSCs were detected from whole-cell recordings using Mini-Analysis v6 with default detection settings for AMPA receptor currents (Synaptosoft Inc., Fort Lee, NJ).
Resting membrane potential was determined using the zero-current-clamp method immediately after establishment of whole-cell configuration, and were corrected for a calculated liquid junction potential of 15.6 mV. Membrane capacitance was determined from c-slow values obtained by Heka Patchmaster 2.53 software (Heka, Lambrecht/Pfalz, Germany). Data analysis and graphing were performed in Prism v6.1 (GraphPad software, La Jolla, CA). Averaged mEPSC frequency data were normalized to age-and lot-matched vehicle-treated controls and presented as percent synaptic activity. Significance of differences among means were determined using one-way ANOVA followed by Dunnett's test; P < 0.05 was considered significant. Immunocytochemistry. Fresh E18 rat hippocampal, cortical, and ventricular zone tissues were obtained from BrainBits LLC (Springfield, IL, USA), dissociated according to the manufacturer's instructions, and plated at a density of 75,000 cells/cm 2 on polyethylenimine/laminin-coated glass coverslips (Sigma-Aldrich, St. Louis, MO, USA). Neuronal cultures were maintained at 5% CO 2 , 37 °C, and 95% humidity in NbActiv4 medium. Experiments were performed 11 to 13 days after plating (DAP). For intoxication with wt BoNT/C1 (Madison, WI; batch potency equivalent to 2 × 10 7 mouse LD 50 /mg protein) or BoNT/C1 ad, proteins were prepared at 100x final concentration in fresh NbActiv4 medium, and diluted into neuronal cultures. Coverslips were then washed with ice-cold DPBS, fixed with ice-cold 4% formaldehyde for 45 minutes at room temperature, and then permeabilized and blocked for 1 h with 0.1% saponin and 3% bovine serum albumin in DPBS (PBSS). Coverslips were then treated with primary antibodies against microtubule associated protein 2 (MAP2, dilution 1:10,000) and neurofilament middle (NF-M, dilution 1:250) in PBSS for 1 h. After washing, coverslips were incubated for 1 h with Alexa-labeled secondary antibodies diluted 1:500 in PBSS. Coverslips were then mounted onto slides with Prolong Gold DAPI mounting media and imaged by confocal microscopy using a Zeiss LSM 700 (Carl Zeiss Inc, Thornwood, New York). Alternatively, preparation and maintenance of E19 rat cortical and hippocampal neurons were performed as previously described 7 . Cells were exposed to BoNT/C1 ad for different times as indicated in figure legends and/or Results. Image scanning was performed on a Nikon LSM 510 confocal microscope, and images were analyzed using Zeiss LSM confocal microscopy software. Accumulation of BoNT/C1 ad at the diaphragm. Six week-old female CD-1 mice (18 to 22 g, n = 5) were injected ip with 10 μ g of BoNT/C1 ad in 250 μ L of 0.5% mouse serum albumin in 1x DPBS. Nerve-muscle diaphragm preparations were dissected 24 hours after injection, mounted, and prepared for immunohistochemical staining as previously described 6 . Slides were examined by confocal microscopy using a NIKON LSM-510 microscope equipped with argon and He-Ne lasers, and images were analyzed using Zeiss LSM confocal microscopy software. Examination of autophagy compensatory mechanism following internalization of BoNT/C1 ad. Preparation and maintenance of E19 rat cortical and hippocampal neurons were performed as previously described 7 . For immunocytochemical studies, 1.5 × 10 5 cells were plated on cover slips inserted into 6 × 35 mm/ well plates in 3 mL medium/well. 
Neurons (14-day after plating) were exposed to BoNT/C1 ad (25 nM) for 48 hours, washed twice with fresh Neurobasal media, and maintained in 50% conditioned media for 48 and 72 hours after washing. Immediately after incubation, cells were washed three times with ice-cold DPBS, fixed with 4% formaldehyde for 15 minutes, and permeabilized with 0.1% Triton X-100 for 5 minutes. After fixation, the permeabilized cells were washed three times with DPBS, blocked for 45 minutes at room temperature with 10% BSA in DPBS, and incubated overnight at 4 °C with primary antibodies: 8DC1.2 (provided by J. Marks, final concentration 1 μ g/mL), APG7 (final concentration 0.8 μ g/mL). Primary antibodies were diluted in DPBS-NGS. Cells were washed three times with DPBS and incubated for 1 hour with the following secondary antibodies: goat anti-rabbit IgG Alexa Fluor ® 555 secondary antibody, or goat anti-human IgG Alexa Fluor ® 488 secondary antibody. Final concentration for all secondary antibodies was 0.66 μ g/mL. Cells were washed three times with DPBS, and the cover slips were mounted on slides with mounting medium. Image scanning was performed on a Nikon LSM 880 confocal microscope and images were analyzed using Zeiss LSM confocal microscopy software.
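The peptide identification described in the Data processing section rests on a simple acceptance criterion: a candidate match is kept only if the precursor mass error is within 10 ppm. The short Python sketch below is offered purely as an illustration of that criterion; the function names and the example peptides and masses are hypothetical and are not taken from the MaxQuant workflow used in this study.

```python
def ppm_error(observed_mz: float, theoretical_mz: float) -> float:
    """Mass error, in parts per million, between an observed and a theoretical m/z."""
    return (observed_mz - theoretical_mz) / theoretical_mz * 1e6


def within_tolerance(observed_mz: float, theoretical_mz: float, tol_ppm: float = 10.0) -> bool:
    """Accept a candidate peptide match only if its absolute mass error is within tol_ppm."""
    return abs(ppm_error(observed_mz, theoretical_mz)) <= tol_ppm


# Hypothetical candidate matches: (peptide, observed m/z, theoretical m/z).
candidates = [
    ("LSGQVER", 787.4152, 787.4205),
    ("DNFVK",   623.3253, 623.3188),
]

for peptide, observed, theoretical in candidates:
    err = ppm_error(observed, theoretical)
    verdict = "accept" if within_tolerance(observed, theoretical) else "reject"
    print(f"{peptide}: {err:+.1f} ppm -> {verdict}")
```

With a 10 ppm window the first hypothetical match is kept (about -6.7 ppm) and the second is rejected (about +10.4 ppm), which is the kind of decision the search software makes before FDR filtering is applied.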
2018-04-03T03:25:26.319Z
2017-02-21T00:00:00.000
{ "year": 2017, "sha1": "34189273ed764a3a3fd935242a5c844c8bc97eb9", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/srep42923.pdf", "oa_status": "GOLD", "pdf_src": "SpringerNature", "pdf_hash": "e50fdc8a2eb247c58fcde030e0644b6d9b607e6b", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Chemistry", "Medicine" ] }
6599
pes2o/s2orc
v3-fos-license
The Role of Electrochemotherapy in the Treatment of Malignant Melanoma About 68,130 new melanomas will be diagnosed in the United States during 2010 (38,870 men: 29,260 women) and of those 8,700 people will die of the disease (5,670 men; 3,030 women). The death rate has been dropping since the 1990s for those younger than 50, but has remained stable or is rising for older individuals. However, the incidence of melanoma has been increasing for at least 30 years, and this trend has become more pronounced in young white females and in older white men1. Malignant Melanoma is the seventh most common type of cancer, but it is the first cause of death from cutaneous skin cancers2. It has been estimated that the lifetime risk of developing malignant melanoma is 2% (1 in 50) for Caucasians, 0.1% (1 in 1,000) for those of black descent, and 0.5% (1 in 200) for Hispanics3 . When melanoma is detected in advanced stages, it carries a dismal prognosis, with a mean survival of about 8 months and a 5-year survival as low as 5% 4-6. The disease spreads both by the lymphogenous and the haematogenous routes and can metastasize to virtually any organ in the body. When secondary tumours emerge, these usually follow a sequential pattern to regional lymph node basins, followed by distant sites including skin, subcutaneous tissue, lung, liver, brain, bone and other viscera5-6. Local recurrence, intransit metastases and satellitosis (cutaneous metastases within 2 cm of original lesion) represent the same dissemination process4 in the dermal lymphatics. When the patients present with cutaneous metastases, they are considered to have stage IIIB disease7. Cutaneous metastases occur in 2-20% of patients, depending on tumor thickness 6, 8-9 and can occur either during the early or late phase of the disease 10. In many instances they can represent the first site of recurrence after surgical excision of the primary tumor 11 . The majority (70-80%) of recurrences are diagnosed within the first 3 years of initial diagnosis, and the median time to the presence of in-transit disease could range between 3 to 16 months4. The recurrences present as local or in-transit disease in 20-28%, regional disease in 26-60% and as distant metastases in 15-50% of patients. 
Even though local recurrence is

tails are embedded within the plasma membrane. This porosity is transient and reversible if short-duration, high-amplitude square wave electrical pulses are optimized 18. Electroporation (EP) of tumour nodules allows the permeation into the cancer cells of poorly permeable antineoplastic drugs, such as bleomycin or cisplatin, which are given either systemically or intratumorally (IT) 18,39,43-46. The temporary permeability of the cell membrane caused by the electric pulses facilitates a potent localized effect and magnifies the drug's cytotoxicity by orders of magnitude 18,39,46-50.
Types of Drugs

After several studies of different cytotoxic drugs, two have been identified as the best candidates for ECT: Bleomycin and Cisplatin 17,45. Of importance, in these studies EP or the drugs on their own do not influence the growth of tumours, yet their combination has potent tumoricidal effects 18,51-53 (see Figure 1). An advantage of ECT is that it requires much lower doses of the cytotoxic drug for optimal effect than those usually used for systemic treatments. In addition, there is little in the way of collateral damage or complications, since the cell killing is confined to the tissues affected by the electric field. Bleomycin intercalates into the cellular chromatin, causing single- and double-stranded breaks in DNA and resulting in mitotic cell death by pseudoapoptosis (Figure 1B, C and D). On the other hand, Cisplatin induces apoptosis in the cell 18,46. This cytolytic activity is potentiated more than 1,000-fold for Bleomycin and 100-fold for Cisplatin by the addition of EP 12,19,48-49.

The Vascular Lock

The electric pulses produce a transient state of hypoperfusion by local reflex vasoconstriction at the arteriolar level (lasting 1-2 minutes) and a phase of interstitial edema (that resolves with membrane resealing). However, the effect may last longer (12 hours to 5 days) in rapidly dividing tumor cells and is more prominent in tumours with a less mature endothelial lining and higher interstitial pressure. This phenomenon, mediated by the sympathetic nervous system, is termed the "vascular lock" and it has implications for the timing of drug administration 18,19. After application of the electric current, there is retention of drugs already in the tumour but there is also an impairment of entry of drugs from the circulation. Thus, when the cytotoxic drugs are administered systemically, sufficient time should be allowed to achieve optimal intratumoral drug concentration prior to application of EP. ECT produces other vascular influences, which are believed to be secondary to a reduction in local angiogenic factor production, such as endothelial cell destruction and neovascular reorganization 46,54-55. The combined vascular influences have been successfully exploited in the treatment of bleeding melanomas 56-57, which may be a difficult-to-manage problem, sometimes refractory and fatal, in patients with unresectable disease.

The Equipment

The electric pulses may be applied to the tumors either by plate electrodes on the skin surface or by needle electrodes inserted into the lesion (Figure 2). The electric field distribution is determined by the geometry of the electrodes. Regardless of the type of electrode, the electric field is highest around and between the electrodes 17,18. Plate electrodes are more suitable for use in superficial skin lesions, while needle electrodes are used for deeper-seated lesions, such as exophytic and thick lesions (maximum depth 3 cm) 52. A disadvantage of plate electrodes over the needle type is the potential skin damage that may be generated by the higher impedance/resistance of the skin, especially when treating larger affected areas 19,58. Care must be taken to avoid inserting the needle electrodes into the healthy tissue surrounding the tumors, which may also result in local subcutaneous burns 39. There are three types of electrodes in common usage (Figure 2). Type I electrodes consist of two parallel stainless-steel plate electrodes, are used for superficial lesions and do not penetrate the skin.
Type II electrodes are used for smaller lesions and consist of two rows of eight needles with a 4 mm distance between them, while Type III electrodes are recommended for larger lesions (> 1 cm), with the needles in a hexagonal configuration. The needles are inserted encircling the tumor and down to the subcutaneous tissue, slightly deeper than tumor depth 17,19,52,59. It is recommended to elevate or tent the skin at the time of delivering the electric current if the electrodes are to be inserted in superficial or shallow subcutaneous areas such as near the knees, the tibial tuberosities, the scalp, or close to any other osseous structures 52.

Anesthesia

The procedure can be performed in the outpatient or ambulatory setting, under local anesthesia in association with conscious sedation. General anesthesia may be preferred for larger tumors, for tumors located in previously irradiated or fibrotic tissues where infiltration of local anesthetic may be painful and less likely to diffuse and thus achieve adequate pain control, and for those located too close to vascular or bony structures. The dose of local anesthetic should be recorded and should not exceed the maximum allowed per body weight for Lidocaine without epinephrine (3 mg/kg) 52,59-60. In addition, a mixture of O2/air with an FiO2 of 40% is administered to the patients during conscious sedation.

The Electric Current

Permeabilization occurs in the cell when the internal transmembrane potential has surpassed the critical value of between 200-300 mV. The extent of the electroporative effect depends on the number and duration of the electric pulses 18,55. Two pulse waveforms have been evaluated: exponentially decaying pulses and square wave pulses. Of the two, square wave pulses are preferred as they permit independent control of the length and amplitude. The pulse parameters selected for treatment depend on the type of electrode 18,52.

Pulse parameters

Ideally, for Type I (plate) electrodes, pulse parameters of 8 square waves with an amplitude of 1,300 V/cm, a duration of 100 µs, and a frequency of 1 Hz are used. The current should be delivered in two perpendicular directions. For Type II (needle) electrodes the voltage amplitude may be reduced to 1,000 V/cm. The pulses are administered simultaneously between the needle pairs. For Type III, the needles are positioned hexagonally and 96 pulses are given together (12 pairs × 8 pulses) at a frequency of 5 kHz 61. Due to the already described vascular lock phenomenon, it is recommended that the electric current is applied between 8 and 28 minutes after IV administration of the drug 62, or immediately (within 2-10 minutes) after intratumoral administration 19,59,63-64.

Drug Dosage Recommendations

Initial studies comparing the routes of administration of Bleomycin suggested advantages for the intralesional over the intravenous route (77% vs. 45% complete responses) 45, but the later prospective multi-institutional ESOPE (European Standard Operating Procedures of Electrochemotherapy) study in 2006 concluded that IV and IT Bleomycin were comparable when given to tumors of volumes less than 0.5 cm³ 52. For Cisplatin, the studies have shown that IT is more effective than the IV route, with CR rates of 82% vs.
48% respectively 19,45. In the ESOPE study, local tumor control was achieved in up to 88% of tumors treated with IV Bleomycin, 73% with IT Bleomycin and 75% with IT Cisplatin 52. If a lack of uniformity of drug distribution within the tumor is anticipated, either because of large and extensive disease, harder fibrotic tumor nodules, lymphedema or limb fibrosis, the IV route would be more suitable 52. The intratumoral route is more feasible for those less-perfused nodules located in previously pretreated areas 19. When Bleomycin is given IV, the dose is 15,000 IU/m² of body surface area in a bolus lasting 30-45 seconds. When the IT route is chosen, both drugs are dosed according to tumor burden. The IT dose of Bleomycin based on tumor size is calculated as follows: for tumor nodules smaller than 0.5 cm³ a dose of 1,000 IU/cm³, for tumor nodules between 0.5 and 1 cm³ a dose of 500 IU/cm³, and for tumors larger than 1 cm³ a dose of 250 IU/cm³ is administered. The cumulative dose should not exceed 400,000 IU/m² due to the cumulative risk of lung fibrosis 46,52. If the cumulative dose surpasses 60,000 IU/m², objective respiratory function tests should be made at intervals and the drug discontinued if the respiratory diffusion capacity is abnormal 60. The IT dose of Cisplatin is given as follows: for nodules smaller than 0.5 cm³ a dose of 2 mg/cm³, for nodules between 0.5 and 1 cm³ a dose of 1 mg/cm³, and for nodules larger than 1 cm³ a dose of 0.5 mg/cm³ (Table 1; a worked dose-calculation sketch is given at the end of this chapter).

Table 1. Tumor volume and IT drug dose concentration
Tumor volume < 0.5 cm³: Bleomycin 1,000 IU/cm³; Cisplatin 2 mg/cm³
Tumor volume 0.5-1 cm³: Bleomycin 500 IU/cm³; Cisplatin 1 mg/cm³
Tumor volume > 1 cm³: Bleomycin 250 IU/cm³; Cisplatin 0.5 mg/cm³

The rationale for reducing the intralesional dose per cm³ when treating areas with larger tumor burdens is to reduce the risk of systemic toxicity from the absorbed drug without compromising local efficacy 19. Prophylactic antibiotics against skin flora should be given intravenously prior to the start of the procedure, especially when lesions are ulcerated or necrotic. The required time for the procedure is short, with a median treatment duration of 25 minutes 32. Repeated treatments are usually well tolerated by the patients 11,65-66 and can usually be performed at 1-6 weekly intervals, without evident resistance 45.

Patient monitoring

Prior to each treatment session patients should have an electrocardiogram (ECG) and blood work for evaluation of renal function, coagulation and electrolyte values. All lesions should be photographed and the tumor burden documented. Cellular debris may be released from the tumour during electroporation and may interfere with the clearance of cytotoxic drugs by the kidney. Thus, when using IV Bleomycin, the serum creatinine should be maintained at less than 150 µmol/L to ensure proper renal clearance. Physiological monitoring includes visual display of O2 saturation, pulse rate, blood pressure, continuous ECG tracing and respiratory parameters. Acetaminophen or an antihistamine may be given to prevent the mild febrile reaction that may occur in the early post-procedural period when Bleomycin is administered 46,59.

Technical pearls
1. Test the device and the electrodes prior to beginning the procedure.
2. Prep and drape following sterile technique principles.
3. Keep in mind the correct timing of drug administration and application of the electric current.
4. Protect osseous surfaces by tenting the skin.
5. Communicate with awake patients and assisting staff prior to delivery of the electric current.
6. Check electroporator traces and waves to confirm proper current delivery.
7. Do not overlap treating fields with normal tissue.
8. Dress wounds according to presenting symptoms.
ECT development and clinical applications in melanoma

Neumann in 1982 67 published the first paper regarding EP as a method to transfer genes into mammalian and bacterial cells. There were other in vitro 65,68 and in vivo studies in the early and late 1980s using EP in combination with drugs 69, but the first clinical trials demonstrating its effectiveness over Bleomycin alone for the treatment of a diversity of cutaneous tumors of the head and neck region were published in the early 1990s by Mir et al. 47 and Belehradek et al. 70 from the Institut Gustave Roussy in Villejuif, France. These first 8 patients were treated for squamous cell carcinoma and a complete response (CR) was observed in 57% of the lesions. These early publications encouraged other investigators to expand the principles of this technique to other tumor types including basal cell carcinomas, Kaposi sarcoma, and melanoma metastases 6,62,71-74. By the mid-1990s, Rudolf et al. 71 and Heller et al. 73 published the results of an initial small group (5) of melanoma patients that underwent treatment with ECT and IV Bleomycin. They reported overall response rates (OR) of 92% and 50% in 24 and 10 metastatic nodules, respectively 19. That same year, Glass et al. 75 from the University of South Florida in Tampa, USA, reported on the first study using ECT with IT Bleomycin in melanoma metastases, obtaining OR rates of 92% (78% CR and 14% PR) in 20 metastatic lesions, results similar to those obtained by the IV route in the previous studies. Later, in 1998, the same group of investigators published the results of a bigger cohort of patients with a variety of cutaneous and subcutaneous malignant nodules. Twelve of 34 patients had 84 metastatic melanoma nodules, with documented OR rates up to 99% (89% complete response (CR) and 10% partial response (PR)) 76. That same year, the combined data produced by five institutions in the USA, France and Slovenia were published by Mir et al. 39 In this study, twenty patients with metastatic melanoma lesions showed responses in 131 (92%) of 142 lesions, with a CR of 53% and a PR of 39%. A major finding of this study was that the results were comparable among the institutions even though their treatment protocols and the route used for administering the Bleomycin were not standardized. Additional small studies 56,77-79 using ECT with IT Bleomycin for melanoma lesions continued to show good responses, with OR rates of 71 to 100% (CR 23%-100% and PR 0%-62%). Rols et al. 80 in 2000 continued to demonstrate ORs of 93% using ECT and IV Bleomycin in the treatment of 54 metastatic melanoma nodules. Sersa et al. 81-82
introduced Cisplatin as a therapeutic option in 1998. In their studies they reported CR rates of 100% for 2 patients with 13 lesions treated with IT Cisplatin, but a low CR rate of 11% in 9 patients with 27 lesions treated using the IV route. Additional studies were published using this drug by the IT route only 61,82-83, with OR rates ranging from 81% to 100% (CR 0%-70% and PR 6%-100%). The main issue with most of the studies of the 1990s and early 2000s was the utilization of a variety of treatment protocols with different pulse parameters and pulse generators in combination with different electrode types and drug administration routes 6,39,44,47,52,56,58,63,70,72-73,75-78,81-88. But in 2006 the results of the pivotal prospective non-randomized multicenter ESOPE study 52 were published, providing recommendations for a standardized protocol for the procedure. The study included 102 patients, 61 evaluated for response and 41 for toxicity, respectively. The protocol allowed for administration of Bleomycin either IV or IT, or of Cisplatin IT, and included several histological types of lesions. Ninety-eight lesions from 20 melanoma patients were evaluated; the OR rate was 81% with a CR of 66%. The results confirmed the effectiveness of ECT in the treatment of lesions of different histology, demonstrating an 85% objective response rate and a CR rate of 74% for all lesions. These results were independent of the drug used or the route of administration chosen. Additionally, subsequent studies 11,49,66,89-92 of ECT evaluating its effect in the treatment of melanoma and other skin cancers continue to demonstrate the efficacy of the treatment, with response rates comparable to the earlier studies, ranging from 46-100%. Repeated treatments are feasible, as demonstrated by Campana et al. 90 and Quaglino et al. 11, producing additional clinical responses in patients who had initial non-responses or partial responses or who presented with new recurrent lesions. In the Campana 90 study, 34 patients out of 52 were diagnosed with unresectable melanoma, but the response rate for the entire cohort of patients treated with either IT or IV Bleomycin improved significantly, from a CR of 50% up to 83% after the third ECT treatment. There are approximately 60 institutions in Europe and in the United States that continue to investigate and offer ECT as a palliative treatment for a variety of unresectable tumors, including melanoma, and in an occasional report it has been used as an alternative curative therapy 93.

ECT in the treatment of advanced melanoma

In patients with unresectable recurrent or in-transit melanoma disease who are not candidates for standard surgical or medical treatment, ECT is now an important therapeutic option 18,43,49,75,84,94. These cases include those with unresectable disease due to an extensive number of nodules, or lesions located in compromising anatomic areas such as those around joints, nerves, the distal leg and previously operated fields. Encouraging results with long-term remissions have been documented 17,45,78,93,95-96 (Figure 3).

Fig. 3. A) An exophytic recurrent malignant melanoma after isolated limb perfusion. B) Six months following treatment by ECT.
Its effectiveness has been proven when providing palliative treatment for hemorrhagic and painful tumor nodules 56-57,79,97-98. This benefit is believed to be secondary to the "vascular lock" phenomenon: the vasoconstriction at the arteriolar level produces an immediate and dramatic reduction of perfusion of the malignant lesions, thus controlling the bleeding 18,45,98. ECT can also be useful as a neoadjuvant treatment for cytoreduction and organ-sparing treatment. Its benefit has been reported in patients with perineal melanoma treated with Cisplatin 85 and for a sphincter-saving procedure in anal melanoma 83. ECT can also be suitable for, and more tolerable by, those patients with a prohibitive surgical risk due to significant comorbidities 52, because the length of the procedure is relatively short 52 and patients are able to tolerate multiple sessions. In some anatomical locations ECT could provide good, and sometimes better, cosmetic results than surgery 93.

ECT in combination with cytokine therapy or gene-coding immunotherapies has advanced to clinical trials in advanced melanoma. In a phase II study of patients treated with injections of low-dose perilesional IL-2 and ECT with Bleomycin, the cytotoxic T lymphocyte response against known melanoma antigens initially decreased after treatment, reappearing when IL-2 was stopped. The tumor-specific peripheral T cells could be detected later in the lesions. The authors theorized that cell death produced by ECT may have attracted and primed dendritic cells with the tumor antigens, which later migrated to the draining lymph node basin and elicited a T cell response against those antigens expressed by the melanoma 100. The first human phase I trial of in vivo DNA electroporation of recurrent malignant melanoma was published in the USA in 2009 101. EP with dose escalation of an interleukin-12 plasmid (in vivo DNA EP) was used in the treatment of 24 patients with stage IIIB/C or IV disease. It resulted in significant necrosis of melanoma cells and regression of the majority of treated lesions. In addition, clinical regression of untreated lesions suggested the induction of systemic antitumor immune responses. The treatment was found to be safe with no significant reported toxicities. While additional studies in larger cohorts of patients are still necessary to establish the reproducibility of these results, these data show a new method of inducing potent tumour-specific immune responses individual to the patient. There are no studies comparing ECT with other surgical modalities; however, when taking into consideration the learning curve of other complex regional treatment modalities, the techniques of ECT are considered to be "user friendly" and easy to teach and learn. This technique can also be highly advantageous and useful in countries or hospitals where other modalities or resources are limited 45.

Advantages of ECT
1. Excellent local tumor control rates (80-90%).
2. Minimal risk of damage to healthy surrounding tissue.
3. Lower chemotherapy doses needed, minimizing the toxicity profile of the drug.
4. No protein denaturation, which could elicit an undesirable immune response against self-antigens.
5. High safety profile without severe side effects.
6. Good cost/benefit ratio: ambulatory setting, lower cost for drugs, and minimal equipment needed.
7. Treatments are well tolerated by patients.
8. Improvement in the perceived quality of life.
Patient selection and limitations of the procedure

The contraindications of ECT can be divided into drug-related and procedure-related.

A. Drug-related contraindications 60:
1. Known allergy to the drug to be administered.
2. Interstitial lung fibrosis, if Bleomycin is going to be used.
3. Kidney failure or limited renal function.
4. When the cumulative dose of Bleomycin has reached > 400,000 IU/m².

B. Procedure-related contraindications 43:
1. For safety reasons, ECT should not be used in patients with implanted electric devices such as pacemakers.
2. Patients who may carry a higher risk of bleeding, such as those on anticoagulants or with an increased INR and a platelet count < 70,000.

Limitations

With the currently available electrodes, ECT has limitations when treating deep-seated tumors 43. Tumors larger than 3 cm² appear to have lower response rates (CR 73%) to ECT 11,49,90 as compared to nodules smaller than 1 cm² (CR 98%). These findings were not affected by either the cutaneous or subcutaneous location of the nodules 11. When tumor nodules are located in irradiated or fibrotic tissues, needle electrode penetration may be problematic, with suboptimal delivery of the electrical current or drugs 90. Nonetheless, if optimal needle penetration is achieved, ECT is equally effective in irradiated as in non-irradiated tissue 52. If extensive disease (more than 15 lesions) is present, repeated sessions may be necessary. In aggressive disease, new cutaneous nodules may emerge while the patient is undergoing ECT, but palliative retreatment is worthwhile even though the systemic disease progresses rapidly 60. This treatment modality has not been studied in randomized trials against other treatment techniques, such as ablative and perfusion or infusion procedures, or radiation therapy. More studies with longer follow-up are still needed to evaluate disease-free survival and to compare ECT to surgical excision, not only in the palliative setting but as a curative alternative in those patients unsuitable or unfit for a surgical procedure 19.

Toxicity and side effects

ECT has a low toxicity profile with limited side effects as compared to other regional therapies such as HILP or ILP 41. However, one of the limitations in fully assessing the toxicities of the treatment is the inconsistency with which the large majority of the published studies document complications. The systemic dose of Bleomycin is one-twentieth of that used in the majority of chemotherapeutic regimens, and thus the systemic side effects appear to be limited to nausea 18,90. There have been two reported cases of post-procedure lipothymia 90. The most common local side effects reported by the majority of patients are pain (75%) and erythema limited to the tumor and surrounding treated tissue 11,45,52,66. Most of the patients considered those symptoms tolerable, as documented by the ESOPE study 52. The erythematous reaction usually recedes within a few days 11. Delayed wound healing that may take several weeks or months to resolve, epidermal erosions, and hematomas have been reported as rare events 66,73,75. The injection site reactions appear to be low (Type I and II) on the Wiebendirk toxicity scale 90.
Transient muscle spasms myoclonus, secondary to muscle stimulation by the electrical pulses, have been reported in 25% with lower intensity contractions in up to 78% of patients 52,66 .Some authors advocate the administration of diazepam to alleviate these particular symptoms 73,75 .Interestingly the majority of patients are willing to continue treatment if indicated, since the side effects are tolerable 19 . Equipment evolution Bioengineering developments and evolution of the technique continue to expand the applications of ECT as an alternative treatment for tumors that are inaccessible to current electrical probes.A redeveloped electroporator generator provides more flexibility to deliver the electric current at different phases of the cardiac cycle and the facility to connect several electrode probes around the tumor to deliver the electric pulses in synchrony.The other development is related to the type of electrodes.Longer array electrodes are now available to treat larger and deeper seated tumors, which in the past was a limitation of the procedure.These longer electrodes are insulated proximally to prevent short circuiting of current and to protect the normal tissues that are transgressed en-route to the tumor.These new devices have recently been applied clinically 19,[102][103] .Kos et al. 103 have proposed an algorithmic computer optimized analysis to treat deeper tumors, to minimize errors and to maximize treatment benefits.Another novelty is the creation of finger applicators which allows the application of electrodes into lesions located in difficult to reach anatomic locations, such as the inside of the oral cavity.These finger electrodes have been already tested in the treatment of melanoma of the oral mucosa, and head and neck regions, with complete tumor regression 90 .New endoluminal electrodes to reach internal lesions within the gastrointestinal tract have been used in animal models, as well as tumors transplanted into rabbit liver and murine models 87,[104][105][106] .These studies are encouraging and have demonstrated both in vitro and in vivo (human solid tumor masses in nude mouse models), that the use of flexible electrodes is safe, feasible and reproducible. There is still the need to create harder needle probes for use in those difficult to treat subcutaneous and cutaneous tumors.These needles must be capable of penetrating into hard fibrotic tissue, while minimizing the bleeding risks and maintaining an adequate electrical distance between the probes. Nanopulses Higher amplitude electric pulses or "nanopulses" are currently being evaluated.The use of shorter pulse durations in the nanosecond range are believed to create smaller pores that allow ions but not large molecules to penetrate the membrane.These higher electric fields increase the possibility of producing non resealable pores, thus producing the effect of irreversible electroporation, and consequently allowing the cells to lose their cytoplasm with concomitant cell death.This principle is being used for tumor ablation, palliation or both 43,[107][108][109] .However in the treatment of melanoma, the advantages of these higher electric fields are not immediately evident. 
ECT combination with gene transfer, immunology and nanomolecules

Several studies suggest that the immune system is also involved in the mechanisms of response to ECT treatment and that this could be exploited for systemic disease control. In a murine model 114, ECT followed by local CpG oligonucleotide injection enhanced the complete regression response of tumors from 43% to 100%, while also triggering a systemic antitumor phenomenon with specific immune memory. Activation of dendritic cells released from the tumors is believed to be involved in this response, with recruitment of CD11c and CD11b receptors and an increase of TLR9 expression. EP in combination with gene transfer is termed "gene electrotransfer". It uses gene-coding plasmids in order to transfer a combination of genes intracellularly, either to knock down the expression of a particular gene or to stimulate temporary patterns of gene expression. The technique allows for avoidance of the biohazard issues intrinsic to viral vectors 101,115-116. Scientists at the Cork Cancer Research Centre in Ireland have investigated, in an in vivo murine model, the application of EP and local gene therapy for malignant tumors. A plasmid coding for two immunogenes, granulocyte-macrophage colony-stimulating factor (GM-CSF) and the B7-1 co-stimulatory immune molecule, was delivered by EP. This resulted in the complete regression of the majority (60%) of non-immunogenic tumors, while eliciting a tumour-specific systemic response that hindered metastatic growth in the liver in 100% of the mice. This was a durable, potent response with tumor specificity. When the remaining non-responding tumors were excised, improved survival was observed compared to the control groups, suggesting that the use of neoadjuvant electroporation and gene therapy, given at an appropriate time interval prior to tumour excision or ablation, could prevent the surfacing of metastatic disease 117-119. Regulatory T cell depletion at the time of electrogene therapy improved complete response rates from 60% to near 100%, suggesting a potential for improving the efficacy of electroporation-based immunogene therapy. ECT with injection of TNF-α intra- or peritumorally and suboptimal doses of Bleomycin in mice might have a positive immunomodulatory effect, and possibly adds a systemic component to the localized ECT treatment 120. It has been noted that EP with IFN-α used to treat Mycosis Fungoides (a cutaneous lymphoma) lesions produced a 100% CR 121. The cytotoxic action of IFN-α was attributed to its increased tumoral concentration and the prolonged time of action produced by the EP. ECT with either IFN-α or TNF-α might permit the use of less toxic doses, while enhancing clinical local and systemic immunological response rates and minimizing systemic side effects 19.
ECT, radiotherapy and activation of bioreductive drugs

The "vascular lock" principle of EP has the potential to be used to "activate" bioreductive drugs such as Tirapazamine against neoplastic cells 43, owing to its disrupting effect on the tumor blood supply. In addition, ECT has been shown to have a synergistic effect with radiotherapy in preclinical investigations, opening the possibility of using it as a radiosensitising tool for the palliation of subcutaneous lesions 43,122. The continued development of improved diagnostic methods will allow for earlier diagnosis of metastases, and possibly open opportunities for in situ neoadjuvant treatment of higher-risk malignant melanomas by means of immunogene or cytokine therapies, hence establishing tumour-specific responses which could prevent recurrence and eradicate disseminated micrometastases.

Summary

ECT with Bleomycin or Cisplatin is an effective treatment in the palliative management of unresectable recurrent cutaneous or subcutaneous melanoma metastases or in-transit disease, with OR rates of approximately 80-90%. ECT should now be considered as part of the armamentarium for the treatment of locoregional advanced melanoma. The technology of ECT continues to evolve, allowing for the treatment of metastatic lesions in other organs or anatomic regions. The principles of EP are already being applied in the clinical setting for the delivery of targeted therapies such as gene transfer and immunotherapy. These therapies, along with ECT, have the potential not only for local but also for distant treatment of tumors such as those of malignant melanoma, by stimulating a self-driven immune response to achieve systemic control of the disease. However, studies evaluating long-term follow-up results of ECT in melanoma are still needed before considering it as an option for curative intent. ECT has been proven to be an excellent palliative option in the treatment of recurrent unresectable or in-transit disease.

Fig. 1. A) Growth curves of experimental cancers in mice, which demonstrate that electropermeabilisation or intratumoural bleomycin alone have no influence on tumour growth, but when used together completely ablate the tumour. B) Positive TUNEL stain 48 hours post ECT, indicating tumor cell death by apoptosis. C and D) Tumour before ECT showing normal cellularity (C) and 48 hours post ECT showing regions of denucleation (D).

Table 2. Summary of most relevant studies using ECT for unresectable or in-transit melanoma.
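As a companion to the Drug Dosage Recommendations section above, the per-nodule intratumoral doses reduce to a simple volume-banded calculation. The Python sketch below encodes the per-cm³ concentrations quoted there (Bleomycin 1,000/500/250 IU/cm³ and Cisplatin 2/1/0.5 mg/cm³); the function names, the example nodule volumes and the cumulative-dose reminder in the comments are illustrative assumptions and not part of the published ESOPE protocol text.

```python
def bleomycin_it_dose_iu(volume_cm3: float) -> float:
    """Intratumoral Bleomycin dose (IU) for one nodule, using the volume bands quoted above."""
    if volume_cm3 < 0.5:
        return 1000.0 * volume_cm3   # 1,000 IU/cm3 for nodules smaller than 0.5 cm3
    if volume_cm3 <= 1.0:
        return 500.0 * volume_cm3    # 500 IU/cm3 for nodules of 0.5-1 cm3
    return 250.0 * volume_cm3        # 250 IU/cm3 for nodules larger than 1 cm3


def cisplatin_it_dose_mg(volume_cm3: float) -> float:
    """Intratumoral Cisplatin dose (mg) for one nodule, using the volume bands quoted above."""
    if volume_cm3 < 0.5:
        return 2.0 * volume_cm3
    if volume_cm3 <= 1.0:
        return 1.0 * volume_cm3
    return 0.5 * volume_cm3


# Three hypothetical nodules of 0.3, 0.8 and 2.0 cm3.
for v in (0.3, 0.8, 2.0):
    print(f"{v} cm3 nodule -> Bleomycin {bleomycin_it_dose_iu(v):.0f} IU, "
          f"Cisplatin {cisplatin_it_dose_mg(v):.2f} mg")

# Reminder from the text: cumulative Bleomycin exposure should not exceed 400,000 IU/m2
# (lung fibrosis risk), with respiratory function testing once 60,000 IU/m2 is surpassed.
```

In this sketch a nodule of exactly 0.5 cm³ falls into the 500 IU/cm³ band, matching the text's "between 0.5 and 1 cm³" wording.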
2017-09-13T22:14:13.495Z
2011-10-05T00:00:00.000
{ "year": 2011, "sha1": "3171abe5766ce06a3e8687a3a2543f224e377775", "oa_license": "CCBY", "oa_url": "https://www.intechopen.com/citation-pdf-url/21343", "oa_status": "HYBRID", "pdf_src": "ScienceParseMerged", "pdf_hash": "0c253150888c64a5d36b90ea481973413b5c297b", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
221672598
pes2o/s2orc
v3-fos-license
An Outbreak of Kawasaki-like Disease in children during SARS-CoV- 2 Epidemic: No Surprise? Background and aim: Kawasaki disease is an acute systemic febrile illness of unknown aetiology, which usually affects children under 5 years of age. It is well known that Kawasaki disease is one of the most common causes of acquired heart diseases in children in the developed countries. Many studies, have suggested that heterogeneous infectious agents, such as common viruses, may trigger Kawasaki disease in young children with genetic background. Nowadays we are facing a pandemic caused by a Novel Coronavirus named SARS-CoV-2. Consequently, it could be possible that once exposed to this new coronavirus, some children, genetically predisposed, may mount an exaggerated inflammatory response which clinically manifests as Kawasaki Disease. Methods: from January to May 2020 a systematic search was performed on Pubmed for the following search terms: “COVID-19”, “children”, “SARS-CoV-2”, “complications”, “Kawasaki disease”, “cytokine storm”. Results: Usually, infants and children present milder symptoms of SARS-CoV-2 disease with a better outcome than adults. At variance, some children may be genetically disposed to a more robust inflammatory response to SARS-CoV-2, similar to Kawasaki disease. In fact, Kawasaki disease is the result of an abnormal immune response, in susceptible children, to an external trigger such as an infection. Thus, according to the pathogenesis of Kawasaki disease, paediatricians may expect an increase in cases of Kawasaki disease during the COVID-19 pandemic. (www.actabiomedica.it) Kawasaki disease Kawasaki disease (KD) is an acute vasculitis, which usually affects children under 5 years of age and leads to coronary artery aneurysms in approximately 25% of untreated cases. It has been reported worldwide and is the leading cause of acquired heart disease in children in developed countries. Table 1 and table 2 show the clinical criteria that are used to define KD. In Europe, KD is reported in 5-15/100 000 children under 5 years of age annually. In the US 19 per 100 000 children younger than five years are hospitalized with KD annually. The incidence of KD in northeast Asian countries such as Japan, South Korea, China and Taiwan is 10-30 times higher than in the US or Europe (1). Higher rates of KD in siblings of patients and twins suggest a genetic predisposition that may interact with a pathogenic agent in the environment (2). Evidence suggests that KD susceptibility and outcome, are influenced by several different genes and signalling pathways. In particular, family linkage studies have shown a relation between some genes, such as CASP3, HLA II, BLK, CD40 and the onset of the disease (3,4). Actually, the cause of KD remains unknown. Current literature suggests that some pathogens (particularly virus with RNA) that infect the upper respiratory tract, may be a trigger for the development of the disease (5). Infections associated with Kawasaki disease Infection has long been considered the main cause of KD. The infectious evidence of Kawasaki disease includes: • temporal clustering and seasonality, • geographical and epidemic clustering, • familial clustering, • a high association between Kawasaki disease and infectious disease surveillance, • age distribution, with highest incidence rates among children aged 6 months to 2-years who have low maternal antibodies (6). Studies have reported that bacterial agents may be related to KD because of superantigens. 
Bacteria isolated from patients with KD include Staphylococcus aureus, Streptococcus pyogenes, and Mycoplasma pneumonia (7)(8). Moreover, many studies have reported the role of viral infection in KD, including coronavirus (9), enterovirus (10), parainfluenza (11) and adenoviruses (12). In fact, according to a prospective study conducted by Chang et al., it was found that some infectious agents, that usually result in asymptomatic or mild infection, cause KD in a small group of genetically predisposed children. In their study, KD cases have significantly higher positive rates of PCR for various viruses including enterovirus, adenoviruses, rhinoviruses and pan-coronaviruses (6). The mechanism that explains the relationship between KD and viral infection remains unclear but it is possible that large amounts of cytokines produced by viral-infected cells, such as IL-1, IL-6 and IL-18, may damage the vascular endothelium resulting in severe cases in Kawasaki Disease Shock Syndrome (KDSS). KD and KDSS: role of infections Kawasaki Disease Shock Syndrome (KDSS) has been defined by Kanegaye in 2009. This syndrome is characterized by shock and hypotension requiring critical care support in patients with acute KD. These patients seem to have higher incidences of coronary artery aneurism and IVIG resistance (13). During the acute phase of KD, the immune system is activated with an increase of pro-inflammatory cytokines. Inflammatory cytokines may cause local and systemic damage. KD patients with IL-6 above 66.7 pg/ml, IL-10 above 20.85 pg/ml and IFN-γ above 8.35 pg/ml may have a higher risk to evolve into KDSS (14). Pathogenesis of KDSS or other organ involvement in KD is unknown, but inflammatory mediators that are involved in the host immune reaction after an infection, may be associated with KDSS. COVID-19 and KD On December 2019, a cluster of pneumonia cases of unknown aetiology has been reported in Wuhan, Table 1. Classic KD is diagnosed in the presence of fever for at least five days with at least four of the five principal features 1 . Diagnosis of Classic KD • Erythema and cracking of lips, strawberry tongue and/or erythema of oral and pharyngeal mucosa • Bilateral conjunctival injection • Maculopapular rash • Erythema and edema of the hands and feet and/or periungual desquamation • Cervical lymphadenopathy (> 1,5 cm diameter), usually unilateral Table 2. The diagnosis of incomplete KD should be considered in any infant or child with prolonged unexplained fever, fewer than 4 of the principal clinical findings, and compatible laboratory or echocardiographic findings 1 . Suspected Incomplete KD • Children with fever > 5 days and 2 or 3 compatible clinical criteria or infants with fever for 7 days without explanation (15). Several countries affected by COVID-19 pandemic recently have reported cases of children that have been hospitalised due to an inflammatory multisystem syndrome characterized by signs and symptoms of Kawasaki disease (KD) and KDSS with cardiac involvement. A temporal association with SARS-CoV-2 infection has been supposed because some of the children that have been tested for SARS-CoV-2 infection were positive by polymerase chain reaction (PCR) or serology (16). Particularly, Italy has reported an unusually high number of children with signs and symptoms of KD in paediatric intensive care. 
According to Verdoni et al., a high number of Kawasaki-like disease cases have been reported in Bergamo, with a monthly incidence at least 30 times greater than the monthly incidence of the previous 5 years (17).

Discussion
Recently, several children with a novel multisystem inflammatory disease similar to toxic shock syndrome (TSS) and atypical Kawasaki disease (KD), with proven SARS-CoV-2 infection, have been observed in the UK. Whittaker et al. described the features of 58 children admitted to eight UK hospitals for 'paediatric inflammatory multisystem syndrome temporally associated with SARS-CoV-2 infection' between 23 March and 16 May 2020. Seventy-eight percent of the patients had evidence of prior or current SARS-CoV-2 infection. Twenty-nine children developed shock, often associated with clinical, echocardiographic and laboratory evidence of myocardial injury; seven children fulfilled the American Heart Association diagnostic criteria for KD (18). Likewise, an increased frequency of KD has been reported in Italy (17). According to the Italian experience in Bergamo, the clinical and biochemical characteristics of the 10 children admitted to the emergency room during the COVID-19 pandemic differed from those of the previous cohort of patients. From a clinical perspective, they were older and had meningeal signs as well as cardiovascular, respiratory and gastrointestinal involvement. From a biochemical perspective, they had thrombocytopenia, leukopenia with marked lymphopenia and increased ferritin, signs of macrophage activation syndrome (MAS). Moreover, these children had a more severe disease course, with resistance to intravenous immunoglobulin and the need for steroids. The monthly incidence of these KD-like cases has been at least 30 times greater than that observed for KD in the same region across the previous 5 years (17). The SARS-CoV-2 pro-inflammatory syndrome, the so-called "cytokine storm", has already been reported in adults, who present a wide range of signs and symptoms characterized by fever, lymphopenia, and increases in transaminases, D-dimer, ferritin, and lactate dehydrogenase (19). Moreover, the epidemiology of KD supports the idea that infections may cause KD in a group of genetically predisposed children. KD seasonality and well-documented Japanese epidemics with wave-like spread support an infectious trigger (20). According to Rowley et al., the prevalence of cytotoxic T cells, the upregulation of interferon pathway genes and the presence of CD8 T cells in the inflammatory infiltrate of the coronary arteries of children who have died of KD are suggestive of a viral aetiology (21). Innate immunity is the first line of defence against infectious agents, but its activation is also accompanied by an inflammatory reaction. KD could be associated with a dysregulation of the innate immune response. In fact, many microbes may stimulate immune cells through interaction with specific receptors, triggering an innate immune response with the production of inflammatory cytokines. However, while a large number of microorganisms may cause KD, the prevalence of KD in children is limited, suggesting that the genetic pattern may influence disease susceptibility. Molecular data show that many genes with KD-associated polymorphisms are responsible for the modulation of inflammatory responses. Thus, the pathogenesis of KD may be explained by a dysregulation of the immune response to infectious stimuli (32).
Consequently, KD may be considered a stereotyped pattern of reaction of the patient to different aetiological factors. According to a recent paper by Ravelli et al., the occurrence of a Kawasaki-like disease in association with SARS-CoV-2 infection suggests that KD is not a disease, but rather a syndrome, whose main features depend on the characteristics of the infectious agent as well as on the immune response of the patient (33).

Conclusion
The pathogenesis of KD seems to be explained by the interaction between an environmental trigger, such as an infection, and the development of an exuberant immune response in a susceptible patient. This would explain why an increase in the number of cases of KD should be expected during a pandemic outbreak. Moreover, according to recent studies, SARS-CoV-2 has a particular tropism for blood vessels, with the production of many inflammatory mediators that would also explain the exuberant systemic response, confirming the dual relevance of both the genetic predisposition underlying the immune response and the characteristics of the pathogen in the pathogenesis of Kawasaki disease (or syndrome).
2020-09-15T13:05:42.764Z
2020-09-07T00:00:00.000
{ "year": 2020, "sha1": "99c01befdd116412a15da8f36ee5283ddbb50cc4", "oa_license": "CCBYNCSA", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "befd93ec6b348681c304ee95b1448255380959ed", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
248930923
pes2o/s2orc
v3-fos-license
A fenestrated, double-barrel technique for proximal reintervention after open or endovascular abdominal aortic aneurysm repair Objective Proximal endovascular reintervention after prior endovascular aortic repair (EVAR) or open abdominal aortic aneurysm repair (OR) can be challenging due to the short distance to the visceral branches. We present a novel solution to allow the use of the commercially available ZFEN device using a double-barrel, kissing-limb technique. Methods Patients who underwent fenestrated repair for proximal failure after EVAR or OR were identified. The ZFEN device is deployed above the prior graft flow divider. Once the visceral branches are secured, kissing limbs are used to connect with the prior graft limbs. The distal diameter of the standard ZFEN is 24 mm, accommodating two 20 mm components according to the formula 2πDLIMB = πDZFEN + 2DZFEN. Results Of 235 patients who underwent repair using ZFEN from 2012 to 2021 at a single institution, 28 were treated for proximal failure of prior repairs, with 13 treated using the double-barrel technique (8 EVAR, 5 OR). The distance from the flow divider to the lowest renal artery was 67 ± 24.4 mm (range, 39-128 mm), and the distance to the superior mesenteric artery (SMA) was 87 ± 30.5 mm (range, 60-164 mm). Technical success was 100%. Seven patients had standard ZFEN builds (2 renal small fenestrations, SMA large fen/scallop). The minimum distance to the lowest renal artery and SMA to accommodate a standard ZFEN build was 56 and 60 mm, respectively. Four patients required adjunctive snorkel grafts and two required laser fenestrations. Two patients had gutter leaks at 1 month that self-resolved; one patient developed a late type 1a endoleak. Freedom from reintervention was 90%, 72%, and 48% at 1, 2, and 3 years, respectively. Conclusions This double-barrel technique allows for distal seal of commercial ZFEN devices into prior open or endovascular repairs with good technical success. Long-term outcomes remain to be quantified. Repair of infrarenal abdominal aortic aneurysms can be achieved by either open surgical repair (OR) or endovascular aortic repair (EVAR); the latter has become increasingly prevalent in recent years. 1 Failure at the proximal end of the repair can occur with either approach, resulting from technical failures, graft-related issues, or continued aneurysmal degeneration. [2][3][4][5][6] This clinical scenario can be associated with important and potentially devastating outcomes such as anastomotic pseudoaneurysms in OR patients, and type 1a endoleak with sac expansion and rupture after EVAR. 7 With patients surviving longer than ever after abdominal aortic aneurysm repair, these late complications are becoming commonplace in modern aortic practice. 8 Endovascular intervention to address proximal failure of prior OR/EVAR is a challenging problem due to encroachment on the visceral aortic segment. Any repair at this level needs to incorporate the renovisceral vessels using either branched or fenestrated devices 5,6 or parallel grafting techniques. 9 One of the technical issues is connecting the new endovascular graft to the prior repair, made difficult by the short distance between the lowest visceral branches and the flow divider of the prior EVAR or aorto-bi-iliac graft. Traditionally, the main body of an open surgical graft was trimmed as short as possible before performing the proximal anastomosis; firstgeneration EVAR devices were modeled after this, with very short distances to the flow divider as well. 
This precludes the use of most bifurcated endovascular grafts for revision and does not allow for adequate overlap and seal between the new graft and the prior implant. The Cook Zenith Fenestrated device (ZFEN; Cook Medical) is the only commercially available fenestrated device in the United States. Although it is indicated for repair of short neck and juxtarenal aneurysms and could potentially be used for such proximal failures, several design restrictions and its need for a distal bifurcated piece limit its utility. Physician-modified endografts (PMEGs) may be used, though subject to institutional restrictions and surgeon familiarity with this technique, and custom fenestrated/branched (F/BEVAR) devices are not universally available. We therefore describe and report outcomes on a novel technique to overcome some of these obstacles, which allows for the use of ZFEN in this scenario using a double-barrel, kissinglimb technique to bridge to the prior repair with sufficient overlap and seal. We also attempt to define the minimum required distance from the flow divider to the visceral branches to employ this strategy for fenestrated repair of prior EVAR or OR using the commercially available ZFEN graft. METHODS We retrospectively reviewed our prospectively maintained, institutional ZFEN database to identify patients in whom FEVAR was performed for proximal failure of prior endovascular or open infrarenal repair. The patients treated with the double-barrel technique were then analyzed. Technical success was defined as successful implantation of the fenestrated device, cannulation and stenting of all intended visceral targets, and bridging to the prior repair with the double-barrel stents, as well as successful aneurysm exclusion. The primary outcome measures were endoleak and freedom from reintervention. Follow-up imaging was obtained using our standard protocol, with computed tomography angiogram and/or duplex ultrasound imaging performed at 30 days, 6 months, 1 year, and yearly thereafter. The study was approved by our local institutional review board, and because of the retrospective nature of the study, informed consent was not required. We also attempted to define the minimum distance between the visceral arteries and the flow divider of the prior repair, in order to generalize the applicability of this technique. The diameter ratio was calculated at the first postoperative computed tomography scan, using the methods described by Groot Jebbink et al, 10 to estimate the "fit" of the double-barrel stents into the main device; the ratio is calculated by dividing the major axis length by the minor, with larger ratios representing a more elliptical shape. 10 We also used our previously described formula to assure that two barrels fit inside a larger barrel without gutter leak either by circumference covered or area filled. 11,12 Statistical analysis was performed using Stata 16.0. Descriptive statistics were used to analyze patient demographics, comorbidities, and clinical outcomes. All measurements were performed using 3D rendering software (Intuition; TeraRecon) by the same user (JRS) to limit interobserver variability. Description of technique. The fenestrated device was generally designed such that its distal end would sit just above the flow divider of the prior repair. "Standard build" was defined as having two small fenestrations for the renal arteries, which were bridged with covered stents, and an unstented large fenestration or scallop for the superior mesenteric artery (SMA). 
Other configurations were referred to as "nonstandard builds." After obtaining bilateral femoral access, the ZFEN main body was advanced into position and deployed. The visceral targets were then cannulated and stented from the contralateral femoral access using standard methods. Once the visceral branches were secured and the proximal graft had been balloon-molded, attention was turned to bridging to the prior repair. The distal diameter of the commercially available ZFEN device is 24 mm, which accommodates two 20 mm limbs according to the formula 2πDLIMB = πDZFEN + 2DZFEN and leaves no gutters (Fig 1). To fill the space completely, the left side of this equation must be equal to or greater than the right. The two limbs were positioned side by side and deployed simultaneously to create the double-barrel configuration (Fig 2, A). A kissing-balloon technique can then be used to simultaneously mold the limbs to ensure equal visual positioning of the two limbs (Fig 2, B). Finally, each side is extended as needed to fully seal into the prior repair. A 3D reconstruction of the final repair is shown in Fig 3. Patients were all discharged on dual antiplatelet therapy for 6 weeks.

RESULTS
Of 235 patients who underwent repair using ZFEN between 2012 and 2021, 28 (11.9%) were treated for proximal failure of a prior repair. The double-barrel technique was used in 13 of 28 of these patients (46.4%), though more recently this has become the primary strategy for this scenario and was used in 11 of the last 12 cases. Basic demographics of the double-barrel cohort are outlined in the Table.

Anatomic factors and devices. The distance from the flow divider of the prior repair to the lowest renal artery was 67 ± 24.4 mm (range, 39-128 mm), and the distance to the SMA was 87 ± 30.5 mm (range, 60-164 mm). Seven patients had standard ZFEN builds, whereas four patients required adjunctive snorkel grafts 13 and two required in situ laser fenestration 14 for one of the renals. Among patients with standard builds, the shortest distance to the lowest renal artery and the SMA was 56 and 60 mm, respectively. Patients with standard ZFEN builds were treated for juxtarenal aneurysms, whereas those requiring adjunctive chimneys or laser fenestrations were treated for paravisceral aneurysm extent. In the latter, the adjunctive maneuvers were planned ahead of time and performed on the lower renal(s) to move the ZFEN higher and use the fenestrations for the more proximal branches. No true thoracoabdominal aneurysms were included in this series. For the double-barrel components, in the first patient two 20 × 55 mm ESLE Iliac Leg Extensions (Cook Medical) were used; two patients then had flipped 20 mm Excluder Contralateral Leg Endoprostheses (W.L. Gore), 15 and the remaining 10 patients were treated with 20 × 82 mm Endurant Iliac Limb Extensions (Medtronic). Diameter ratios of the double-barrel configuration ranged between 1.60 (more spherical) and 2.11 (more elliptical), with a mean ratio of 1.92 ± 0.14. There was no perioperative or 30-day mortality. Two patients (15.4%) had gutter leaks at 1-month follow-up, both of which resolved at the subsequent 6-month scan. One patient (7.7%) developed a late type 1a endoleak at 6 months, which is being followed as the patient is of advanced age and there has been no associated sac growth. Kaplan-Meier estimated freedom from reintervention was 90%, 72%, and 48% at 1, 2, and 3 years, respectively, with all reinterventions related to the renal branches.
There was one occlusion of a renal chimney, which was not able to be salvaged. All other reinterventions were for restenosis, which were successful in restoring patency. The overall primary and secondary patency of the renal branches at the last follow-up was 92% and 100%, respectively. No patients required temporary or permanent renal replacement therapy. There were no limb occlusions noted. DISCUSSION We describe a novel technique for endovascular rescue of proximal failure in prior infrarenal aneurysm repair. The technique allows for the use of the commercial ZFEN device, in both prior open and endovascular repairs, so long as the fenestrated component can land above the flow divider. We defined this minimum required distance from the flow divider to the lowest renal artery as approximately 56 mm, though the actual distance will vary based on the size of the fenestrated graft, the number of sealing stents, and the location of the lower fenestration boundary. We achieved technical success in all cases, and reasonable short-term outcomes with no perioperative mortality and only two self-limited gutter leaks between the double-barrel limbs. The ability to seal the distal 24 mm body of the commercially available ZFEN device via these double barrels expands its use to include cases of proximal degeneration of prior open or endovascular repair. Proximal failure of infrarenal aortic repair is a significant problem in modern vascular practice and can be caused by either device failure or the progression of disease. 13,16 Unchecked type 1a endoleaks from loss of proximal seal in EVAR can lead to continued pressurization of the aneurysm sac and ultimately rupture and death. 7 Although open repair remains a viable option in fit patients, it can be associated with significant morbidity and mortality, 17 making endovascular repair an attractive alternative. Good results have been described with F/BEVAR 18 in this context, but in the United States, these devices are limited to those with investigational device exemptions from the Food and Drug Administration. Interventionalists in the United States are therefore limited to alternative strategies with parallel grafting 9 or more recently using back table PMEG devices. Although PMEGs can certainly be used to treat these failures, data are lacking due to the inability to publish results in peer-reviewed journals. In addition, some hospitals do not allow physician modifications, as they may not be reimbursable and potentially open the door to medicolegal exposure. Our technique takes advantage of the Food and Drug Administration-approved and widely available ZFEN device, with which U.S. physicians are already familiar and for which good long-term data are available. 19 So long as the ZFEN device lands above the flow divider, the workflow for completing the fenestrated portion of the procedure is identical to that for standard ZFEN implantation. Bridging to the prior device is also very straightforward, provided two operators are able to deploy the double-barrel limbs simultaneously and at the same level. We have transitioned to using 20 Â 82 mm straight Medtronic limbs, particularly as their deployment is easily controlled for accuracy. The length is ideal also as it traverses across the flow divider into ipsilateral and contralateral gates of all prior infrarenal devices or open repair constructs. These self-expanding limbs fill the distal ZFEN body well and generally make a good elliptical "Double-D" shape without significant gutters (Fig 1). 
Fortunately, the two gutter leaks seen at 6 months both spontaneously resolved. Although this is a limited series, it is encouraging that these did not persist and lead to downstream sequelae. Particularly because this technique is an alternative to proximal extension with parallel grafts, one may argue that we are simply trading one gutter issue for another. Indeed, the natural history of parallel graft gutters may also be relatively benign, 20 but a major advantage of this technique over traditional parallel strategies is that the renovisceral branches can be secured via fenestrations, avoiding some of the concerns related to chimney graft occlusions. 21,22 Late explantation for failure of this configuration always remains an option, but would be extremely difficult and unlikely to be feasible in the majority of these patients. In calculating the optimal size for the double-barrel components, we previously developed the equation given above (2πDLIMB = πDZFEN + 2DZFEN). 11 When the left side of this equation is equal to or greater in value than the right, then the conformable self-expanding stents should fill the lumen of the larger device into which they are placed. We have also found that this general equation can be adapted to any similar situation where a double-barrel configuration may be needed. Just as two 20 mm limbs are needed to fill the 24 mm ZFEN lumen, this equation suggests that the devices needed are perhaps somewhat larger than one may predict. For example, two 14 mm devices would be needed to fill a 16 mm lumen, and two 28 mm devices to fill a 34 mm lumen. The clinical applicability of this remains to be seen, but it is a concept that could prove useful in various scenarios. Our study has several limitations, primarily related to its retrospective nature and small size. Although we were able to achieve good technical and short-term success, the long-term durability of this configuration is unknown. The results from our high-volume single institution may also not be widely applicable to all users and hospitals, but we do feel that for experienced ZFEN users, this is a useful technique for these difficult situations. Caution should be taken in situations of severe angulation and tortuous neck anatomy. Finally, this technique helps solve this specific issue given the current device restrictions in the United States, but may become obsolete in the future with the widespread availability of custom F/BEVAR devices or if a converter from the commercially available device into a bifurcated graft becomes available.

CONCLUSIONS
The double-barrel technique allows for repair of proximal failure of prior open and endovascular infrarenal aortic aneurysm repair using the commercially available ZFEN device, connecting it to any prior endovascular or open repair graft. Although technical and short-term success was high in this small cohort, more data are needed to quantify long-term outcomes.
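To make the sizing relation discussed above concrete, the short sketch below is our illustration, not part of the original report: the function names, and the assumption that devices are chosen in whole-millimetre sizes, are ours. It checks the fill condition 2πDLIMB ≥ πDLUMEN + 2DLUMEN and derives the smallest device diameter for a given lumen, reproducing the 24 mm → 20 mm, 16 mm → 14 mm and 34 mm → 28 mm pairings quoted in the text.

import math

def double_barrel_fills(d_device_mm: float, d_lumen_mm: float) -> bool:
    # Two devices of diameter d_device_mm fill a lumen of diameter d_lumen_mm
    # when their combined circumference covers the "Double-D" perimeter:
    # 2*pi*D_device >= pi*D_lumen + 2*D_lumen.
    return 2 * math.pi * d_device_mm >= math.pi * d_lumen_mm + 2 * d_lumen_mm

def min_device_diameter(d_lumen_mm: float) -> int:
    # Rearranging the relation gives D_device >= D_lumen * (pi + 2) / (2 * pi),
    # roughly 0.82 * D_lumen; round up to the next whole millimetre.
    return math.ceil(d_lumen_mm * (math.pi + 2) / (2 * math.pi))

print(double_barrel_fills(20, 24))  # True: two 20 mm limbs fill the 24 mm ZFEN body
print(double_barrel_fills(18, 24))  # False: smaller limbs would leave gutters
for lumen in (16, 24, 34):
    print(lumen, "mm lumen ->", min_device_diameter(lumen), "mm devices")
    # 16 mm lumen -> 14 mm devices; 24 mm lumen -> 20 mm devices; 34 mm lumen -> 28 mm devices

The factor of roughly 0.82 also illustrates the authors' observation that the devices needed are "perhaps somewhat larger than one may predict."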
2022-05-21T15:21:40.838Z
2022-06-01T00:00:00.000
{ "year": 2022, "sha1": "cdb11c4a46f2f38295968e1656f78ad4be768dc6", "oa_license": "CCBYNCND", "oa_url": "http://www.jvscit.org/article/S2468428722002398/pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "199796fd1f104c63e212d9c1c41036291ef16099", "s2fieldsofstudy": [ "Medicine", "Engineering" ], "extfieldsofstudy": [ "Medicine" ] }
253260315
pes2o/s2orc
v3-fos-license
Interleukin-6 and indoleamine-2,3-dioxygenase as potential adjuvant targets for Papillomavirus-related tumors immunotherapy

High-risk Human papillomavirus (HPV) infections represent an important public health issue. Nearly all cervical malignancies are associated with HPV, as are a range of other female and male cancers, such as anogenital and oropharyngeal cancers. Aiming to treat HPV-related tumors, our group developed vaccines based on the genetic fusion of the HSV-1 glycoprotein D (gD) with the HPV-16 E7 oncoprotein (gDE7 vaccines). Despite the promising antitumor results reached by gDE7 vaccines in mice, combined therapies may increase the therapeutic effects by improving antitumor responses and halting immune suppressive mechanisms elicited by tumor cells. Among cancer immunosuppressive mechanisms, the indoleamine-2,3-dioxygenase (IDO) enzyme and interleukin-6 (IL-6) stand out in HPV-related tumors. Since IL-6 sustains constitutive IDO expression, here we evaluated the therapeutic outcomes achieved by the combination of active immunotherapy based on a gDE7 protein-based vaccine with adjuvant treatments involving blocking IDO, either by use of IDO inhibitors or IL-6 knockout mice. C57BL/6 wild-type (WT) and transgenic IL-6-/- mice were engrafted with HPV16-E6/E7-expressing TC-1 cells and treated with 1-methyl-tryptophan isoforms (D-1MT and DL-1MT), capable of inhibiting IDO. In vitro, the 1MT isoforms reduced IL-6 gene expression and IL-6 secretion in TC-1 cells. In vivo, the multi-targeted treatment improved the antitumor efficacy of the gDE7-based protein vaccine. Although the gDE7 immunization achieves partial tumor mass control in combination with D-1MT or DL-1MT in WT mice or when administered in IL-6-/- mice, the combination of gDE7 and 1MT in IL-6-/- mice further enhanced the antitumor effects, reaching total tumor rejection. The outcome of the combined therapy was associated with an increased frequency of activated dendritic cells and decreased frequencies of intratumoral polymorphonuclear myeloid-derived suppressor cells and T regulatory cells. In conclusion, the present study demonstrated that IL-6 and IDO negatively contribute to the activation of immune cells, particularly dendritic cells, reducing gDE7 vaccine-induced protective immune responses and, therefore, opening perspectives for the use of combined strategies based on inhibition of IL-6 and IDO as immunometabolic adjuvants for immunotherapies against HPV-related tumors.
Introduction
Human papillomavirus (HPV) is the most common cause of sexually transmitted illnesses worldwide (1). Nearly all cervical malignancies and a range of other female and male cancers, such as anogenital and oropharyngeal, are associated with high-risk HPV, especially HPV-16 and HPV-18. Despite the preventability of HPV-related malignancies by prophylactic vaccines, there is still a high global incidence, particularly in low- and lower-middle-income countries. In this scenario, cervical cancer is the ninth most prevalent cancer worldwide and the fourth in terms of incidence and mortality in women (2). The conventional treatment of cervical cancer depends on the extent of the disease and on fertility-sparing considerations, and may include surgery, radiotherapy, and/or chemotherapy (3)(4)(5). However, even following usual treatments, recurrence of cervical cancer is still prevalent (3,6), emphasizing the need for novel curative antitumor approaches. Therapeutic failure is mainly attributed to the systemic and local immunosuppression induced by the oncological disease, which depends on tumor and host factors and involves different inflammatory molecules (7). Interleukin-6 (IL-6) is one such inflammatory molecule produced by many cell types, including tumor cells. IL-6 plays a crucial role in the proliferation and differentiation of malignant cells and is known to be implicated in the pathogenesis of HPV+ cervical cancer (8). Compared to the normal cervix and cervical intraepithelial neoplasia (CIN), the expression of IL-6 in cervical cancer was considerably higher (9). Furthermore, circulating IL-6 was found to be a risk indicator, since elevated serum IL-6 levels correlate with advanced stages of cervical cancer (10,11). Regarding immunomodulation, while IL-6 promotes the recruitment of myeloid-derived suppressor cells (MDSC) into the tumor microenvironment, it hampers Th1 lymphocyte infiltration (12,13) and dendritic cell activation (14). The autocrine activation of IL-6 is responsible for STAT3 phosphorylation in HPV-related malignancies, particularly in cervical cancer (15). Interestingly, the IL-6 signaling loop of a self-sustaining IL-6/STAT3/AHR axis is one of the mechanisms that maintain the constitutive expression of the indoleamine 2,3-dioxygenase (IDO) enzyme in tumor cells (16).
IDO has gotten attention as one of the many mediators of tumor immune escape (17), since it degrades the essential amino acid tryptophan, creating a tryptophan-deficient microenvironment with critical immunological outcomes. IDO-expressing dendritic cells mediate T-cell suppression and/or tolerance, while low tryptophan concentration reduces T-cell-mediated responses by inhibiting T-cell proliferation and activation (18,19). Importantly, cervical cancer expresses one of the highest amounts of IDO (20-22), and both IL-6 and IDO are negative prognostic markers in patients diagnosed with this neoplasm (8,23), highlighting the IDO/IL6 axis as an important selfimmunoregulatory network in cervical cancer. Consequently, blocking or inhibiting IL-6 signaling pathways may provide an interestingly therapeutic target to re-sensitize cancer cells to immunotherapies. Regarding biotechnology breakthroughs, immunotherapy either based on passive administration of monoclonal antibodies or active immunization with vaccines has become a powerful ally to fight cancer. Immuno-oncological treatments aim to boost the immune system to recognize and attack cancer cells, as well as target immunosuppressive checkpoints to restore an immunological effector milieu (24). Over the last years, our group developed vaccines based on genetic fusions of HSV-1 glycoprotein D (gD) and HPV-16 oncoproteins aiming to treat HPV-related tumors and demonstrated the promising antitumor effects associated with combined adjuvant therapies in tumorbearing mice (25-27). Our present study reports the testing of therapeutic adjuvant strategies focusing on IL-6 and IDO in combination with a protein-based (gDE7) antitumor vaccine. Tumor cell line and culture conditions The TC-1 cell line (28) was kindly provided by Dr. T.C. Wu from John Hopkins University in Baltimore, MD, USA. The TC-1 cells were cultured as previously described (27) and harvested at 90% confluency for subculture procedures, in vitro experiments, and in vivo assays. As a quality control, the expression of the oncoprotein E7 was confirmed by RT-PCR (data not shown), and cells were frequently tested for the absence of Mycoplasma spp. Mice strains and tumor cell line implantation Female wild-type (WT) C57BL/6 mice (6-8 weeks old) were purchased from the Faculty of Veterinary Medicine and Zootechnics of the University of São Paulo (USP). Male or female (on demand) IL-6 gene knocked mice (IL-6 -/-) were supplied by the animal facility unit of the Department of Immunology of the University of São Paulo. Animals were allowed free access to water and food and provided with a 12h light/dark cycle, at 20-26°C temperature. Mice experiments were performed under approved protocols by the ethics committee for animal experimentation (protocol number CEUA 8572030918) and followed the standard rules approved by the National Council for Control of Animal Experimentation (CONCEA). The TC-1 cells were harvested at 90% confluency and transplanted into mice as previously described (27), at a concentration of 1x10 5 cells/100µL/animal on day 0 (D0). Mice were considered tumor-bearing when tumors became palpable (7-10 days) and were euthanized when tumors reached 15mm in diameter or if they showed signs of distress (grimace scales). Mice gDE7-based immunotherapy and IDO inhibitors (1MT) treatment The therapeutic gDE7-based vaccine was administered following a regimen of two subcutaneous immunizations at a week interval, as previously described (27). 
Each dose contained 30 µg of the gDE7 protein, diluted in PBS, in a total volume of 100 µL, and inoculated at the right rear flank region of mice. Animals were immunized with gDE7 seven (D7) and fourteen (D14) days after TC-1 cell engraftment (D0). Treatment with oral administered D-1MT or DL-1MT began two days (day 9 -D9) after the first gDE7 immunization and lasted four weeks until day 36 (D36) for mice treated every day with 1MT ( Figure 1A) or until day 37 (D37) for mice treated every other day with 1MT ( Figure 1B). The D-1MT and the DL-1MT were administered to the animals at 8mg animal -1 every day or 10 mg animal -1 every other day, dissolved in a mixture of 0.5% tween-80, 0.5% methylcellulose in sterile Milli-Q water, being administered 100µL/animal per gavage. The antitumor effect assessment The single or combined treatment outcome was evaluated by tumor mass volume, mice survival, and tumor-free mice. Tumor volume was plotted up to day 44 (D44). The "monitoring endpoint" of each group could be different since we chose as the "endpoint data" a survival rate of at least 80% of each mice group. Both survival and tumor-free mice were assessed up to day 60 (D60). The formula 1/2 [(length)^2 × width] was used to determine tumor volume. Statistical analysis Statistical analyses were performed using GraphPad-Prism software. The analysis was performed using the unpaired T-test, One-Way ANOVA, or Two-Way ANOVA and the results were confirmed through multiple comparisons by Turkey's test or Sidak's test, according to the GraphPad-Prism software recommendation. Survival curves were compared using the logrank (Mantel-Cox) test. Appropriate methods were indicated in the legends. Values of p < 0.05 were considered significant. Results Mice treated with gDE7 and 1MT isoforms promote partial tumor mass control according to the administration regimen In previous work, we showed that the combination of gDE7 with 1MT partially controls the growth of TC-1 cells in mice (27). Since gDE7 conferred complete antitumor protection in IDO -/knocked mice (27), here we compare two therapeutic regimens in wild-type mice with administration of 1MT isoforms every day ( Figure 1A), and every other day ( Figure 1B). Notably, administration of 1MT isoforms every day did not improve treatment outcomes (Figures 1C, D). On the other hand, administration of D-1MT or DL-1MT every other day improved the therapeutic antitumor effects of the gDE7-based immunotherapy ( Figures 1C, D). Regarding mice survival (Figures 2A, B) and tumor-free outcome ( Figures 2C, D), mice submitted to vaccination and treated with 1MT every other day ( Figures 2B, D) outperformed the group that was treated every day (Figures 2A, C). Mice treated with DL-1MT showed higher survival rates ( Figure 2B) and tumor-free conditions ( Figure 2D). Importantly, when 1MT therapy (D-1MT or DL-1MT) is interrupted, there was a decline in tumor growth control ( Figures 1C, D), as also indicated by the mice survival rate after day 40 (Figures 2A, B). Given that every other day treatment with IDO inhibitors had a better outcome, we proceeded the experiments using solely this treatment strategy. Therefore, we next investigate the tumor-infiltrating immune cells population (time point analyzed -D21) to evaluate the immunological mechanism triggered by the chosen therapeutic approach ( Figure 2E). 
Mice immunized with gDE7 and those also treated with IDO inhibitors (D-1MT or DL-1MT) showed similar frequencies of CD45 + cells, PMN-MDSC and dendritic cells ( Figures 2F-H). Regarding T cell population, although only gDE7 immunization induced higher rates of intratumoral CD8 + T cells ( Figure 2I), immunization with gDE7 with or without IDO inhibitors leads to increased frequency of E7-specific CD8 + IFN-+ T cells ( Figure 2J). Interestingly, higher rates of intratumoral CD4 + T cells were found in mice immunized with gDE7 ( Figure 2K), but only the combined therapy was able to decrease Treg population ( Figure 2L). These findings suggest that the antitumor effects of IDO inhibitors, when employed as immunometabolic adjuvants, depend on the administration regimen. Moreover, when combined with immunotherapy, IDO inhibitors have a beneficial effect on Treg tumor-infiltration. IL-6 expression promotes tumor growth and negatively impacts gDE7 vaccine efficacy Next, we assessed how IL-6 affects tumor development ( Figure 3A) and tumor-infiltrating immune cells (Figures 3B-F) (see Figure 2E for gate strategy) in IL-6 -/mice engrafted with TC-1 cells. Tumor growth was significantly reduced in IL-6 -/mice when compared to WT mice ( Figure 3A). Furthermore, the frequency of immune cells (CD45 + cells) in the tumor microenvironment was increased in IL-6 -/animals with a higher rate of intratumoral dendritic cells (DCs) compared to WT mice ( Figures 3B, C), but no significant differences were observed in the frequencies of intratumoral CD8 + and CD4 + T lymphocytes, and Treg ( Figures 3D-F). Notably, the transplanted TC-1 cells were capable of producing IL-6 ( Figures 3G, H), underlining the importance of endogenous IL-6 on immune and stromal cells in the promotion of tumor growth. Following that, we investigated the influence of IL-6 on the antitumor effects of the gDE7 vaccination ( Figure 3I). Immunotherapy reduces tumor development in IL-6 -/mice ( Figure 3J) with a significant improvement in survival but does not induce tumor remission (Figures 3K, L). Furthermore, no increase in the frequency of circulating E7-specific CD8 + IFN-+ T cells ( Figure 3M) was seen at the time point analyzed (D21). These findings show that IL-6 promotes the growth of TC-1 cell proliferation and negatively impacts the efficacy of gDE7based immunotherapy. IDO inhibition boosts the antitumor effects of gDE7 in IL-6 -/mice To further understand the interplay of IL-6 and tryptophan metabolism in HPV-related tumors, we next evaluated the in vitro impact of IDO inhibitors on IL-6 expression in TC-1 cells. As indicated in Figure 3, culturing TC-1 cells in the presence of D-1MT or DL-1MT significantly reduced IL-6 gene expression and cytokine release (Figures 3G, H). IL-6 -/tumor-bearing mice were immunized with gDE7 and treated with IDO inhibitors following the "every other day" regimen ( Figure 4A). In comparison to the gDE7-treated group, IL-6 -/mice vaccinated with gDE7 and treated with D-1MT or DL-1MT showed significantly decreased tumor mass ( Figures 4B-D). IL-6 -/mice treated with gDE7 and D-1MT had a 73% survival rate and 52% remained tumor-free till the end of the observation period (D60), whereas animals treated with gDE7 and DL-1MT had a 52% survival rate and 47% were tumor-free. In contrast, IL6 -/mice treated only with gDE7 showed a 33% survival rate and 10% remained tumor-free at D60. 
Importantly, comparing WT mice with IL-6 KO mice, both treated with the combination of gDE7 + IDO inhibitors, we observed a significantly decreased in tumor growth in IL-6 defective mice, especially when treated with D-1MT (Supplementary Figure S1). Furthermore, at D28, higher numbers of circulatory E7-specific CD8 + IFN-g + T cells were observed in mice that received the combined therapy ( Figures 4E, F). Importantly, we opted to assess CD8 + -specific T cells on D28 because when we previously evaluated this cell population in the mice blood on D21, we found no changes between the control and the gDE7-vaccinated groups ( Figure 3M). Taken together, the current findings show that, in IL-6 -/mice, the combination of gDE7 with 1MT isoforms boosts the antitumor immunity and highlights the role of IDO and IL-6 on the growth of TC-1 cells. Lack of IL-6 expression and IDO inhibition enhances activation of intratumoral effector immune cells and reduces immune suppressive cells in gDE7 vaccinated mice We next investigate the tumor-infiltrating immune cells population (time point analyzed -D21) to determine the immunological mechanism behind the tumor rejection outcome obtained in IL-6 deficient mice by our therapeutic approach ( Figure 5A). Mice immunized with gDE7 and those also treated with IDO inhibitors (D-1MT or DL-1MT) had higher rates of intratumoral CD45 + ( Figure 5B) and CD8 + T cells ( Figure 5C) than non-immunized mice. Interestingly, despite CD4 + T-cell population was similar in all experimental groups ( Figure 5D), immunization with gDE7 with or without IDO inhibitors decreased the frequency of intratumoral Treg cells ( Figure 5E). Regarding myeloid cells, immunization of IL-6 -/mice with gDE7 enhanced DC migration into the tumor microenvironment ( Figure 5F). In addition, the adjuvant treatments with D-1MT or DL-1MT increased the frequencies of activated DCs in the tumor microenvironment (Figures 5F, G). Moreover, mice treated with gDE7 and 1MT isoforms showed increased activation of resident and inflammatory monocytes (Figures 5H, I). Notably, the combined treatment of gDE7 and D-1MT or DL-1MT substantially reduced the frequency of intratumoral PMN-MDSC when compared to mice treated only with gDE7 ( Figure 5H). Furthermore, the combination treatment with D-1MT or DL-1MT promoted upregulating of CD86 of PMN-MDSC ( Figure 5I). Unfortunately, due to the small tumor volume, it was not possible to assess the E7-specific CD8 + IFN-g + T cells population in the tumor microenvironment. These findings further support the role of IL-6 and IDO in the immunomodulation promoted by gDE7 and underline the relevance of multi-target therapeutic strategies for successful antitumor immunotherapy. Discussion Our research explored the possible association between IL-6 and IDO1 in the progress of HPV-related tumors, as well as their influence on a specific cancer immunotherapy strategy. The experimental approach aimed to circumvent three major concerns in HPV-related tumors: systemic and local high expression of IL-6 and IDO as well as activation of effector immune cells. To address this goal, we employed the well-known HPV-16 TC-1 tumor mouse model, which expresses IL-6 (30) and IDO (27), to understand the impact of these factors during gDE7-based immunotherapy. 
The main findings of the study were: 1) IL-6 impacts the in vivo TC-1 cell tumor development and this feature depends on IL-6 expression by leukocytes and stromal cells, not by tumor cells; 2) in IL-6 -/mice, gDE7 treatment enables partial tumor mass control, nonetheless, only with IDO inhibition gDE7-immunized mice could boost immune responses and more efficiently eradicate tumor cells; 3) in the absence of IL-6, the adjuvanticity of 1MT isoforms were essential to increase the efficacy of the gDE7 vaccine leading to increased frequencies and activation of antigen-presenting cells in the tumor microenvironment and control of intratumoral PMN-MDSC and Treg expansion. The vaccines based on the fusion of HPV-16 E7 oncoprotein and the HSV-1 glycoprotein D, either protein-or DNA-based, have been shown to potentiate immune responses capable of blocking inhibitory signals mediated by the B and T lymphocyte attenuator (BTLA) co-signaling protein by competitive binding inhibition with herpesvirus entry mediator (HVEM) receptor (31,32). In addition, HSV-1 gD protein delivers target antigen and promotes direct activation of a specific DCs subset specialized in cross-presentation leading to efficient activation of CD8 + T cell-dependent antitumor responses (25). The therapeutic antitumor efficacy of gDE7-based vaccines could be enhanced by combination with different adjuvant procedures, including administration by electroporation (33), combined treatments with gemcitabine (34) and cisplatin (26) Lack of IL-6 combined with IDO inhibition augment immunotherapy control mediated by gDE7 on TC-1 cells engrafted in mice. (A) IL-6 -/mice were subcutaneously inoculated with 1 x 10 5 TC-1 cells and vaccinated with two doses (D7 and D14) of gDE7 (30µg per animal). Two days after the first dose (D9), mice were treated with 1MT at a concentration of 10 mg/animal every other day for four weeks, until D37. The experimental groups were followed for 60 days, but the "endpoint data" for each group was plotted up to the date when at least 80% of the mice were alive. (B-D) Data represent means ± SD from two (groups IL-6 -/and IL-6 -/-+ gDE7) (n=6, total n=12) or three (groups IL-6 -/-+ gDE7 + 1MT) (n=6 or 7, total n=19) independently performed experiments with comparable results and analyzed by ANOVA or by Kaplan-Meyer test (exclusively for survival assay). The antitumor effects of gDE7 combined with 1MT isoforms were followed by (B) tumor volume (mm 3 platform was designed to deliver E7 oncoprotein to DEC205 + dendritic cells (aDEC205-E7 mAb) (39). Although gDE7-based vaccines showed outstanding anticancer effects, the effectiveness decreased when tumors achieved an advanced growth stage due to immunosuppressive mechanisms elicited by tumor cells (27,34,37). Indeed coadministration of DNA vaccines encoding gDE7 and IL-10 receptors has been shown to halt tumorinduced immune suppressive cells (MDSC) and enhance strong tumor-specific CD8 + T-cell response leading to better control of tumors at advanced growth stages (37). Notably, IL-10 -/mice develop TC-1 cell-derived tumors at faster rates with a significant enhancement of IL-6 and numbers of intratumoral MDSC concerning WT mice. Indeed, previous experimental evidence demonstrated that the use of an anti-IL-6 receptor monoclonal antibody controlled tumor growth and expansion of intratumoral MDSC (40). Among the oncology biomarkers explored in HPV-related cancers, IL-6 stands out as a predictor of tumor development and immunosuppression. (9,(41)(42)(43). 
High expression of IL-6 in both tumor cells and surrounding tissues has been found in patients with HPV-16 and 18 infections (8,42,44). It is widely assumed that the positive regulation of IL-6 in HPV-related pathologies relies on STAT3 signaling (15,43). Interestingly, the STAT3/IL-6 axis is assumed to regulate the constitutive expression of IDO1 in tumor cells (16), leading to the hypothesis that this axis could regulate the high-level expression of IDO1 in the tumor microenvironment and adjacent tissues of HPV-related tumors (27, [45][46][47][48], since both molecules are co-expressed in TC-1 cell mouse model or cervical cancer patients. Indeed, IL-6 and IDO have been linked to poor treatment outcomes, tumor recurrence, and aggressive tumor progression in breast cancer (49), nasopharyngeal carcinoma (50), and prostate cancer (51) patients. Therefore, the study of these two immunosuppressive molecules is important not only for HPV-related tumor, but also for other tumor types. Considering that immunometabolism has emerged as a central element in cancer therapy (52,53), we previously explored the combination of gDE7 with IDO inhibitors and melatonin, which promoted synergic antitumor effects drawing attention to the relevance of multi-target therapeutic approaches (27). Focusing on continuing this study and further understanding the IDO/IL6 axis in HPV-related tumors, in the present study we investigated the outcomes of gDE7 immunization in IL-6 -/mice, treated or not with 1MT isoforms, based on the hypothesis that targeting IDO and IL-6 could augment the immunotherapeutic effects. Aligning with our findings, the impact of IL-6 in HPV-related tumors has been previously demonstrated with IL-6 -/mice and by blocking of IL-6 with specific inhibitors (43). Here we saw a considerable reduction in tumor development when the IL-6 -/mice were immunized with gDE7. This phenomenon could be related to the specific tumor signature since IL-6 deficiency did not affect esophageal tumorigenesis (54). IL-6 signaling can be targeted in a variety of ways, including the use of anti-IL-6 (siltuximab) or IL-6R (tocilizumab) monoclonal antibodies, which have both been extensively studied in different experimental tumor models as well as in clinical trials (55). IL-6 inhibition combined with other chemotherapeutic drugs, radiation, and targeted therapies significantly increased the clinical therapeutic gain in various cancer types (56,57). In this concern, the inhibition of IDO1 can trigger an IL-6-dependent toxic inflammation in mice, which can be reduced by anti-IL-6 antibodies (58). Indeed, the combination of different treatments with multifactorial target mechanisms may pave the way for the generation of new and more effective cancer therapies. Focusing on cancer metabolism, we previously observed the impact of IDO1 on the efficacy of gDE7 immunization, with the complete rejection of TC-1 tumors in IDO1-deficient mice (27). Taking this finding into account, and knowing that giving oral IDO inhibitors every other day partially protects WT mice immunized with gDE7 (27), we sought to find out if oral administration of 1MT isoforms to WT mice every day would enable tumor clearance in response to gDE7 treatment. This approach leads to toxic side effects that impaired the antitumor responses conferred by gDE7. Similar findings were obtained in an experimental HPV-related head and neck tumor model using tumor cells derived from murine oropharyngeal epithelial cells expressing HPV16 E6/E7 (59). 
However, in a glioblastoma mouse model (60) and a lung mouse model (61), daily oral administration of 1MT isoforms improved therapeutic outcomes. Targeting IDO1-induced immunosuppressive mechanisms could represent a double-edged sword, since inhibiting IDO1 as a monotherapy could also lead to increased tumor development (27,62). Clinical development efforts now encompass the combination of IDO inhibitors with immunotherapies. Positive clinical outcomes were achieved when IDO inhibitors were used in combination with sipuleucel-T (NCT01560923), DC-based vaccine (NCT01042535), and pembrolizumab (63), implying that IDO inhibition has a significant therapeutic value when combined with other therapeutic procedures. Regarding gDE7 immunotherapy, we observed a similar adjuvanticity performance of DL-1MT and D-1MT, worth mentioning that D-1MT is presently undergoing 17 clinical trials. Both 1MT isomers lead to increased gDE7-mediated antitumor protection in WT mice, but only the combination of gDE7 with IDO inhibitors in the absence of IL-6 afforded more efficient tumor cell eradication. Importantly, TC-1 cells were the exclusive IL-6 source in the model suggesting that the therapeutic efficacy is selective when targeting IL-6 on immune and stromal cells rather than on the tumor cells. The immunotherapeutic efficacy of the proposed vaccine approach (gDE7 + IL6 -/-+ 1MT) relied on the immune-cellular profile of the tumor microenvironment, including activation of myeloid cells and reduction of PMN-MDSC and Treg. Supporting our data, IL-6 is involved in the differentiation and expansion of MDSCs, which can inhibit T-cell via multiple molecular mechanisms (64), and 1MT effectively reverses the recruitment of tumor-infiltrating MDSCs induced by IDO1 (65). Similarly, as previously noted, CD11b + Ly6G + myeloid cells represent a major source of IDO in the tumor microenvironment (58). Treg cells are also involved in the role of IDO1-induced immunosuppressive mechanisms that promotes cancer cell survival (20). Indeed, higher frequency of CD4 + CD25 + FoxP3 + T cells are associated with IDO expression in immunological and stromal cells (66,67). In this concern and corrobotarting or data, the inhibition of IDO by 1MT attenuates Treg cells differentiation and expansion (67,68). Concerning the therapeutic effectiveness, DCs are required for immunotherapy-driven tumor relapse control (25, 26, 38). Our current data with the IL6 -/mouse model demonstrated the relevance of 1MT adjuvanticity in boosting DCs in the tumor microenvironment. Indeed, cooperativity between orallydelivered 1MT and subcutaneous administration of gDE7 in IL-6 -/mice induced tumor rejection and DC activation. Interestingly, we observed an increased frequency of DCs in the tumor microenvironment of IL-6 -/mice, but not CD4 + T cells or CD8 + T cells at the time point assessed. Corroborating our data, an increased percentage of DCs was observed in IL-6 -/mice implies that IL-6 hinders DC maturation in vivo with negative outcomes for DC-mediated T cell activation (14). Notably, IL-6 -/-DCs retained the ability to generate functional CD8 + T effectors and memory cells (69). In this concern, the IL-6 signaling cascade was shown to inhibit the expression of major MHC-II and CD86 molecules on the surfaces of DCs in vivo, resulting in the delay of cancer-related antigen presentation (70,71). 
Moreover, the dysfunction of DC also attenuates CD4 + Tcell-mediated antitumor immunity responses, and inhibition of IL-6 reduces tumor growth by restoring T-cell activity in tumorbearing mice (72,73). The tumor-driven immunosuppression of DCs could also rely on IDO expression (74). Remarkably, in the TC-1 tumor mouse model, there is a substantial increase in tumor-infiltration of IDO-expressing DCs, macrophages, and monocytes during tumor development, which contributes to the immunosuppressive cellular microenvironment (27). Although IDO inhibitors did not increase intratumoral CD8 + T cells in vaccinated mice, concomitant targeting of IL-6 and IDO promoted efficient induction of tumor-infiltrating DCs. Notably, cellular analyses indicated that only gDE7 combined with 1MT increased tumor infiltration of monocytic and myeloid antigen-presenting cells expressing higher levels of CD86, an important T-cell costimulatory molecule. Therefore, one possible hypothesis of the observed antitumor effects may rely on the fact that 1MT can reverse the T cell suppressive phenotype induced by IDO-expressing murine DCs promoting efficient antigen presentation and T cell proliferation (62). In the era of immuno-oncology, the search for prognostic markers to expand the use of the immunotherapeutic approach may be a key step to improving the outcome of presently available cancer treatments. The contextual study presented here has allowed us to demonstrate how anti-IDO/IL-6 therapies may contribute to future successful treatments and open perspectives for the development of alternative options for the treatment of HPV-related tumors. Data availability statement The original contributions presented in the study are included in the article/Supplementary Material. Further inquiries can be directed to the corresponding author. Ethics statement Mice experiments were performed under approved protocols by the ethics committee for animal experimentation from Instituto de Ciencias Biomedicas da USP, protocol number CEUA 8572030918. Author contributions RLP, AM and LF conceptualized the study. BP and LA designed the vaccine platform. RLP, P.C.S., and RP carried out tumor cell engraftments and performed immunizations. RLP, PS, RP, BP, JS, LA, MS, KR, NS and AM performed acquisition, analysis, and interpretation of data. All authors participated in the interpretation of the results. RLP, AM and LF wrote the paper. AM and LF were responsible for raising funding for the study. AM supervised the study. All authors contributed to the article and approved the submitted version.
2022-11-03T17:47:00.222Z
2022-11-03T00:00:00.000
{ "year": 2022, "sha1": "1955ca6ff221fd90ffa8f1980c0fb2adf1befb70", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Frontier", "pdf_hash": "1955ca6ff221fd90ffa8f1980c0fb2adf1befb70", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
174809051
pes2o/s2orc
v3-fos-license
Melanoma patients with additional primary cancers: a single-center retrospective analysis

Background: Recent progress in the diagnosis and treatment of primary and metastatic cutaneous melanoma (CM) has led to a significant increase in patients' life expectancy. The development of additional primary tumors (APT) other than CM represents an important survival issue. Results: Of a total of 1764 CM patients, 80 (4.5%) patients developed APT. For tumors diagnosed after CM, there was a 2.7-fold excess risk for APT compared to the Swiss German population. A significantly increased risk was noted for female breast (SIR, 2.46), male larynx (SIR, 76.92), male multiple myeloma (SIR, 11.2), male oesophagus (SIR, 10.8) and thyroid in males (SIR, 58.8) and females (SIR, 38.1). All thyroid cancer cases had a common papillary histological subtype and a high rate of BRAFV600E mutation. Melanoma was the primary cause of death in the vast majority of patients. Methods: We used the cancer registry from the Comprehensive Cancer Center Zurich (CCCZ) and retrospectively analyzed patients with CM and APT between 2008 and 2018. We calculated the risk of APT compared to the Swiss German population using the standardized incidence ratio (SIR). Conclusions: Patients with CM have an increased risk for hematologic and solid APT. Long-term follow-up is indicated.

INTRODUCTION
Cancer is a genetic disease, caused by the accumulation of genetic mutations that eventually transform a normal cell into a tumor cell [1]. It is of great concern for industrialized countries due to the increasing incidence rates of the most common forms of cancer, mostly attributable to the lengthening of the human lifespan and population aging [2,3]. Cutaneous melanoma (CM) is, when not diagnosed early, an aggressive skin cancer derived from melanocytes [4]. Although it represents approximately 5% of all cutaneous malignancies, it is considered one of the most lethal forms of all skin cancers. Recent advances in understanding the molecular mechanisms and the underlying pathogenesis of the disease have led to significant improvements in diagnosis and treatment and subsequently to an increase in melanoma survival rates [5]. Due to variable environmental factors resulting in DNA damage, melanoma is associated with a high burden of somatic mutations [6,7]. The identification of frequent mutations
SEERdatabase analyses have previously shown that melanoma patients are prone to the development of other primary tumors of different histological type [11] and vice versa; other malignancies have been related to a secondary melanoma development [12][13][14][15][16][17]. Since the diagnosis and treatment complexity of the patients with additional primary tumors (APT) increases, we retrospectively analyzed melanoma patients with primary tumors of histological type other than melanoma in our melanomareference center. We aimed to describe the distribution of these tumors analog to melanoma diagnosis, underline the complexity of these patients and emphasize the importance of long-term follow-up in patients diagnosed with cancer. Study population There was a total of one thousand seven hundred and sixty four (n = 1764) patients diagnosed with melanoma between 2008 and 2018 with a cut-off of June 2018. Of the 1764 melanoma patients, eighty (4.5%) patients were diagnosed with an APT, from which thirteen (16.25%) patients developed multiple (> three) separate cancer types of different primary (MPT) ( Figure 1). The median patient age at melanoma diagnosis was 70 years (33-90 years) and the majority of the patients were males (65%). Thirty (37.5%) patients had a family history of cancer, with same cancer in first-or seconddegree relatives in 8.8%. 60% of the patients diagnosed with an APT had metastatic melanoma of which 26.7% were metastatic to the brain. Since mutational analysis does not belong to the standard tests for patients with non-metastatic melanoma in our institution and 32.4% of the patients were stage III-IV, mutational status was known in 57% of the patients. 26.6% of these patients were BRAF V600 mutated and 16.5% NRAS. As of June 2018 69.6% of the patients with APT were alive. Of the 30.4% deceased, metastatic melanoma was the primary cause of death in the vast majority of patients (79.2%), followed by breast cancer (8.3%). Other causes of death were multiple myeloma (4.2%), pancreas cancer (4.2%) and lung cancer (4.2%). Patients' characteristics, demographics and features of melanoma are listed in Table 1. Additional primary tumors (APT) In all, we found 93 second primary cancers, accounting also for multiple cancers, diagnosed before and after the first melanoma diagnosis between January 2008 and June 2018. The most frequently observed APT was prostate cancer in males (24.7%) followed by breast cancer in females (15.1%). Other frequently observed cancers on the overall distribution were bladder cancer (7.5%), lung cancer (6.5%), thyroid cancer (6.5%), nonhodgkin lymphoma (5.4%) and leukemia (5.4%) ( Table 2). No predominant leukemia type was found. There was diversity in the stage of first diagnosis of the APT with 31.2% in stage I, IA and IB and 16.2% in stage IV and IVA. Consequently, 44.1% of these APT received curative surgical resection only, 27.9% received curative resection combined either with radiotherapy (11.8%) or with systemic therapy (11.8%) or both (4.3%). Altogether, 30.2% of the patients in our cohort required a systemic therapy; 10.8% alone, 11.8% combined with operation, 1.1% with radiotherapy, 4.3% with both operation and radiotherapy and 2.12% with radiotherapy and bone marrow-or stem cell-transplantation. 6.5% of the APT were under follow up only. For information on the course of treatment see Table 2. 
Comparison of APT diagnosed before and after melanoma We compared the APT diagnosed before (50.5%) and after (49.5%) the first melanoma diagnosis and analyzed for differences in site of occurrence according to sex ( Figure 2). Breast cancer in females and prostate cancer in males remain the most frequently observed APT before and after the melanoma diagnosis, with a frequency of 12.8% and 17.4% respectively for breast cancer and 36.2% and 13% for prostate cancer ( Table 2). Bladder cancer in males occurs with a frequency of 8.5% before CM and 4.3% after CM. In females, the second most frequent APT is leukemia (8.5%) before CM with no cases after CM and thyroid cancer (8.6%) after CM with no cases before CM. In our patient cohort, all thyroid cancer cases were observed after the first melanoma diagnosis in both sexes. On the contrary, the vast majority of hematologic malignancies (leukemia in females and non-hodgkin lymphoma in males) seem to occur before the melanoma diagnosis. Median time to APT diagnosis after CM was 15 stage IV, whereas on APT diagnosed after 39.1% were stage I and 19.6% stage IV ( Figure 3). DCR was achieved in 91.5% on APT before CM and 80.5% after CM. For APT diagnosed after CM, we additionally analyzed the patterns of diagnosis, concluding that 28 out of 50 APT were diagnosed at melanoma follow-up, including PET/CT and CT imaging, as well as clinical examination. Standardized incidence ratio (SIR) analysis Although we had a small patient cohort, we calculated the expected number of APT after first melanoma diagnosis assuming that cancer risk in the cohort was similar to that observed in the German Swiss population (Figure 4) Cutaneous melanoma and papillary thyroid cancer Based on our findings that all thyroid cancer cases (6.5%) occurred after the first melanoma diagnosis and due their common papillary pathological type (PTC), although our small patient cohort, we aimed to assess the rate of BRAF V600E mutation in this patient cohort. We therefore performed a real-time quantitative PCR procedure (Idylla) of all available tumor tissues. Six patients with both CM and PTC were tested, of which 4 were found to be positive for BRAF V600E mutation in melanoma, 6 for BRAF V600E in PTC and 4 in both. Patient selection and data collection The cancer registry of the Comprehensive Cancer Center Zurich (CCCZ) is a melanoma reference database Treatment type (%) None 6 (6.5) 1 (5.6) 0 (0.0) 4 (14. Patients` records were also searched for risk factors, including family history, smoke, age, gender and race. In order for the reported variables not to contribute in more than one category, for patients with a first and second degree relative with cancer, only the first degree relative was included. In order to describe the distribution of APT with respect to melanoma diagnosis, we labelled the patients into two groups; APT before and after CM. Multiple primary tumors (MPT) were defined as two or more separate neoplasms of different primary, other than melanoma. Follow-up time was calculated from the day of resection of the CM to the date of last follow-up, including last visit or date of death, or June 2018, whichever occurred first. For APT occurring after CM, the standardized incidence ratios (SIRs) were calculated by dividing the observed numbers of cancer by the expected ones. The observed numbers of cancers and person-years at risk were calculated by gender, 5-year age group and the time since the diagnosis of CM. 
The expected numbers of cancer were obtained by multiplying the stratumspecific numbers of person-years by the corresponding cancer incidence rates in German Swiss population in Switzerland extracted from the Nationales Institut für Krebsepidemiologie und -registrierung (NICER) database. Exact 95% confidence intervals (CIs) were defined when the numbers of observed cases followed a Poisson distribution. All analyses were conducted using statistical language R version 3.5. Written informed consent for retrospective analysis of melanoma patients in our registry was previously approved by local ethics committee (KEK-ZH 2014-0193). DISCUSSION AND CONCLUSIONS On our retrospective analysis, there is an overall incidence of 4.5% of an APT before and after CM diagnosis, among of which 16.25% attributed to MPT. Based on our analysis, we show that patients who were previously diagnosed with cutaneous melanoma (CM) have approximately a 2.7 fold increased risk of an APT compared to the general Swiss German population. These results are consistent with previous international findings reporting a significantly elevated risk for specific subsequent primary cancers other than melanoma in melanoma survivors [11], although the relatively greater overall risk of 28% described in the cited study. Albeit, a direct comparison to these data could be misleading, taking into consideration the differences in the population size, follow-up duration as well as inclusion criteria. Since patients with CM have an increased risk of second primary CM, subsequent melanomas were excluded from our analysis [11,[18][19][20]. In our patient cohort, female breast cancer, male larynx cancer, male multiple myeloma, male oesophagus cancer and thyroid cancer in both sexes had statistically increased incidence after the first melanoma diagnosis, compared to the normal population. The reasons for this increased risk seem to be multifactorial and could be contributed to increased medical surveillance and follow-up after CM diagnosis, as well as other independent, non-modifiable risk factors, such as the presence of an inherited genetic predisposition and advanced age [21,22], and behavioral, modifiable risk factors, such as UV-exposure [23] and immunosuppression [24][25][26], especially for hematologic tumors. A mutual association between CM and female breast cancer has been reported in both genetic and population-based retrospective studies [27], suggesting a higher prevalence of both cancers for mutations of high-risk genes, such as BRCA2 and CM [28] and CDKN2A and breast cancer [29,30]. Unorthodoxly, the incidence of prostate cancer in our population seems to decrease after CM diagnosis, although this result should be cautiously interpreted, due to the relatively low number of this event in our patient cohort. The latter may be of particular interest, since several studies have connected CM and prostate cancer with common risk factors responsible for the pathogenesis of both cancer types, such as the UV exposure, with subsequently increased risk of prostate cancer diagnosis after melanoma diagnosis [12,16,[31][32][33][34]. On the other hand, the role of androgens and oestrogens in the melanoma development have been controversially discussed [35][36][37][38]; altogether, the assumption of an additional biological or hormonal relationship between CM, prostate cancer and breast cancer remains unclear. 
Interestingly, the probability of a new thyroid cancer diagnosis seems to increase after CM diagnosis in both sexes, which is in accordance of previous reported results [11]. Besides, the controversial relationship between CM and thyroid cancer has been confirmed from previous retrospective reviews, suggesting that patients with papillary thyroid cancer (PTC) are at significantly increased risk of CM and vice versa [39]. The precise etiology for this correlation is unclear; however, the thyroid cancer cases observed in our patient cohort had a common papillary histological subtype and a high rate of BRAFV600E mutational background, which was observed in both CM and PTC, thus unravelling a possible common genetic component of these cancers. This is of great importance, considering that several types of cancer are a result of mutations in critical genes [40]. Still, although every individual tumor is genetically distinct, the pathways affected in different tumors may overlap [6]. Approximately 50% of the CM cases harbor an activating mutation in the BRAF oncogene [41], whereas BRAF V600E mutations have been also described in approximately 44% of patients with papillary thyroid cancer (PTC) [42]. These results may indicate a common genetic pathway in the pathogenesis of these cancers. Our dataset showed that in patients with APT or MPT, metastatic melanoma still remains the most common cause of death. Yet, a large proportion of the APT are usually diagnosed on early stages with high DCR rates. There was also a significant difference among the median time of the diagnosis for APT before and after CM ((55.37 months (0.30-443.17) versus 15.91 months (0-241.16)), which can be attributed to the patterns of care after CM diagnosis with regular screening during follow-up. Indeed, 28 out of 50 APT diagnosed after CM were detected at melanoma follow-up, including PET/CT and CT imaging or clinical examination. The Swiss guidelines recommend an increased vigilance of follow-up in the first 10 years, with surveillance every 3 to 6 months the first 5 years and every 12 months for the next 5 years for the stage I-II melanoma [43]. Patients with a previous diagnosis of CM are more likely to have more regular and attentive health controls, which may increase the possibility of detecting APT and thus in an early stage. Still, there was a slight difference between stage IV APT diagnosed before (12.7%) and after (19.6%) CM, although a possible explanation for this finding is unclear. The main strength of our study is that the CCCZ registry provides a high accuracy in tumors registering with long and minimum loss of follow-up. Such patients are usually excluded from clinical trial protocols, thus only few prospective data are available. Our study emphasizes the complexity of patients with unrelated APT and the importance of increased follow-up for the prompt detection of such survival issues. Further research is required to unravel the burden and causative relationship of APT and primary tumors, as well as their socioeconomic impact on the survival population.
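For readers who want to reproduce the risk estimate described in the Methods, the sketch below shows one way to compute a standardized incidence ratio with an exact (Garwood) Poisson confidence interval: the SIR point estimate is observed over expected cases, and the interval endpoints come from chi-square quantiles divided by the same expected count. This is a minimal Python illustration, not the authors' R 3.5 code; the stratum counts, person-years, and reference rates in the example are hypothetical.

```python
from scipy.stats import chi2

def exact_poisson_ci(observed, alpha=0.05):
    """Exact (Garwood) confidence interval for a Poisson count via chi-square quantiles."""
    lower = 0.0 if observed == 0 else chi2.ppf(alpha / 2, 2 * observed) / 2
    upper = chi2.ppf(1 - alpha / 2, 2 * (observed + 1)) / 2
    return lower, upper

def standardized_incidence_ratio(strata, alpha=0.05):
    """strata: list of (observed_cases, person_years, reference_rate) tuples,
    one per sex / 5-year age group / time-since-diagnosis stratum."""
    observed = sum(o for o, _, _ in strata)
    expected = sum(py * rate for _, py, rate in strata)
    lo, hi = exact_poisson_ci(observed, alpha)
    return observed / expected, lo / expected, hi / expected

# Hypothetical example: 9 observed cases spread over three strata.
sir, lo, hi = standardized_incidence_ratio([(3, 1200.0, 0.0011),
                                            (4, 900.0, 0.0016),
                                            (2, 400.0, 0.0021)])
print(f"SIR = {sir:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```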
2019-06-07T20:32:26.679Z
2019-05-21T00:00:00.000
{ "year": 2019, "sha1": "7b7d8d5ee6e888c4d0b386c889f3410f3cd40d4c", "oa_license": "CCBY", "oa_url": "https://www.oncotarget.com/article/26931/pdf/", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "7b7d8d5ee6e888c4d0b386c889f3410f3cd40d4c", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
4716388
pes2o/s2orc
v3-fos-license
Blood flow-improving activity of methyl jasmonate-treated adventitious roots of mountain ginseng Ginsenosides from Panax ginseng are well known for their diverse pharmacological effects including antithrombotic activity. Since adventitious roots of mountain ginseng (ARMG) also contain various ginsenosides, blood flow-improving effects of the dried powder and extract of ARMG were investigated. Rats were orally administered with dried powder (PARMG) or ethanol extract (EARMG) of ARMG (125, 250 or 500 mg/kg) or aspirin (30 mg/kg, a reference control) for 3 weeks. Forty min after the final administration, carotid arterial thrombosis was induced by applying a 70% FeCl3-soaked filter paper outside the arterial wall for 5 min, and the blood flow was monitored with a laser Doppler probe. Both PARMG and EARMG delayed the FeCl3-induced arterial occlusion in a dose-dependent manner, doubling the occlusion time at high doses. In mechanism studies, a high concentration of EARMG inhibited platelet aggregation induced by collagen in vitro. In addition, EARMG improved the blood lipid profiles, decreasing triglyceride and cholesterol levels. Although additional action mechanisms remain to be clarified, it is suggested that ARMG containing high amount of ginsenosides such as Rg3 improves blood flow not only by inhibiting oxidative thrombosis, but also by modifying blood lipid profiles. Today, cardiovascular diseases (CVD) due to hypertension, hyperlipidemia, and atherosclerosis are one of the largest contributors to global mortality, leading to death of 19.5 million people in 2016. According to the World Health Organization, it will be about 23.3 million deaths from CVD in 2030 [1]. As the lifestyle of modern society has changed, high-energy and fat-rich diet, stresses, crapulence, and reduced exercise are believed to cause increased cardiovascular risks by affecting lipoprotein metabolism, platelet aggregation ability, and vessel resistance. It is well established that the incidence of CVD is due to hyperlipidemia, a condition characterized by significant increases in total cholesterol (TC), triglycerides (TG), and low-density lipoproteins (LDL) as well as decrease in high-density lipoproteins (HDL) [2,3]. Aspirin is a widely used anti-thrombosis drug for preventing cardiovascular diseases. Low doses of aspirin have been shown to have a preventive effect on thrombus formation in animal experiments [4]. But aspirin is acidic and stimulates stomach to release gastric acid. Besides, it affects gastric wall to become weak and causes gastric hemorrhage. In addition, currently available anti-thrombotic agents such as tissue-type plasminogen activator (t-PA), streptokinase, and PA-type agents for clinical research have some adverse-effects [5]. Therefore, safe and effective anti-thrombotic agents that can inhibit thrombosis are needed to be studied. Medicinal plants are the attractive source of pharmacologically active compounds and agents. Panax ginseng C.A. Meyer is well known as the most important botanical medicine in traditional eastern Asia for longer than 2000 years, and now one of the most extensively used herbal products in the world [6,7]. The root of Panax ginseng has long been studied for its pharmacological efficacies [8]. In Korea, mountain ginseng, wild Panax ginseng which lives in mountain naturally, has been considered traditionally more effective than cultivated ginseng products [9]. This plants grow wild in cool, shady forests from Korea and north eastern China to far eastern Siberia [8,10,11]. 
Mountain ginseng has become extremely scarce and the ginseng supply depends almost solely on field cultivation, which is a time-consuming and labor-intensive process. However, tissue culture techniques for mountain ginseng have been established by bioreactor technology as a useful tool for large-scale production of root biomass [12,13]. Tissue culture techniques allowed us to easily obtain a mass of adventitious roots. Tissue-cultured mountain ginseng was produced to contain higher concentration of bioactive ginsenosides or polyphenols than cultivated ginseng; these compounds are believed to be beneficial to human health [14]. The major active ingredients of ginseng are the triterpene glycosides named ginsenosides, which have been studied extensively for their pharmacological effects. These saponins possess diverse pharmacological actions for treating various diseases including cerebral ischemia [15], hypertension [16], and symptoms of hyperlipidemia [17], in addition to anti-fibrotic and antioxidant activities [18,19], which are associated with circulatory system or others as well. Polyphenols rich in mountain ginseng defend tissues from radical damage and have anticholesterol, intestinal regulatory, anti-cancer, and antioxidant effects [20]. Based on the reported biological effects, we suggested that the adventitious root of mountain ginseng (ARMG) would influence physiological hemostatic and pathological thrombotic processes. The present study was designed to investigate whether dried powder (PARMG) and an extract (EARMG) of ARMG have potential anti-thrombotic activity, in addition to underlying mechanisms, when it was taken orally. Materials The experimental materials, PARMG and EARMG treated with methyl jasmonate (50 µM) for 2 days were obtained from Dongkook Pharmaceutical Co., Ltd. (Seoul, Korea), and kept at 4 o C until use. After steaming, PARMG was extracted with 70% ethanol to obtain EARMG. PARMG, EARMG or aspirin was orally administered in a volume of 10 mL/kg sterile purified water. Aspirin, a reference material, was used as a wellknown blood flow enhancer. Animals Seven-week-old male Sprague-Dawley (SD) rats were from Daehan-Biolink (Eumseong, Korea), and subjected to the experiment after acclimation to the laboratory environment for 1 week. The animals were housed in a room with constant environmental conditions (temperature: Measurement of platelet aggregation Blood sample was withdrawn from the male SD rats directly into anti-coagulant citrate dextrose solution containing 0.8% citric acid, 2.2% trisodium citrate, and 2% dextrose. Washed platelets were prepared as previously described [21]. In brief, platelet-rich plasma (PRP) was obtained by centrifugation at 170 g for 7 min. Platelets were sedimented by centrifugation of the PRP at 350 g and washed with tyrode buffer. The washed platelets were resuspended in tyrode buffer and adjusted to 3×10 8 platelets/mL [22,23]. Platelet aggregation was measured with an aggregometer (Chrono-Log Co., Harbertown, CA, USA) according to the turbidimetric method of Born and Cross [24]. The Lab Anim Res | June, 2017 | Vol. 33, No. 2 washed platelet suspension was pre-incubated with PARMG, EARMG or aspirin (125, 250 or 500 µg/mL) at 37 o C in the aggregometer under stirring at 1,200 rpm. After 3-min pre-incubation, platelet aggregation was induced by adding collagen (0.625 µg/mL), adenosine diphosphate (ADP, 10 µM) or thrombin (0.1 unit/mL). 
The extent of aggregation was expressed as percentage of the vehicle control value stimulated with collagen, thrombin or ADP. Measurement of blood flow Rats (n=8/group) were orally administered with PARMG, EARMG (125, 250 or 500 mg/kg) or aspirin (30 mg/kg) for 3 weeks. Forty min after the final administration, the animals were anesthetized by intramuscular injection of urethane (1 g/5 mL/kg), under constant maintenance of body temperature (36-37 o C) using a heating pad. The right carotid artery of rats were exposed and detached away from the vagus nerve and surrounding tissues. Aortic blood flow rate was monitored with a laser Doppler flowmeter (AD Instruments, Colorado Springs, CO, USA). At the time point of 1 hour after the final administration, arterial thrombosis was induced by wrapping the artery with a Whatman No. 1 filter paper (3×5 mm) soaked with 70% FeCl 3 solution near (5 mm anterior to) the flowmeter probe for 5 min. The blood flow was monitored for 40 min, and the time to occlusion was recorded. Animals were sacrificed at the time point of 40 min from the application of FeCl 3 , and the arteries were cut to observe the thrombus in the artery. Thrombus weight and histopathology After 40-min monitoring of the blood flow in carotid artery applied with 70% FeCl 3 to the external surface, the arteries were cut at intervals of 1 mm from the visible thrombus regions. Then, the length (in mm) and weight (in mg) of the thrombotic arteries were measured. For microscopic examination, an 1.5-cm section of the ascending thoracic aorta was dissected from the heart. The injured artery was fixed in 4% paraformaldehyde, and embedded in paraffin. Paraffin sections were cut (4 µm in thickness), and stained with hematoxylin and eosin. The proportion of platelet plug to whole blood clot in cross-sectional area of the vessels was analyzed using an ImageJ 1.46b image analysis software. Hematology and blood biochemical analysis After 16-hour overnight fasting (feed only), blood samples were collected from rats orally administrated with PARMG, EARMG or aspirin for 3 weeks. Complete blood cell counts (CBC) were measured with an automatic hematology analyzer (INTEGRA 400; Roche, Mannheim, Germany). Blood samples were centrifuged at 3,000 g for 15 min at 4 o C. Biochemical analysis was performed in sera using a blood chemistry analyzer (Hitachi-747; Hitachi Korea, Seoul, Korea) for lipids including total cholesterol (TC), low-density lipoproteins (LDL), high-density lipoproteins (HDL), and triglycerides (TG) as well as glucose. Statistical analysis All data are expressed as a mean±standard error (SE). Statistical significance was analyzed using SPSS package (version 18.0; SPSS Inc., Chicago, IL, USA). Differences among groups were compared with one-way ANOVA, followed by Dunnett's multiple-range test. P-value <0.05 was considered to be statistically significant. Results PARMG did not inhibit all the collagen (0.625 µg/ mL)-, ADP (10 µM)or thrombin (0.1 unit/mL)-induced platelet aggregation up to 250 µg/mL ( Figure 1). However, the significant inhibition on the collagen-induced platelet aggregation at a higher concentration (500 µg/mL) is unclear, since turbid PARMG interfered with the analysis of platelet aggregation. In comparison, a high concentration (500 µg/mL) of EARMG markedly inhibited the collageninduced platelet aggregation to a half level, although it did not affect the ADP-and thrombin-induced aggregation. 
Similar results were attained by aspirin (500 µg/mL), near-fully inhibiting only collagen-induced platelet aggregation. During whole 3-week administration period of PARMG, EARMG or aspirin, there were no abnormal symptoms in rats, and the rats gained body weights steadily in normal state. Five-min application of 70% FeCl 3 outside carotid arterial wall caused gradual decrease in the blood flow that near-fully ceased in 30 min (Figure 2). However, 3week oral treatment with PARMG inhibited the thrombus formation in a dose-dependent manner, in which the blood flow was maintained longer than 40 min in the rats treated with a high dose (500 mg/kg) of PARMG, as observed in rats treated with aspirin (30 mg/kg) ( Figure 2A). EARMG also delayed the arterial obstruction in a dose-dependent manner ( Figure 2B). The mean occlusion time in the vehicle control group was calculated to be 18.1 min, based on the time point when the blood flow dropped to 10% (practical cessation) of initial flow rate ( Figure 3). However, PARMG markedly prolonged the occlusion time to 25.8, 30.9, and 34.4 min at 125, 250, and 500 mg/kg, respectively. Similar effects were achieved with EARMG, extending the occlusion time to 19.4, 31.7, and 34.7 min at 150, 250, and 500 mg/kg, respectively. Such a blood flow-improving activity was also attained by aspirin (30 mg/kg) that prolonged the occlusion time to 39.4 min. Forty min after FeCl 3 exposure, thrombi weighing 0.83 mg/mm were formed ( Table 1). The thrombus weight decreased following treatment with PARMG, leading to a significant reduction to 0.72 mg/mm at 500 mg/kg. Such significant decreases in the thrombus weight were also achieved with EARMG (125 and 500 mg/kg) and aspirin (30 mg/kg). As dissected 40 min after exposure to FeCl 3 , the arteries were entirely plugged with thrombi in the vehicle-treated rats (Figure 4). In the analysis of the thrombus, it was composed of 82.5% blood clot and 17.5% platelet plug ( Figure 5). In the animals treated with PARMG, EARMG or aspirin, the thrombi were loose and had larger portions of permeable platelet plugs, relative to the flow-blocking blood clots. That is, aspirin (30 mg/kg) enhanced the portion of platelet plugs to 52%. By comparison, PARMG (500 mg/kg) increased the platelet plug ratio to 40%, and EARMG (500 mg/kg) also expanded to 43%. In the blood biochemical analysis, PARMG (125-250 mg/kg) reduced blood TC, HDL, and TG levels, although there was no dose-response relationship ( Table 2). EARMG significantly decreased blood glucose level at all doses tested (125-500 mg/kg), and lowered the lipids at a high dose (500 mg/kg). In comparison, aspirin (30 mg/kg) reduced TC, HDL, and TG. Discussion This study was designed to investigate whether PARMG and EARMG have anti-thrombotic potential when it was taken orally. Furthermore, we attempted to find out which state of ARMG (dried powder or ethanol extract) is more effective for blood flow improvement via in vitro platelet aggregation assay as well as in vivo FeCl 3induced carotid arterial thrombosis model. A high concentration (500 µg/mL) of ARMG significantly inhibited the collagen-induced platelet activation, in which EARMG was more potent than PARMG. Collagen fibers are exposed to circulating platelets following damage to the vessel wall and play an important role in hemostasis through the creation of a physical barrier at the site of vascular injuries, thereby limiting blood loss [25][26][27]. 
The collagen fibers also stimulate platelet activation, recruiting additional platelets to the site of damage as well as consolidating the thrombus [27]. Many studies have reported the anti-platelet activity of ginsenosides [28,29]. It is suggested that the major ginsenosides in ARMG should be effective in inhibiting collagen that induces platelet aggregation. Platelet activation is characterized by shape change, granule secretion, and ultimately, aggregation. These responses are triggered by Dried powder (PARMG) or ethanol extract (EARMG) of adventitious roots of mountain ginseng or aspirin were orally administered for 3 weeks prior to FeCl an increase in intracellular Ca 2+ , which is induced by collagen [30,31]. Thus, the inhibition of this increase in [Ca 2+ ]i blocks the collagen-induced aggregation of human platelets [31]. The inhibitory effect of ARMG on the collagen-induced aggregation may be involved in the inhibition of Ca 2+ influx by decreasing thromboxan A 2 (TXA 2 ) formation. At lower concentrations, many of the effects of collagen are enhanced by its production of TXA 2 [32][33][34][35]. The collagen-induced increase in [Ca 2+ ]i can be blocked by inhibiting the production of TXA 2 via the pretreatment of platelets with cyclooxygenase inhibitors such as aspirin [30,36,37]. It was reported that the antiplatelet aggregation effect of ginseng may be due to the decreased production and release of TXA 2 [28]. Therefore, a part of the mechanisms involved in calcium mobilization and TXA 2 release would be needed for further understanding of the processes whereby collagen induces platelet aggregation as well as PARMG and EARMG inhibit the process. In the present study, the anti-thrombotic activities of PARMG and EARMG were evaluated in a FeCl 3induced carotid arterial thrombosis model. This model is widely used to study thrombogenesis in vivo. Since transition metals such as ferric ion facilitate oxidative radical formation, lipid peroxidation of cellular and tissue injuries as well as endothelial cell damage that leads to occlusive thrombus formation [38]. As shown in Figure 2 and Table 1, oral administration of PARMG and EARMG significantly delayed the occlusion time in a dose-dependent manner, indicating that the adventitious exposure. Gray aggregated platelet plugs (empty star) and yellow/pink blood clots (filled star) are seen in the arterial lumens exposed to FeCl 3 . Figure 5. Platelet plug (gray) and blood clot (black) ratios in the carotid arterial thrombi produced by FeCl 3 application outside the arterial wall. Dried powder (PARMG, 500 mg/kg) or ethanol extract (EARMG, 500 mg/kg) of adventitious roots of mountain ginseng or aspirin (30 mg/kg) were orally administered for 3 weeks prior to FeCl roots of mountain ginseng was able to inhibit thrombus formation in vivo and possibly act as platelet aggregation inhibitors like aspirin. In the model of FeCl 3 -mediated rat artery thrombosis, the thrombi are composed of fibrin, activated platelets, and entrapped erythrocytes. Clinically, this type of thrombus is found in the coronary arteries after sudden death and acute myocardial infarction [39,40]. Fibrin polymers and fibrin clots as the major components of thrombus are made from fibrinogen via the enzymatic action of thrombin. In the present study, the thrombi were observed to be separated into loose platelet plugs and compact blood (fibrin) clots. 
Notably, PARMG, EARMG as well as aspirin decreased the portion of impermeable blood clots, increasing the permeable platelet plugs. Such effects on the ratio of platelet plug/ fibrin clot suggest that blood flow-improving compounds have an ability to change the structure or composition of fibrin network. As demonstrated in vitro platelet aggregation assay and in vivo thrombus analysis, ARMG may interrupt the early stage of hemostasis triggered by collagen, rather than that induced by thrombin. Thus, it is supposed that following interference with initial hemostasis, relatively-loose platelet adhesion remained, leading to decrease in the strong blood clot formation. So, this may make the blood vessel obstructs slowly. On the other hand, the interaction between fibrin structure and coagulation factors is a complex process, and it has been reported that the composition of fibrin structure affects its sensibility to enzymatic activity of thrombin [41]. Thus, it is suggested that PARMG and EARMG suppress fibrin clotting by interrupting the interaction between fibrinogen and thrombin or by decreasing the thrombin activity in the body environment. In biochemical analysis, PARMG and EARMG improved the blood lipid profiles. In addition, EARMG remarkably decreased the glucose level. It was previously reported that there was a significant decrease in TG level in diabetic rats fed with the extracts of tissue-cultured adventitious roots of mountain ginseng [42]. Such an effect was also achieved in a human study [43], wherein ginsenosides stimulated lipid metabolism in diabetic individuals. Therefore, the beneficial efficacies of ARMG on the blood lipid and glucose levels may attract attentions of investigators and practitioners. The major components of ginseng are saponins called ginsenosides. Ginsenosides are well known for cardioprotective, immuno-stimulatory, anti-fatigue, and hepatoprotective activities [44]. According to some studies, ginsenosides are the main components affecting the hemostatic and thrombotic processes [45,46]. Among various ginsenosides, it was reported that both Rg 1 and Rg 2 have an anti-thrombosis activity [47]. Notably, Rg 3 is the most potent ginsenoside in inhibiting platelet aggregation by decreasing TXA 2 formation and granule secretion in platelets, which are mediated by Ca 2+ influx and Ca 2+ mobilization from intracellular stores [48]. Interestingly, the tissue-cultured ARMG contained higher amounts of Rb 1 , Rb 2 , Rc, Rd, Re, Rg 1 , and Rf ginsenosides when compared to field cultivated Korean ginseng. In addition, ginsenosides such as Rb 3 , Rg 2 , Rg 3 , Rh 1 , and Rh 2 which were lacking in the extracts of fieldcultivated Korean ginseng were abundant in ARMG [42]. Notably, it is well known that higher amounts of original ginsenosides and other novel ginsenosides are accumulated by treatment with methyl jasmonate during the tissue culture which is not contained process in Korean ginseng field cultivation [12]. It was reported that Dried powder (PARMG) or ethanol extract (EARMG) of adventitious roots of mountain ginseng or aspirin were orally administered for 3 weeks prior to FeCl methyl jasmonate stimulates the ginsenoside biosynthetic pathway, thus enhancing the accumulation of ginsenosides up to 5-7 folds in adventitious roots [49,50]. Besides, it seemed that steaming and extraction processes of ARMG led to different concentrations of ginsenosides. 
In contrast to the low concentration of Rg 3 in PARMG (0.18 mg/g), the concentration in EARMG was 0.43 mg/g, implying that various ginsenosides, including Rg 3, are enriched during the steaming and extraction of ginseng products. In the present study, EARMG was somewhat superior to PARMG in anti-thrombotic activity, which might be attributable to this higher ginsenoside content, including Rg 3. Taken together, ARMG, and especially EARMG, improved blood flow by inhibiting collagen-induced platelet aggregation as well as thrombus formation induced by oxidative vascular injury. Although the precise mechanisms remain to be clarified, the adventitious roots of mountain ginseng appear to be a promising natural candidate, with no adverse effects observed in this study, for the prevention and improvement of cardiovascular and cerebrovascular diseases.
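As a small illustration of the occlusion-time criterion used above (the time point at which carotid blood flow falls to 10% of its initial rate, i.e., practical cessation), the following Python sketch extracts that time from a flow trace. The trace, sampling interval, and variable names are hypothetical stand-ins for the laser Doppler flowmeter output.

```python
import numpy as np

def occlusion_time(time_min, flow, threshold_fraction=0.10):
    """Return the first time point at which blood flow falls below a fraction
    of its initial (pre-FeCl3) value, or None if it never does within the
    monitoring window."""
    baseline = flow[0]
    below = np.flatnonzero(flow < threshold_fraction * baseline)
    return float(time_min[below[0]]) if below.size else None

# Hypothetical trace: flow sampled once per minute over a 40-min window.
t = np.arange(0, 41)
trace = np.maximum(10.0 - 0.45 * t, 0.3)  # arbitrary declining flow (mL/min)
print(occlusion_time(t, trace))           # first minute at which flow < 10% of baseline
```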
2018-04-03T03:44:35.349Z
2017-06-01T00:00:00.000
{ "year": 2017, "sha1": "7a1b07e140415caca5faa02750a49be896736487", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.5625/lar.2017.33.2.105", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "7a1b07e140415caca5faa02750a49be896736487", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
125031387
pes2o/s2orc
v3-fos-license
A convergence result for unconditional series in Lp ( μ ) ∗ We give sufficient conditions for the convergence almost everywhere of the expansion with respect to an unconditional basis for functions in L p ≥ 2. This result extends the classical theorem of Menchoff and Rademacher for orthogonal series in L. Subjclass 2000 MSC Primary 42C15 ; Secondary: 46B15, 46B20. Introduction In this paper we study the almost everywhere convergence of unconditional series in a general σ-finite L p space.The main result of this work is a generalization to L p of the classical result of D. Menchoff and Rademacher ( [1], [7], [9]) on the almost everywhere convergence of orthogonal series in L 2 .Theorem 1.1 below is a generalization of that result.When p = 2 and the series are orthogonal this contains the generalization obtained by Moricz and Tandori [10] (Theorem 1).The result of Menchoff has interesting implications in several areas of Analysis: Fourier Series ( [1] [14]), and specially in Probability Theory ( [2], [4], [7]) where this result is related to the laws of large numbers.On the other hand, unconditional basis are very important in Wavelet Analysis and related fields.One of the difficulties found when trying to generalize this result to L p , p 6 = 2, is the lack of the notion of orthogonality.Fortunately, it is not orthogonality what is needed but a consequence of it; uncondtional convergence.We will prove that this property together with another similar to the original used by Moricz and Tandori seem to be sufficient to ensure the almost everywhere convergence of certain expansions in L p . The main result can be stated formally: Theorem 1.1.Let {f j } j∈N be a basic sequence in L p (X, F, µ), p ≥ 2. If for some 0 < ≤ 2: converges in the norm of L p (X, F, µ) then: Some definitions and known results Recall that a Schauder basis or a basic sequence (a basis for a closed subspace) is call unconditional if it verifies one (and hence all) of the equivalences of the following proposition: Proposition 2.1.A basic sequence {x n } n∈N in a Banach space B is unconditional if and only if one of the following conditions is fullfilled: i) For every permutation π of the integers the sequence {x π(n) } n∈N is a basic sequence (is a basis of span{x n } n∈N ). ii) For every subset of integers σ the convergence of iii) The convergence of iv) The convergence of where θ n = ∓1 arbitrarely. As a consequence of this proposition using the properties of the Rademacher functions, an alternative characterization can be given for unconditional basic sequences in the particular case of σ-finite L p spaces, [13]: Then it is unconditional if and only if there exist A p , B p positive constants such that: This result gives a very useful characterization in terms of the equivalence of norms for span{f j } j .This equivalence will be very important in the sequel, though we will use it several times without refering to it explicitly, as it will become clear from the context.Our results rely on maximal inequalities, so we need to define several maximal operators: 3. Auxiliary Results In the following we will also suppose that (X, F, µ) is a σ-finite measure space.We will call absolute constants as K, C, c, C p , etc. Logarithms are taken in base 2. 
In our results {f j } j constitutes an unconditional basic sequence, so sometimes we will no mention this fact as in the following: Proof.first, let us bound the difference On the other hand, given x ∈ X: Integrating at both sides of the inequality, and then 2 In the following we will use a rather classical technique which consists in decomposing the partial sums in dyadic blocks [1], [14].From this fact we can easily see that if we take f ∈ L p (X, F, µ), f This follows immediately from the following fact: Take x ∈ X then there exists n * (x) such that max From this last result we obtain: 5) Proof.There exists a disjoint family of sets But, if the f k 's form an unconditional basic sequence then then from equation 3.4 we have In particular, replacing f with Since p ≥ 2, using a similar argument as in inequality 3.6, we have from which the result follows. 2 With all these results we can completely bound the maximal function Mf and hence we can give a proof of the following: Remark Here we are considering the limit of the partial sum operators S k f .I) and II) in some way recover the result of [2]. Proof.Part I) is trivial.Part II) follows from the fact that the maximal function Mf can be bounded by combining propositions 3.2 and 3.3: Main Result In the following we will consider the case an unconditional basic sequence, with this in mind, as a direct application of theorem 3.1 we get that if In particular, if m n = 2 n , since the {f j } j are an unconditional basic sequence: . So we have proved the following: From this fact, we may prove the main result of this work, but as in as in [10], the proof relies in a maximal inequality which we shall prove first: Lemma 4.1.Fixed {f j } j an unconconditional basic sequence, for every 0 < ε ≤ 2, there exists a constant C(ε) depending only on ε such that for all (a j ) j ∈ C N , and all N ∈ N: where Proof. Let us proceed as in [10], consider the case N = 2 2 r = n r ; r ∈ N; and define I nr = {1, ..., 2 2 r } and for p = 0, 1, ..., r − 1 set J p = {2 2 p + 1, ..., 2 2 p+1 }.Now, we can find a permutation π ∈ S I nr such that: and we can also find a permutation π 0 ∈ S I n r such that that is, a permutation such that π(J p ) is reordered with the natural order of the indexes, for each p = 0, ..., r − 1.On the other hand, for some N (x): Taking the p norm at each side of the inequality and by the triangle inequality: Apply the maximal inequality 3.8 obtained in the proof of theorem 3.1 to the finite sum to get: . Now, if i ≤ 2 2 s then log(i + 1) ≤ 4 2 2 s , so that: , by Hölder's inequality, with 1 p + 1 q = 1.Since p ≥ 2 then 4.3 is less than: Combining inequalities 4.2 and 4.5 we obtain: Now we can give a proof for theorem 1.1: , where I N = {2 N + 1, ..., 2 N+1 }.Using lemma 4.1 we can estimate as in 3.3. 2 and Monotone Convergence)Then under the hypothesis of theorem 1.1 we have:∞ X N=1 °°°M N f °°°p p < ∞ .(4.7)Now, we can proceed as in theorem 3.1 or we can use equation 4.7 to obtain: limN→∞ M N f (x) = 0 .On the other hand the (unconditional) convergence inL p norm of implies the convergence in L p norm of ∞ X k=1 a k loglog(k + 1)f k .(4.8)As a consequence of proposition 4.1, and since given m ∈ N and 2 m−1 < n < 2 m for almost all x ∈ X the following holds:|S n f (x) − f (x)| ≤ M m+1 f (x) + |S 2 m f (x) − f (x)| where f = ∞ P k=1 a k f k ( L p ), then by proposition 4.1, S 2 m f (x) converges to f (x) [µ]-a.e.as m → ∞, and on the other hand M m+1 f (x) → 0, so that the proof is complete.2 Juan M. 
Medina, Departamento de Matemática, Instituto Argentino de Matemática, Universidad de Buenos Aires, Paseo Colón 850 (1063) Capital Federal, CONICET, Argentina, e-mail: jmedina@fi.uba.ar; and Bruno Cernuschi-Frías, Facultad de Ingeniería, Instituto Argentino de Matemática, Universidad de Buenos Aires, Paseo Colón 850 (1063) Capital Federal, CONICET, Argentina, e-mail: bcf@ieee.org
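The displayed formulas of this article appear to have been lost during extraction, including the hypotheses of Theorem 1.1. For orientation only, the block below restates the classical Menchoff–Rademacher theorem that Theorem 1.1 generalizes (replacing orthogonality in L^2 by unconditional basic sequences in L^p, p ≥ 2); this is the standard textbook formulation, not a reconstruction of the authors' exact statement.

```latex
% Standard statement of the classical Menchoff--Rademacher theorem
% (assumes an amsthm-style \newtheorem{theorem}{Theorem} declaration).
\begin{theorem}[Menchoff--Rademacher]
Let $\{\varphi_n\}_{n\in\mathbb{N}}$ be an orthonormal system in
$L^{2}(X,\mathcal{F},\mu)$ and let $(a_n)_{n\in\mathbb{N}}\subset\mathbb{C}$
satisfy
\[
  \sum_{n=1}^{\infty} |a_n|^{2}\,\log^{2}(n+1) \;<\; \infty .
\]
Then the series $\sum_{n=1}^{\infty} a_n\varphi_n$ converges
$\mu$-almost everywhere.
\end{theorem}
```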
2018-12-04T09:46:59.350Z
2013-12-01T00:00:00.000
{ "year": 2013, "sha1": "377fcfa1bdeb14b55157eba5d32344dae24a863a", "oa_license": "CCBY", "oa_url": "https://www.revistaproyecciones.cl/index.php/proyecciones/article/download/1299/1011", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "377fcfa1bdeb14b55157eba5d32344dae24a863a", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
261160463
pes2o/s2orc
v3-fos-license
Modified Frailty Index and Albumin-Fibrinogen Ratio Predicts Postoperative Seroma After Laparoscopic TAPP Background Postoperative seroma is the most common minor complication after inguinal hernia repair surgery and can have negative consequences. The objective of this study was to identify potential risk factors for postoperative seroma. Methods This study consecutively included 354 elderly patients with inguinal hernia who underwent laparoscopic Transabdominal preperitoneal Patch Plasty (TAPP). Seroma diagnosis was conducted by the same experienced surgeon based on the physical examinations combined with ultrasound. Risk factors for seroma were identified through univariate analysis and subsequently included in the binary multivariate logistic regression model. Results A total of 40 patients experienced postoperative complications of seroma, with an incidence rate of 11.3% (40/354). The binary logistic regression analysis revealed that obesity (OR: 2.98, 95% CI: 1.20–7.41, P = 0.018), disease duration ≥ 4.5 years (OR: 4.88, 95% CI: 2.14–11.18, P < 0.001), albumin-fibrinogen ratio (AFR) level < 9.25 (OR: 6.13, 95% CI: 2.00–18.76, P = 0.001), and modified frailty index (mFI) score ≥ 0.225 (OR: 6.38, 95% CI: 2.69–15.10, P < 0.001) were four independent risk factors for postoperative seroma. Conclusion Obesity, prolonged disease duration, decreased AFR level, and increased mFI score independently predict postoperative seroma after laparoscopic TAPP. Introduction Inguinal hernia is a very common surgical condition and over 20 million patients undergo inguinal hernia repair surgery worldwide every year. 1 The prevalence of inguinal hernia increases with age, and it is 8-10 times higher in men than women. 2,3 Because of the advantages of small wounds and allowing visual inspection of both inguinal regions, laparoscopic repair constitutes a large proportion of inguinal hernia surgeries. 4 Transabdominal preperitoneal Patch Plasty (TAPP) is a very commonly used surgical method for inguinal hernia repair. 5 Seroma, defined as the localized accumulation of serous fluid within tissue, represents the most frequent minor complication following inguinal hernia repair surgery. 6 Due to the varying definitions and interpretations of seroma, the reported seroma incidence rates range widely from 0.5% to 78%, suggesting significant discrepancies among different studies. 7,8 Although seroma typically resolves within a few weeks, it can lead to various complications. Among these, infected seromas are particularly challenging and may necessitate mesh removal or result in hernia recurrence. 9,10 The occurrence of seroma can result in prolonged hospital stays, reduced patient comfort during treatment, hernia patch displacement and hernia recurrence. 1 Therefore, it is imperative to conduct extensive research on the factors contributing to the development of postoperative seroma. According to a recent study conducted by Atsushi et al, 4 it was found that two risk factors significantly contribute to the formation of postoperative seroma/hematoma: hernia size ≥ 3 cm and internal inguinal hernia. Despite extensive research on the subject, there is currently no consensus regarding the specific risk factors associated with the development of postoperative seroma. In addition, most studies focus on the surgery associated factors as main reasons and the points towards systematic (eg, frailty) are quite limited. 
The objective of this study was to identify the potential risk factors for postoperative seroma formation following TAPP surgery. Materials and Methods Patients This study consecutively included 354 elderly patients with inguinal hernia who underwent laparoscopic TAPP at Taizhou People's Hospital between 2020 and 2022. The inclusion criteria were: (1) elderly patients ≥ 65 years old; (2) with clinical manifestations, imaging, and intraoperative findings that met the diagnostic criteria for inguinal hernia; (3) undergoing laparoscopic TAPP without contraindications; (4) with no history of previous inguinal surgery, and (5) follow-up ≥ 3 months. The exclusion criteria were: (1) lack of clinical data or follow-up; (2) recurrent hernia; (3) with blood coagulation disorder, and (4) undergoing laparotomy or conversion to open repair. This study was approved by the ethics committee of our Hospital and enrolled patients were required to offer written consent. Surgical Procedures Preoperative prophylactic antibiotics were selectively administered based on specific conditions with high risk of infection (eg, combined with several underlying disease), rather than being routinely prescribed. All surgeries were performed by the same experience surgical team under general anesthesia. The intra-abdominal CO 2 pressure was maintained at 8-10 mmHg throughout the procedure. Following dissection, a pre-shaped lightweight polypropylene mesh (3D Max, Bard) was utilized to precisely fit the defect. The mesh was inserted through the 10 mm umbilical port and unrolled to cover the entire myopectineal orifice on the hernia side(s) without fixation. The peritoneal flap was closed utilizing a continuous suture pattern with 3-0 Vicryl sutures. Data Collection The data presented herein were obtained from the medical records of the subjects enrolled in this study: (1) Demographic data, including age, gender, body mass index (BMI), obesity, and smoking habits; (2) Disease and treatment-related data, including duration of disease, antithrombotic drug, history of abdominal surgery, sides, operation time, hernia aperture diameter, types of hernias, classification of hernias, emergency surgery, and mFI; (3) Preoperative laboratory data (obtained on the morning of the day prior to surgery), including hemoglobin, white blood cell (WBC), C-reactive protein (CRP), albumin, fibrinogen, neutrophil, lymphocyte, and platelet. Observational Endpoints and Definitions The primary endpoint of the current study was the occurrence of seroma formation within 3 months following TAPP surgery. Patients were routinely follow-up at postoperative 7 days, 1, and 3 months. Seroma diagnosis was conducted by the same experienced surgeon based on the physical examinations combined with ultrasound. A seroma is characterized as the accumulation of fluid beneath the skin without the presence of any solid component. 11 Modified frailty index (mFI) was calculated based on the previous criteria, incorporating eleven components (dividing the number of positive items by 11). 12,13 The albumin-fibrinogen ratio (AFR) was determined by dividing the serum albumin level by the fibrinogen level. The neutrophil-lymphocyte ratio (NLR) and platelet-lymphocyte ratio (PLR) were calculated by dividing the levels of neutrophils and platelets, respectively, by the level of lymphocytes. Statistical Analysis The statistical analysis was conducted using SPSS 23.0 (SPSS Inc, IL, USA) and GraphPad Prism 9.0 (GraphPad Inc., CA, USA), respectively. 
The data are presented as means accompanied by their corresponding standard deviations (SD) and numbers expressed in percentages (n,%). Statistical analysis was conducted using Student t, Mann Whitney U, and Chi-square tests as appropriate. Additionally, a receiver operating characteristic (ROC) curve was constructed to evaluate the predictive and cut-off values of continuous variables. Binary logistic regression analyses were performed to identify https://doi.org/10.2147/CIA.S418338 DovePress Clinical Interventions in Aging 2023:18 risk factors for postoperative seroma. A test for multicollinearity, which involved the use of the tolerance and Variance Inflation Factor (VIF), was performed to assess the presence of multicollinearity among the factors. Significance was set at a two-tailed P-value of < 0.05. Results Based on the predefined inclusion and exclusion criteria, a total of 354 elderly patients with inguinal hernia were ultimately enrolled in this study. The cohort had an average age of 76.2 years, with male patients accounting for 85.9% (304/354) of the sample. Within the first three months after surgery, a total of 40 patients experienced postoperative complications of seroma, with an incidence rate of 11.3% (40/354). Table 1 displays the demographic, disease-related, and treatment-related characteristics of patients with or without postoperative seroma. Patients who developed seroma exhibited a significantly higher rates of obesity (P = 0.003), a longer duration of disease (P = 0.013), and a larger hernia aperture diameter (P = 0.003) compared to those without seroma. Patients in the seroma group exhibited a higher incidence of bilateral hernias (P = 0.041) and correspondingly longer operative duration (P = 0.023). Moreover, patients with irreducible/incarcerated hernias (P = 0.043) or those undergoing emergency surgery (P = 0.018) appear to have a higher susceptibility to seroma formation. The comparison of laboratory variables revealed no significant differences in hemoglobin, WBC, NLR, and PLR between patients with or without seroma (P > 0.05). Patients in the seroma group exhibited significantly lower levels of AFR (P < 0.001), higher levels of CRP (P = 0.040), and a greater mFI score (P < 0.001) compared to those in the nonseroma group. Discussion The exact cause of postoperative seroma remains incompletely understood. Given its adverse effects, investigating potential risk factors and implementing preventative measures is crucial for improving prognosis. The overall incidence of seroma in this study was 11.3%, as determined by physical examinations combined with ultrasound, which is consistent with the reported rate of 12.6% by Ruze et al 14 and lower than the rate of 19.2% reported by Morito et al. 4 This study firstly highlighted obesity, disease duration ≥ 4.5 years, AFR level < 9.25, and mFI score ≥ 0.225 as four independent risk factors for postoperative seroma. Previous studies have reported a close relationship between the hernia aperture diameter and the formation of postoperative seroma. 14 The local tissue adhesion and increased surgical difficulty caused by large hernia aperture diameter are probably the main reasons for the increased risk of postoperative seroma formation. However, our results did not support hernia aperture diameter as an independent risk factor for postoperative seroma. We consider that different patient characteristics, and different surgical strategies (eg, patch types, fixation methods, etc.) 
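The derived indices used throughout these Results follow directly from the definitions in the Methods: mFI is the number of positive frailty components divided by 11, AFR is serum albumin divided by fibrinogen, and NLR/PLR divide the neutrophil and platelet counts by the lymphocyte count. A minimal Python sketch (with hypothetical patient values and units) is:

```python
def modified_frailty_index(positive_items: int, total_items: int = 11) -> float:
    """mFI-11: number of positive frailty components divided by 11."""
    return positive_items / total_items

def ratios(albumin_g_l, fibrinogen_g_l, neutrophils, lymphocytes, platelets):
    """Albumin-fibrinogen, neutrophil-lymphocyte and platelet-lymphocyte ratios."""
    return {
        "AFR": albumin_g_l / fibrinogen_g_l,
        "NLR": neutrophils / lymphocytes,
        "PLR": platelets / lymphocytes,
    }

# Hypothetical patient: 3 of 11 frailty items positive, albumin 38 g/L,
# fibrinogen 4.2 g/L, neutrophils 4.1 and lymphocytes 1.6 (x10^9/L), platelets 210 (x10^9/L).
print(modified_frailty_index(3))           # ~0.27, above the 0.225 cut-off
print(ratios(38.0, 4.2, 4.1, 1.6, 210.0))  # AFR ~9.0, below the 9.25 cut-off
```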
were the main potential reasons for the different conclusions. A previous study conducted by Tomita et al 15 has indicated that patients with an elevated BMI are at a higher risk of developing seroma compared to those with a normal BMI. Additionally, another study by Klink et al 16 has also revealed a strong correlation between higher BMI and increased formation of seroma. These findings are in line with our results, indicating that patients with obesity exhibit a disproportionately enlarged wound cavity, which subsequently facilitates the development of seroma. 16 Our research has also uncovered a strong correlation between the duration of the disease and the formation of seromas. We consider that long-term inguinal hernia, which contains a significant amount of abdominal contents within the hernia sac, is prone to inducing local inflammatory responses. This can lead to hyperplasia of postoperative scar tissue, obstruction of lymphatic reflux channels and an increased risk of postoperative seroma. This could potentially serve as an explanation for the independent association between the duration of disease and seroma risk. AFR, which is calculated based on albumin and fibrinogen levels, serves as an optimal indicator for evaluating nutritional status, systemic inflammation, and coagulation function. 17 Moreover, it has been widely utilized as a prognostic factor in various diseases. 18,19 A decreased serum albumin is correlated with impaired tissue 20 and elevated levels of proinflammatory mediators. 21 In addition, fibrinogen exerts potent effects on both acute and reparative inflammatory pathways that modulate the spectrum of tissue injury, remodeling, and repair. 22 A reduction in the albumin level or an increase in fibrinogen, or both, can lead to a decrease in the AFR. This may partially account for the strong correlation between the decreased AFR and postoperative seroma formation. mFI, a widely used tool to assess the frailty status, has been widely used as prognostic factors in elderly patients. 23,24 A recent study conducted on patients who underwent total joint arthroplasty revealed that frailty was associated with an increased risk of postoperative complications, including hematoma/seroma formation, 25 which is consistent with our findings. Frailty is commonly associated with compromised immune function and reduced body resistance, rendering individuals more susceptible to infection and inflammation following surgery, 26 thereby elevating the risk of postoperative seroma formation. Furthermore, the presence of muscle atrophy and impaired function in frail patients can negatively impact lymphatic drainage and fluid metabolism, 27,28 potentially contributing to the development of postoperative seroma. In conclusion, this study has underscored that obesity, prolonged disease duration, decreased AFR level, and increased mFI score are four independent risk factors for postoperative seroma in elderly patients with inguinal hernia after laparoscopic TAPP. This study has some limitations. The single centre single surgeon setting without an independent assessor of the clinical results parameters is a main weakness. Moreover, whether the improvements of some factors (eg, AFR) can reduce the incidence of seroma needs to be verified by further prospective studies. Data Sharing Statement Please contact the corresponding author for data requests. 
Ethics Approval and Consent to Participate
This study was approved by the ethics committee of Taizhou People's Hospital, Taizhou Clinical Medical School of Nanjing Medical University. All methods were carried out in accordance with the relevant guidelines of the Declaration of Helsinki. Written informed consent was obtained from all participants/patients.
Funding
There is no funding to report.
Disclosure
The authors report no conflicts of interest in this work.
2023-08-26T15:53:29.227Z
2023-08-01T00:00:00.000
{ "year": 2023, "sha1": "ffddbf5180adca6b50bc9a9b45d59eb4d085b0a0", "oa_license": "CCBYNC", "oa_url": "https://www.dovepress.com/getfile.php?fileID=92184", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "ec842f0948ed96c0d2056834b5895130ae859ea5", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
219659622
pes2o/s2orc
v3-fos-license
Rainfed Evaluation of Genotypes for Seed Yield and Yield Components in Sunhemp (Crotalaria juncea L.) and Dhaincha (Sesbania aculeata L.)
In agriculture, green manure is created by leaving uprooted or sown crop parts to wither on a field so that they serve as a mulch and soil amendment. The plants used for green manure are often cover crops grown primarily for this purpose. In the present study, ten Sunhemp (Crotalaria juncea L.) and eleven Dhaincha (Sesbania aculeata L.) genotypes were evaluated for seed yield during kharif 2018 under a rainfed ecosystem. Observations were recorded on days to 50% flowering, days to maturity, plant height (cm), number of branches per plant, number of pods shattered per plant, number of dry pods, number of green pods, number of seeds per pod, test weight (g) and seed yield per plant (g). Values significantly higher than the check were recorded for all the characters except plant height in Sunhemp and test weight in Dhaincha. Seed yield per plant exhibited wide variation in Sunhemp, ranging from 13.30 to 27.77 g per plant with an overall mean of 18.94 g; the highest seed yield was recorded in NTPSH 04 (27.77 g), followed by NTPSH 03 (26.12 g). In Dhaincha, the lowest seed yield was recorded in NTPD 04 (14.46 g) and the highest in NTPD 08 (34.67 g). However, high biomass production in terms of root and shoot is the main criterion for green manures, so selection will be effective for entries combining high biomass with high seed yield.
Introduction
India has changed from a region of food scarcity to food sufficiency through increased fertilizer use at subsidized prices, but the use of organic manures, including green manure, has declined substantially. Inorganic fertilizers are becoming more expensive, so the sustainability of soil productivity has become a question and alternative sources to supplement inorganic fertilizers are being sought. Green manuring is a low-cost and effective technology for minimising the cost of fertilizers and safeguarding productivity. Crops grown for the purpose of restoring or increasing the organic matter content of the soil are called green manure crops. The use of green manure crops in a cropping system is called 'green manuring', where the crop is grown in situ or brought from outside and incorporated when it is purposely grown.
A green manure crop should possess characteristics such as multipurpose use, short duration, fast growth, high nutrient accumulation ability, tolerance to shade, flood, drought and adverse temperatures, wide ecological adaptability, efficient water use, early onset of biological nitrogen fixation, high N accumulation rates, timely release of nutrients, photoperiod insensitivity, high seed production, high seed viability, ease of incorporation, ability to cross-inoculate or responsiveness to inoculation, pest and disease resistance and a high N sink in underground plant parts. In line with these properties, Sunhemp (Crotalaria juncea L.) and Dhaincha (Sesbania aculeata L.) are suitable species for green manuring, with high biomass production of 20-25 t/ha (Thipathi et al., 2013). The lack of availability of adequate quality seed, at the appropriate time and at a reasonable price, for smallholder and marginal farmers is a major constraint in Sunhemp and Dhaincha cultivation. Quality seed production of Sunhemp and Dhaincha is given meagre importance in spite of huge demand from farmers. Further, the possibility of seed production under rainfed conditions paves the way for identification of genotypes with high water-use efficiency.
Materials and Methods
The material for the study comprised 10 Sunhemp and 11 Dhaincha genotypes, including a local check in each crop. These genotypes were collected from different parts of India, representing diverse eco-geographic origins. The experiment was conducted at Agricultural Research Station (ARS), Nathnaipally during kharif 2018 in a randomized block design with three replications. Each genotype was sown during June II FN, 2018 in six rows of 4 m length, at a spacing of 30 cm between rows and 10 cm between plants within rows in Sunhemp, and 45 cm between rows and 20 cm between plants within rows in Dhaincha. Recommended agronomic practices were followed to raise a healthy crop. Observations were recorded for each entry on five randomly selected plants for yield and yield-contributing characters, viz., days to 50% flowering, days to maturity, plant height (cm), number of branches per plant, number of pods shattered per plant, number of dry pods, number of green pods, number of seeds per pod, test weight (g) and seed yield per plant (g). Simple statistical analysis was carried out for the mean, range, coefficient of variation (CV) and critical difference (CD).
Results and Discussion
The trial was conducted during kharif under a rainfed ecosystem, mainly to evaluate the survival ability of the genotypes under the scarce-rainfall conditions of Medak district, Telangana State. Timely availability of quality seed is usually a major constraint in Telangana State. In principle, the best season for seed production is rabi, but the availability of water during the summer months is a question, and most of the area in the state is rainfed. These areas can be brought under green manure seed production (Chandrasekhar, 2013). Hence, the present study was conducted to evaluate the possibility of green manure seed production during kharif. Agricultural Research Station, Nathnaipally was established in 2008 with non-plan financial assistance from Professor Jayashankar Telangana State Agricultural University (formerly Acharya N.G. Ranga Agricultural University). ARS, Nathnaipally is situated in Narsapur Mandal, Medak District, Telangana, India, at geographical coordinates 17°44'9" North, 78°16'54" East. The environment in the area is tropical.
The farming system is rainfed and the soil type is red sandy loam (shallow to medium). The site is characterized by an annual rainfall of 732.5 mm and temperatures ranging from 15°C to 45°C. Seed yield is a complex, polygenic trait that is highly influenced by environmental conditions. A successful breeding programme depends upon the genetic variability present among the different genotypes. Phenotypic selection of parents for hybrids based on their performance alone may not always be a reliable procedure, since phenotypically superior genotypes may yield inferior hybrids and/or poor recombinants in the segregating generations. Hence, a simple analysis of the mean, range, CV and CD for days to 50% flowering, days to maturity, plant height (cm), number of branches per plant, number of pods shattered per plant, number of dry pods per plant, number of green pods, number of seeds per pod, test weight (g) and seed yield per plant (g) was carried out in Sunhemp and Dhaincha (Tables 1 and 2).
Days to 50% flowering
The overall mean for days to 50% flowering was 72 days in Sunhemp and 48 days in Dhaincha. This indicates that rains during the vegetative period delayed flowering in Sunhemp, and a decline in growth was observed after the initiation of flowering, whereas Dhaincha was not affected, with continuous moisture availability, and growth continued even after flowering. In Sunhemp, NTPSH 06 (68 days) was the earliest to flower, whereas NTPSH 05 (76 days) took the highest number of days to flower. Similarly, in Dhaincha, NTPD 02 (46 days) was the earliest to flower and NTPD 10 flowered in 51 days. For biomass production and green manuring, late flowering would be the desirable trait. Three genotypes, viz., NTPSH 05, NTPSH 07 and NTPSH 08, took significantly more days to flower than the check (72 days) in Sunhemp, whereas in Dhaincha only one entry, NTPD 10, recorded more days to flowering than the check (49 days).
Days to maturity
Maturity duration varied from 136 to 148 days with a mean of 141 days in Sunhemp; NTPSH 06 was early while NTPSH 09 was late in maturity. The mean number of days to maturity in Dhaincha was 127, with a range from 123 days (NTPD 10) to 136 days (NTPD 05). Variation in days to maturity provides ample scope for selection of early- and late-maturing plants for further improvement. None of the genotypes recorded more days to maturity than the check (146 days) in Sunhemp, while in Dhaincha only one genotype, NTPD 05, recorded more days than the check (128 days).
Plant height (cm)
In Sunhemp, plant height varied from 83.60 to 123.60 cm with an overall mean of 100.66 cm. Among all the genotypes, NTPSH 05 (83.60 cm) was the shortest, while NTPSH 05 (123.60 cm) was the tallest. In Dhaincha, not much variation in plant height was observed. Plant height continued to increase even after flowering, and the cessation of rains resulted in a decline in the rate of height increase.
Number of branches per plant
The number of branches varied from 3.8 (NTPSH 05) to 7.8 (NTPSH 06) with an overall mean of 5.58 branches per plant in Sunhemp, while in Dhaincha the lowest number of branches per plant was observed in NTPD 01 (14.8) and the highest in NTPD 08 (24.6). The results indicate that, for any green manure crop, more branches per plant may give more biomass, which is a desirable trait, and there is scope for improvement through selection for this character, giving the breeder reliable gains in the next generation.
In Sunhemp, all the genotypes except NTPSH 07 recorded a significantly higher number of branches than the check (4.0), whereas in Dhaincha four genotypes, namely NTPD 02, NTPD 06, NTPD 08 and NTPD 09, were significantly superior to the check (18.2).
Number of shattered pods per plant
The number of shattered pods observed in the Sunhemp genotypes was nil (0), indicating that shattering is not a problem in Sunhemp, whereas in Dhaincha shattering was observed with a mean value of 7.60. Among all the genotypes, no shattering was observed in four, namely NTPD 01, NTPD 06, NTPD 07 and NTPD 08, while the highest shattering was noticed in NTPD 04 (15.2). The results indicate that shattering plays a negative role in selection for seed yield. Two entries, namely NTPD 04 and NTPD 05, recorded significantly more shattered pods than the check (10.2).
Number of dry pods per plant
In Sunhemp, the number of dry pods equals the total number of pods, which ranged from 47.4 (NTPSH 07) to 116.2 (NTPSH 03) with a mean value of 88.94. In Sunhemp, the delayed flowering, increased vegetative growth and increased number of branches together account for the increased number of pods, which will exert direct selection pressure on yield. Similarly, in Dhaincha, the number of dry pods varied from 30.6 (NTPD 05) to 101 (NTPD 06) with a mean value of 64.85. In Sunhemp, all the genotypes except NTPSH 06 and NTPSH 07 recorded a significantly higher number of dry pods than the check (48.2), while in Dhaincha, NTPD 01, NTPD 02, NTPD 03, NTPD 07, NTPD 08 and NTPD 10 produced significantly more pods than the check (46.4) (Venkanna, 2014).
Number of green pods per plant
No green pods were observed at maturity in Sunhemp, revealing synchronous maturity, whereas in Dhaincha synchronous maturity was difficult to find, i.e., a few pods remained green even after the cessation of rains.
Test weight (g)
The average test weight in Sunhemp was 2.87 g, with a range of 2.0 to 4.5 g. NTPSH 09 had the lightest seeds (2.0 g) while NTPSH 05 recorded the heaviest (4.5 g); this trait can be improved through heterosis breeding. Similarly, test weights from 2.104 g (NTPD 01) to 2.418 g (NTPD 09) were recorded in Dhaincha, with a mean value of 2.28 g. A significantly superior test weight was observed in NTPSH 05 over the check (2.6 g) in Sunhemp, whereas in Dhaincha none of the genotypes was significantly superior to the check (2.342 g).
Seed yield per plant (g)
Seed yield per plant exhibited wide variation in Sunhemp, ranging from 13.30 to 27.77 g per plant with an overall mean of 18.94 g. The highest seed yield was recorded in NTPSH 04 (27.77 g), followed by NTPSH 03 (26.12 g). In Dhaincha, the lowest seed yield was recorded in NTPD 04 (14.46 g) and the highest in NTPD 08 (34.67 g). However, high biomass production in terms of root and shoot is the main criterion for green manures, so selection will be effective for entries combining high biomass with high seed yield. Significantly superior seed yield per plant over the check (13.8 g) was observed in NTPSH 01, NTPSH 02, NTPSH 03, NTPSH 04 and NTPSH 09, whereas in Dhaincha the genotypes NTPD 01, NTPD 02, NTPD 06, NTPD 07 and NTPD 08 recorded significantly superior yield over the check (14.68 g) (Triveni, 2011).
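The significance statements above rest on the critical difference (CD) from the randomized block design ANOVA, together with the coefficient of variation (CV) mentioned in the methods. As a reading aid, a minimal sketch of how these two statistics are conventionally obtained from such a trial is given below; the data file and column names are hypothetical, and the formulas are the standard RBD ones rather than anything reported in the paper.

```python
import pandas as pd
from scipy import stats

# Hypothetical long-format trial data: one plot per row, with the genotype,
# the replication and the observed trait value (e.g., seed yield per plant).
df = pd.read_csv("rbd_trial.csv")  # columns: genotype, replication, value

g = df["genotype"].nunique()           # number of genotypes
r = df["replication"].nunique()        # number of replications
grand_mean = df["value"].mean()

# Sums of squares for a randomized block design (one observation per cell).
ss_total = ((df["value"] - grand_mean) ** 2).sum()
ss_geno = r * ((df.groupby("genotype")["value"].mean() - grand_mean) ** 2).sum()
ss_rep = g * ((df.groupby("replication")["value"].mean() - grand_mean) ** 2).sum()
ss_error = ss_total - ss_geno - ss_rep
df_error = (g - 1) * (r - 1)
ems = ss_error / df_error              # error mean square

cv_percent = 100 * ems ** 0.5 / grand_mean   # CV% = 100 * sqrt(EMS) / grand mean
t_crit = stats.t.ppf(0.975, df_error)        # two-sided 5% level
cd_5 = t_crit * (2 * ems / r) ** 0.5         # CD = t * SE of a difference of means

print(f"CV = {cv_percent:.2f} %, CD (5%) = {cd_5:.2f}")
```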
2020-05-21T09:12:23.501Z
2020-04-20T00:00:00.000
{ "year": 2020, "sha1": "43ccc64d161f4b13973400470d123c0bd40175d6", "oa_license": null, "oa_url": "https://www.ijcmas.com/9-4-2020/T.%20Shobha%20Rani,%20et%20al.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "716c2f296706aa1493746b6fb2b1d9f78b48bcac", "s2fieldsofstudy": [ "Agricultural and Food Sciences" ], "extfieldsofstudy": [ "Biology" ] }
55390831
pes2o/s2orc
v3-fos-license
Analytical modeling of self-powered electromechanical piezoelectric bimorph beams with multidirectional excitation
Unused mechanical energy can be found in numerous ambient vibration sources in industry, including rotating equipment, vehicles, aircraft, piping systems, fluid flow, and even external movement of the human body. A portion of the vibrational energy can be recovered using piezoelectric transduction and stored for subsequent smart system utilization, for applications including powering wireless sensor devices for health condition monitoring of rotating machines and defence communication technology. The vibration environment in the considered application areas is varied, often changes over time, and can have components in three perpendicular directions, simultaneously or singularly. This paper presents the development of analytical methods for modeling of self-powered cantilevered piezoelectric bimorph beams with tip mass under simultaneous longitudinal and transverse input base motions, utilizing the weak and strong forms of Hamiltonian's principle and space- and time-dependent eigenfunction series which were further formulated using orthonormalization. The reduced constitutive electromechanical equations of the cantilevered piezoelectric bimorph were subsequently analyzed using Laplace transforms and frequency analysis to give multi-mode frequency response functions (FRFs). The validation between theoretical and experimental results at the single mode of eigenfunction solutions reduced from the multi-mode FRFs is also given.
Introduction
The investigation of energy conversion techniques utilizing ambient vibration has been of great interest to many researchers for at least the past decade. Although most developments of embedded piezoelectric structures have focused on applications of crack detection using screw dislocation and inclusion techniques [1,2], health condition monitoring [3] and control systems [4], recent applications of piezoelectric devices have also included the conversion of ambient vibration into low electrical power. The energy extracted from vibrating devices and structures can be utilized for powering electronic devices, supplying direct current into rechargeable batteries or electrical power storage devices. One of many applications being considered is powering smart wireless sensor devices. The use of piezoelectric material for energy conversion requires knowledge of analytical methods, circuit components, material properties and geometrical structure. An extensive review of energy harvesting devices using electrostatic, electromagnetic and piezoelectric transductions has been given in [5]. It is important to note here that the overall inherent mechanism of these transductions, as shown in the simple diagram in Figure 1 (an electromechanical system coupled to an electronic circuit), depends on the material design, the nature of the vibration environment and strategies to increase the power by tuning the mechanical and electrical systems in order to maximize the power delivered to the smart wireless sensor systems [6]. The major benefits of piezoelectric transduction include reasonable prospects for self-diagnostic detection in aircraft sensor networks, microelectromechanical system design, compact configuration, high sensitivity to low input mechanical vibration, and suitability for use as a patch or embedded with other substructures, as compared with other transduction methods [7][8][9].
In summary, the three major transductions have advantages and disadvantages, as shown in Table 1. The piezoelectric bimorph beam represents a useful candidate for power harvesting as relatively high strain fields can be induced as a consequence of the input vibration producing the resulting electrical field. The resultant extracted electrical energy can be optimized by utilizing an electronic circuit capable of supplying the direct current into a rechargeable battery for the usage of wireless sensor communication as shown in [10]. Moreover, there have been numerous extensive analytical studies of electromechanical piezoelectric systems associated with experimental validations. Smits and Choi [11] and Wang et al. [12] derived an analytical solution for the static condition of the bending piezoelectric bimorph beam. However, their analytical methods cannot be applied to the dynamic piezoelectric harvester due to the required coupled electromechanical response. Roundy and Wright [13] investigated the analytical solution using the electrical equivalent of the electromechanical transverse bending form for powering electrical devices but it was limited to the single mode. Some other investigations used analytical methods limited to the single mode of Rayleigh-Ritz's analytical approach as shown by du Toit et al. [14]. The normalized single mode dynamic equation of piezoelectric power harvesting using non-dimensional parameters of dynamic functions was shown in [15] and the multi-mode frequency response using the closed-form method of the piezoelectric bimorph was derived in [16]. The parametric consideration of the micromechanical piezoelectric unimorph beam using the Rayleigh-Ritz method with condensed matrix equation form was shown by Goldschmidtboeing and Woias [17]. The study of the optimal piezoelectric patch shape using finite element methods has been discussed by Friswell and Adhikari [18]. Sun et al. [19] discussed the single mode dynamic response of the piezoelectric nanostructure power harvesting. Harigai et al. [20] discussed the experimental study of cantilevered micro-piezoelectric unimorph patched onto a silicon substructure under input transverse vibration from an electromagnetic mechanical shaker. The electromechanical dynamic equation derived using the weak form of Hamiltonian's principle introduced by Lumentut and Howard [21,22] was used to formulate complete forms of multi-mode frequency response functions using the normalized Ritz eigenfunction form. Furthermore, the theoretical method was validated with experimental results by considering the two input base motion exciting the piezoelectric bimorph to stimulate polar power harvesting [23]. This paper presents a novel analytical method of modeling the electromechanical dynamic equation of a piezoelectric bimorph beam under two input base acceleration using the weak and strong forms of Hamiltonian's principle. The mathematical derivations of electromechanical dynamic equations are analyzed to give the piezoelectric couplings due to polarity-electric field, the bending transverse and longitudinal stiffness coefficients due to the stress fields, mass moment of inertias due to velocities of bimorph element including tip mass element, internal capacitance due to permittivity, inertia input loads due to base excitations and electrical charge output. The strain and polarity-electric field depends not only on the physical material characteristics and its geometry but also on the polarity sign conventions. 
New coupling superposition methods for electrical series and parallel connections are shown in order to illustrate the clear mathematical concepts and theories of forward and backward piezoelectric coupling coefficients under input dynamic excitations (longitudinal and transverse motions). Moreover, the electromechanical weak and closed form methods provide deep analytical insight of the system behavior. The weak form analytical approach derived from the strong form solution was further derived using the Ritz method by introducing an eigenvector function (Ritz coefficient) and space-and time-dependent Ritz eigenfunction series which were further formulated using orthonormalization. The closed-form boundary value method derived from the strong-form method was further formulated using a direct analytical solution with orthonormalization by introducing the space-and time-dependent eigenfunction series into boundary conditions. The closed-form solution was shown to provide accurate results over the frequency response domain because of its convergence at any particular mode of interest whereas the weak form can give similar results with the closed form provided that the typical mode shapes and number of modes are chosen correctly in order to meet the convergence criteria. The normalized electromechanical piezoelectric bimorph beam with a tip mass is further analyzed using Laplace transformation to obtain the frequency response functions (FRFs) of the dynamic velocity, voltage, current and power harvesting. Sample theoretical and experimental comparison results from the frequency response analysis of the bimorph with parallel electrical connections are also presented. Constitutive electromechanical dynamic equations The electrical enthalpy of the piezoelectric material in tensor notation based on continuum thermodynamics condensed using Voigt's notation and then further reduced using Einstein's summation convention gives [6,[24][25][26][27], The above formulations assume the adiabatic and isothermal processes. HereQ E 11 , e 31 , ς ε 33 , E 3 , σ 1 , ε 1 and D 3 represent the piezoelectric elastic coefficient at constant electric field, piezoelectric coefficient, permittivity under constant strain, electric field, stress, strain and electric displacement, respectively. Some notations from Equation (1) have been adapted in this research for further mathematical derivations. The kinematic equations of the infinitesimal piezoelectric beam with two input base motions were developed to formulate the energies of the structure element. The effect of two input base motions of the structure not only affects the strain fields of each layer of the piezoelectric bimorph but also affects the piezoelectric couplings to create the electrical force and moment of the piezoelectric layers when the series and parallel connections are chosen for the bimorph [6]. In Figure 2 The position vector R base has the same magnitude as vector R pp by defining R base = u base e 1 + w base e 3 . As the base support undergoes motions, point p undergoes deformation in the longitudinal extension and transverse bending forms as indicated by moving to point p . The position of point p has a condition of absolute displacement with respect to frame of reference oXZ defined by w abs and u abs , where w abs = (w base + w rel ) e 3 and u abs = (u base + u rel ) e 1 . 
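The display equation referred to as Equation (1) did not survive extraction. For orientation only, the standard reduced (31-mode) electrical enthalpy density of linear piezoelectricity, written with the symbols listed above, and the constitutive relations that follow from it have the form below; this is the textbook form, not a reproduction of the authors' exact expression.

```latex
\hat{H} = \tfrac{1}{2}\,\bar{Q}^{E}_{11}\,\varepsilon_{1}^{2}
        - e_{31}\,\varepsilon_{1}E_{3}
        - \tfrac{1}{2}\,\varsigma^{\varepsilon}_{33}\,E_{3}^{2},
\qquad
\sigma_{1} = \frac{\partial \hat{H}}{\partial \varepsilon_{1}}
           = \bar{Q}^{E}_{11}\,\varepsilon_{1} - e_{31}E_{3},
\qquad
D_{3} = -\frac{\partial \hat{H}}{\partial E_{3}}
      = e_{31}\,\varepsilon_{1} + \varsigma^{\varepsilon}_{33}E_{3}.
```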
To obtain position vector R pp , R op needs to be defined as, As the position vector R op is defined, the position vector R pp can be obtained and differentiated with respect to time to givė R pp is defined as the absolute velocity of point p with respect to the fixed frame of reference oXZ. The geometrical position of the bimorph tip mass can be measured from the fixed frame of reference oXZ to the deformation point and this vector can be differentiated with respect to time to give the absolute velocity of the tip mass in terms of the moving base support as, × x gm e 1 + z gm e 3 + ẇ base (t) +ẇ rel (L,t) e 3 . In this case, the center of gravity of the tip mass was assumed to coincide with the end of the piezoelectric bimorph x = L, where x gm and z gm represent the distance from the arbitrary element mass dm = ρd to the center of gravity of the tip mass. The position vector R p p specifies the relative displacement due to the moving support base in the fixed reference frame oXZ, We note that, R p p can be further differentiated with respect to x to give the strain field of the element structure, Here Hamiltonian's principle can be restricted to the particular form of the constitutive electromechanical dynamic equation of the piezoelectric bimorph beam with brass shim substructure and tip mass giving [6], Each term from Equation (7) can be written in terms of the kinetic energy KE for every layer of the bimorph including tip mass, potential energy PE from center brass shim or substructure, electrical enthalpy energyĤ from piezoelectric material at the lower and upper layers and applied mechanical work due to the input base excitations and electrical work due to electrical charge output W f . Superscripts k and i indicate the layers of bimorph and input inertia mechanical forces (input base excitations). It is noted that the electrical enthalpy can be stated as, δĤ = δPE p − δWE p which implies the potential energy and electrical energy from the piezoelectric layers. The geometry of the piezoelectric bimorph beam with the tip mass can be modeled, as shown in Figure 3. Essentially, the functional forms from Hamiltonian's principle can be extended using the Lagrangian theorem L a incorporated with the external mechanical and electrical works W f . Each variable categorized in the functional forms L a and W f in terms of the mathematical model can be stated as [6], Equation (8) can be further formulated using total differential equations as, (9b) Corresponding to Equations (1), (3), (4), (6), (8) and (9), the reduced Hamiltonian constitutive electromechanical dynamic equation for the piezoelectric bimorph beam with tip mass from Equation (7) can be formulated in terms of virtual relative and base displacement forms after applying divergence theorem as [6], The and transverse bending, piezoelectric couplings for longitudinal extension and transverse bending, respectively. All coefficients from Equation (10) are described in the next section. In terms of variational methods it should be noted that Equation (10) has been reduced from extremum functional form with integration between the time instants t 1 and t 2 for all domains in L a and W f for all independent variables of relative longitudinal u rel (x, t) ∈ C 2 [0,L] ⊂ ⊂ R 3 and relative transverse displacements 3 . 
This has been written in terms of actual forces and moments of the mechanical and electrical fields for the piezoelectric element as prescribed in the partial differential electromechanical dynamic equations in domain d including the boundary conditions on the vector surface dS in terms of the divergence theorem. In this case, the reduced equation must fulfill the mathematical lemma of the variational method of duBois-Reymond's theorem for each virtual displacement field. Determining stiffness coefficients for the piezoelectric bimorph interlayer The elastic stiffness coefficients of the piezoelectric bimorph C 11 from Equation (10) are formulated according to the characteristic material properties and the cross section of each layer of bimorph to give, The piezoelectric bimorph considered here has symmetrical geometry with the same material used for the upper and lower layers with a center brass shim. The extensional stiffness coefficient of the interlayer of the piezoelectric bimorph can then be written as, whereQ (D,1) 11 =Q (D, 3) 11 . The extensional-bending stiffness coefficient C (E,k) 11 from Equation (11a) tends to be zero due to the symmetrical geometry and material of the bimorph element structure. Therefore, the coefficient C (E,k) 11 is not shown from Equation (10). The bending stiffness coefficient can be formulated as, (11c) It should be noted thatQ (D,1) indicate the plane stress-based elastic stiffness at constant electric field for piezoelectric material and plane stress-based elastic stiffness for brass material, respectively. Determining forward and backward piezoelectric coupling coefficients The coefficient R 31 from Equation (10), known as the forward and backward piezoelectric couplings, will be discussed in terms of series and parallel connections of the piezoelectric bimorph. It is noted that the direct effect of the piezoelectric element, developed in potential form, indicates the backward piezoelectric coupling whereas the converse effect of the piezoelectric element gives the forward piezoelectric coupling which is formulated from electrical energy. As the piezoelectric behavior is reversible, the forward piezoelectric couplings have the same values as the backward piezoelectric couplings. This indicates that the piezoelectric couplings are affected by the electrical force and moment of the piezoelectric layers which depend on the strain fields and the polarity-electric field. In this case, new techniques of formulating the piezoelectric couplings will also be given. The electric field of the piezoelectric bimorph depends on the positive and negative terminals located at the lower and upper surfaces of the piezoelectric element, respectively. Each connection (series and parallel connections) can be arranged into two types of poled configurations i.e. X-poled and Y-Poled which depends on the direction of polarities and strain effect between the piezoelectric benders k ∈ {1, 3} (upper element and lower element). At this point, when the piezoelectric element was initially undeformed, the polarization direction, for example, was in the z-axis called the initial polarized state. When the tensile stress acts perpendicular to the z-axis on the element, the polarization will behave in the opposite direction to the z-axis. Conversely, when the piezoelectric element is under compressive stress perpendicular with the z-axis, the polarization will be in the same direction with the z-axis. 
It means that the change of stress from tensile to compressive or vice versa in the piezoelectric element will result in a reversal of the direction of polarization [24]. This situation is known as the direct piezoelectric effect where the polarization is proportional to the stress field and the stress field is also proportional to the strain field or it can be stated in terms of Einstein's summation convention as P i = d ij σ j . This has been reflected in the electrical enthalpy of the piezoelectric formulation [25,27]. Based on this case, the piezoelectric bimorph under series and parallel connections can be further considered. For example, in series connection shown in Figure 4, when the piezoelectric element undergoes transverse input base motion, by assumption here, the upper and lower layers of the piezoelectric bimorph can respectively deform with the tension and compressive strains and polarization of the upper layer will then create opposite directions compared with the lower layer (X-poled). It should be noted that the polarization directions affect mathematically the piezoelectric coefficients due to the stress field on the element structure whilst the electric field generates electrical voltage. Consequently, the electrical moments at both lower and upper layers will be formed. With the same electrical connection, when the piezoelectric bimorph is under input longitudinal base motion, the upper and lower layers of the bimorph have the same deformation, for example, tensile strain and then the polarization at the upper and lower layers have the same direction (Y-poled), while the electric field will be generating electric voltage to create the electrical force at both the lower and upper layers. This situation exists when the piezoelectric bimorph beam operates under two input base motions which was considered here mathematically by backward and forward coupling superposition of the elastic-polarity field [6]. Smits and Choi [11] and Smits et al. [28] discussed the sign conventions of electric field and piezoelectric coefficient for the series and parallel connections. However, their formulations of the transverse bending bimorph beam only considered the static condition. In this section, the complete piezoelectric couplings due to the effect of electric field and polarity directions are discussed when the bimorph undergoes two input base motions. As shown in Figure 5, the piezoelectric bimorph under parallel connection also depends on the input base motions and direction of polarity. The strain fields between the upper and lower layers have similar behavior with series connection. The difference lies with the polarization and electric field directions due to the chosen parallel connection of the piezoelectric bimorph. This is achieved at the upper layer of the piezoelectric bimorph under X-poled series connection by applying a strong electric field to direct initial polarization in the same direction with the lower layer or it can be provided according to the manufacturing process in such a way that the parallel connection can be arranged as shown in . In this case, the polarization tends to show the same directions with each other when the tension strains are opposite between the lower and upper layers due to the input transverse base motion. 
On the other hand, when the piezoelectric bimorph was treated to the input base longitudinal motion, two polarizations at the upper and lower layers tend to give opposite directions due to the compressive strains in the piezoelectric elements, respectively. In this case, the sign conventions of electric field and piezoelectric coefficient need to be considered for the series and parallel connections under two input base motions. As assumed here, the polarization indicates the opposite direction with respect to electric field, with the piezoelectric constant having a resulting negative sign and vice versa. Series connection normally has two wires where one wire attaches to the electrode of the lower layer and one wire attaches to the electrode of the upper layer whereas parallel connection normally has three wires where one wire connects to the center shim and two wires are located at the electrodes of the lower and upper layers. It is noted that the common piezoelectric constant produced from the manufacturing company is in the form d 31 but this can be modified by multiplying the plane stress-based elastic stiffness at constant electric field to give e 31 = d 31Q E 11 . Furthermore, the series connection of the piezoelectric bimorph results in the positive sign of electric field at both the lower and upper layers with the same direction as the positive z-axis because the piezoelectric bimorph for series connection has a positive terminal at the electrode of the lower layer and a negative terminal at the upper layer. However, the parallel connection indicates a positive sign of the electric field for the lower layer and a negative sign for the upper layer because the piezoelectric bimorph for the parallel connection has a positive terminal at the electrodes of the lower and upper layers and a negative terminal between the center brass shim [6]. The electric field can be formulated as, It is considered here that (z) represents the shape function of the electric voltage v (t) and it applies to each layer of the piezoelectric bimorph. (z) (n,k) is the gradient operator of the electric voltage shape function and subscript r refers to 1 or 2 to be used for sign convention. This indicates the change of sign of the electric field due to the terminal connections of the upper and lower electrodes of the piezoelectric bimorph containing the charges normally flowing from positive to negative terminals. The modified piezoelectric constant can be formulated as, Superscript s refers to the change of sign of piezoelectric coefficient due to change of polarization. Equation (10) for piezoelectric couplings R (n,k) 31 can be formulated for each outside layer as, represents the backward and forward piezoelectric couplings for the longitudinal extension term and R (H,k) 31 represents the backward and forward piezoelectric couplings for the transverse bending term. As mentioned previously, the forward piezoelectric couplings indicate equal form with the backward piezoelectric couplings. Case 1. Series connection Corresponding to Equation (12), the electric voltage v (t) for series connection is considered to provide half the voltage between the electrodes of the piezoelectric bimorph. 
Therefore, the shape function of the electric potential can be formulated based on the thickness of each layer of the piezoelectric bimorph to give, For the X-poled case, due to the transverse form where the polarization has opposite direction, Equation (14) for transverse effect meets the function of polarization for each layer k ∈ {1, 3} due to the conditions of Equations (12), (13) and (15a): Backward and forward piezoelectric couplings of the X-poled transverse bending form can be formulated using Equation (14), resulting in For the Y-poled case, due to longitudinal extension where the polarization has the same direction, Equation (14) for longitudinal effect meets the function of polarization for each layer due to the condition of Equations (12), (13) and (15a), Backward and forward piezoelectric couplings of the Y-poled longitudinal extension can be formulated using Equation (14), resulting in Case 2. Parallel connection The electric voltage v (t) for parallel connection is considered to give one voltage between the electrodes of the piezoelectric bimorph. This means that the shape function of the electric potential is divided by a factor of one. Therefore, the shape function of electric potential can be formulated based on the thickness of each layer of the piezoelectric bimorph, to give, For the X-poled case, due to longitudinal extension form where the polarization has opposite direction, Equation (14) for longitudinal effect meets the function of polarization for each layer due to the condition of Equations (12), (13) and (16a): Backward and forward piezoelectric coupling of the X-poled longitudinal extension can be formulated using Equation (14), resulting in For the Y-poled case, due to the transverse bending form the polarizations have the same direction. Because the polarizations have the same direction as the global direction of the electric field in the z direction, the backward and forward piezoelectric coefficients in both layers have conditions as, The piezoelectric coupling of the Y-poled transverse bending form can be formulated using Equation (14), resulting in (16g) Determining internal capacitance of the piezoelectric bimorph The internal capacitance per unit area of piezoelectric bimorph from Equation (10) can be formulated as, Case 1. Series connection The capacitance of the piezoelectric element was formulated in Equation (17a). The summation of capacitances between the upper and lower layers of the piezoelectric bimorph has a factor of 1/2. The capacitance for series connection can be formulated as It should be noted that the upper and lower layers of the piezoelectric bimorph have the same material and geometrical structure, thus the permittivity of the piezoelectric component gives ς (1) Case 2. Parallel connection The summation of capacitances between the upper and lower layers of the piezoelectric bimorph has a factor of 2. At this point, the capacitance of the piezoelectric element for parallel connection can be formulated as, The capacitance can affect the electric displacement as a converse effect where it depends on the geometrical structure of the piezoelectric element and permittivity. Determining mass moment of inertias of the piezoelectric bimorph and tip mass The mass moment of inertias per unit area are formulated according to the characteristic materials and the cross section of each layer of bimorph, The zeroth mass moment of inertia per unit area gives, 3) . 
The first mass moment of inertia per unit area I (B,k) from Equation (18a) tends to be zero, since the centroid of the section is located at the neutral axis of the bimorph which has symmetrical geometry and the same material of the bimorph element structure. Therefore, the coefficient I (B,k) is not shown from Equation (10). Furthermore, the second mass moment of inertia or rotary inertia per unit area is formulated as The mass densities ρ (C,1) and ρ (C,3) are assumed to have the same material located on the lower and upper piezoelectric layers, respectively. It should be noted that ρ (A,1) = ρ (A,3) = ρ (C,1) = ρ (C, 3) and ρ (A,2) = ρ (C,2) . The tip mass moment of inertia is also formulated as It is noted that the third integral term implies the second mass moment of inertia for the arbitrary geometric shapes where further detail can be found in Beer and Johnston [29]. Figure 3 indicated an example shape of the tip mass where the zero-th tip mass moment of inertia is formulated as The first mass moment of inertia, I (B) tip , tends to be zero because the geometric shape only has one center of gravity where each moment with respect to the centroid has equal magnitude. The second tip mass moment of inertia is known as rotary inertia at the center of gravity of the tip mass which is assumed to coincide with the end length of the piezoelectric bimorph beam. This results in wherex 1 = x g − l tip /2,x 2 = x g − l b /2 and x g is the center of gravity of the tip mass. The strong form of electromechanical dynamic equation As prescribed in Equation (10), parameters of virtual relative base displacements and electrical potential forms can be separated in terms of partial differential dynamic equations for extensional, transverse and electrical fields to formulate the strong form of Hamiltonian constitutive electromechanical dynamic equation. In this case, there are three constitutive electromechanical equations of the cantilevered piezoelectric bimorph beam associated with moving virtual displacement fields of δu rel , δw rel , δu rel (L), ∂w rel ∂x (L) and δv. The first constitutive electromechanical dynamic equation in extensional form in terms of virtual relative longitudinal displacement field δu rel reduced from equation (10) can be formulated as It is noted that the symbolˆindicates the modified parameter after integrating with respect to y and the divergence theorem from the nineteenth and twenty first terms in Equation (10) can also be modified. The boundary condition can be stated as It should be noted that the fourth term of Equation (20a) is zero because the electric voltage is only a function of time. However, Equation (20a) can be used to formulate the weak form of the Hamiltonian constitutive electromechanical dynamic equation. The second constitutive electromechanical dynamic equation for transverse bending form in terms of the virtual relative transverse displacement field δw rel reduced from Equation (10) can be formulated as, The symbolˆindicates the modified parameter after integrating with respect to y. The boundary condition can be stated after modifying the divergence theorem from the twentieth, twenty first, twenty third, twenty fourth and twenty fifth terms in Equation (10) as, w rel (0, t) = 0, ∂w rel ∂x (0, t) = 0. It should be noted that the fifth term of Equation (21a) is zero because the electric voltage is a function of time. However, Equation (21a) can be modified to formulate the weak form of the Hamiltonian constitutive electromechanical dynamic equation. 
The third constitutive dynamic equation for electrical potential in terms of the virtual electrical potential field, δv (t) can be formulated as, δv : As prescribed by Equation (10), the form of the electromechanical dynamic equation represents the electromechanical strong form of Hamiltonian's theorem as shown in Equations (20a)- (22). Since the strong form method includes quite tedious derivations to give the electromechanical frequency response function, the closed-form boundary value method shown in section 5, can be further reduced by using the strong form method. In the next section, the weak form of the Hamiltonian electromechanical dynamic equations will be formulated from the general and normalized Ritz methods. Since the Ritz method is viewed as an analytical approach, this method can give the same results as the closed-form provided that the space-dependent eigenfunction form from the normalized Ritz method is chosen correctly with the space-dependent eigenfunction form having the same form as the closed-form solution [6]. The weak solution form of electromechanical dynamic equation The weak form of the Hamiltonian theorem can also be formulated in terms of the virtual relative extensional δu rel and transverse δw rel displacement fields and virtual electric potential δv to give the analytical integro-partial differential dynamic equation over the length of the piezoelectric bimorph beam. The weak form of Hamiltonian theorem includes the virtual displacements into the calculations of the constitutive electromechanical dynamic equation. In other words, the virtual displacements are assumed to be the non-zero terms. Therefore, the solutions of the dynamic equation in terms of variable fields (u rel , w rel , v) and virtual variable fields (δu rel , δw rel , δv) must be assumed as eigenfunction forms. At this point, the weak form of the Hamiltonian constitutive electromechanical dynamic equation can be formulated by modifying Equation (10) and then applying the divergence theorem as [22,23] The weak form of Hamiltonian's principle can also be obtained alternatively in terms of Equations (20a)-(22) by applying the variational principle. It should be noted that Equations (10) and (23) can be used to formulate the Rayleigh and Euler-Bernoulli piezoelectric bimorph beams. The Rayleigh beam only considered the rotary inertias of the piezoelectric bimorph. The Euler-Bernoulli piezoelectric bimorph beam can also be formulated using the same equation by ignoring the rotary inertia of the piezoelectric bimorph. In forthcoming mathematical derivations, this equation will use the rotary inertia of the piezoelectric bimorph where this will be easier to neglect once the orthonormality property of electromechanical dynamic equations is established. It should be noted that the second integral represents the divergence theorem reflecting the boundary conditions on the surface S of the bimorph element in the direction n x of the unit vector normal to the x-axis. The second integral is sometimes called the generalized internal force and moment for every element discretization and these become necessary when the element boundary S coincides with boundary of domain . The second integral can be a crucial part to be included in Equation (23) when using finite element analysis where their existence depends on external loads on certain nodes of the structure. 
In terms of the analytical approach that is proposed here, the second integral can be ignored because the displacement fields (u rel , w rel ) and virtual displacement fields (δu rel , δw rel ) reflected from Equation (23) were assumed as eigenfunction forms which meet the continuity of mechanical form or strain field and boundary conditions. The solution form of Equation (23) can be expanded using the convergent eigenfunction series forms. This equation can be formulated separately from the longitudinal and transverse forms as discussed further in the next section. As previously mentioned, the effects of input base motions on the bimorph not only affect the mechanical domain (stress and strain fields) but also the electrical domain (electric field and polarity). The solution forms can be prescribed using the space-and time-dependent eigenfunction forms as, Parameters (x) and (x) indicate the space-dependence or mode shapes of the eigenfunction series which can be determined using analytical solution forms for the cantilevered piezoelectric beam with a tip mass where the mode shape can formulated as shown in appendices A and B. It should be noted that these parameters are defined as the independent mode shapes of relative motions to meet the continuity requirements of the mechanical strain field. Corresponding to Equation (24), Equation (23) can be formulated according to the eigenfunction series forms. Setting virtual displacement forms δu rel (t) , δw rel (t), δv (t) separately can be used to obtain three independent dynamic equations. Parameters of virtual displacements meet the duBois-Reymond's lemma to indicate that only dynamic equations have solutions. At this point, the constitutive electromechanical dynamic equations from Equation (23) can be reformulated in matrix form by including the damping coefficients after integration with respect to y to give, where It should be noted that theˆsymbol refers to the modified variables after multiplying with the width b of the bimorph. Equation (25) is a non-homogeneous differential dynamic equation of the piezoelectric bimorph beam with two input base-excitation. This equation can be used for modeling the piezoelectric bimorph with either series connection or parallel connection. The connections just depend on the chosen piezoelectric couplings from Equations (15d), (15g), (16d) and (16g) and also the chosen internal capacitance from Equations (17b) and (17c). In addition to that, other parameters from this research such as mass moment of inertia, stiffness coefficients, piezoelectric constant and permittivity are viewed as constant values. The geometry of the piezoelectric bimorph beam must also be considered where it will affect all aspects of power harvesting performance. Normalized coupling electromechanical dynamic equation This section focuses on the solution of the multi-mode electromechanical dynamic equations of the piezoelectric bimorph beam with tip mass using Equation (23). The solution presented here complies with the orthonormality of the Ritz eigenfunction forms [6,30]. Equations (23) or (25) and (24) need to be modified in order to achieve the orthonormality conditions. 
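The display equations of this section were also lost in extraction. As a reading aid, the expansions described in the text and the mass-normalization they lead to have, for the transverse modes of an Euler-Bernoulli bimorph with a tip mass, the generic shape sketched below; the weighted integrals actually used by the authors carry the full set of inertia terms and the corresponding longitudinal relations.

```latex
w_{\mathrm{rel}}(x,t)=\sum_{r=1}^{\infty}\hat{\Phi}_{r}(x)\,w_{r}(t),
\qquad
u_{\mathrm{rel}}(x,t)=\sum_{r=1}^{\infty}\hat{\Psi}_{r}(x)\,u_{r}(t),
\\[4pt]
\int_{0}^{L}\hat{I}^{A}\,\hat{\Phi}_{r}\hat{\Phi}_{q}\,\mathrm{d}x
+ I^{A}_{\mathrm{tip}}\,\hat{\Phi}_{r}(L)\hat{\Phi}_{q}(L)
+ I^{C}_{\mathrm{tip}}\,\hat{\Phi}_{r}'(L)\hat{\Phi}_{q}'(L)=\delta_{rq},
\qquad
\int_{0}^{L}\hat{C}_{11}\,\hat{\Phi}_{r}''\hat{\Phi}_{q}''\,\mathrm{d}x=\omega_{r}^{2}\,\delta_{rq}.
```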
In this case, the convergent Ritz eigenfunction forms can be stated as, In terms of the only mechanical equation, Equation (26) can be substituted into Equation (23) to give the independent algebraic equations of the eigenvalues corresponding to the longitudinal and transverse bending form as, It should be noted that c (u) r and c (w) r represent the unknown Ritz coefficients for the respective longitudinal and transverse bending forms which refer to the eigenvectors in the mechanical domain. Once the Ritz coefficients are determined associated with natural frequencies, the generalized Ritz mode shapes in terms of the r-degree of freedoms can be formulated as, The generalized Ritz mode shapes can be normalized with respect to mass as, The normalized eigenfunction series forms in terms of the generalized space-and timedependent functions can now be stated as, Corresponding to Equation (23), the orthonormalizations can be proven using Equation (30) and applying the orthogonality property of the mechanical dynamic equations as, where δ rq is the Kronecker delta, defined as unity for q = r and zero for q = r. The Rayleigh mechanical damping can be reduced in terms of orthonormalization as, In this case, although the modal mechanical damping ratios can be determined mathematically, the chosen modal mechanical damping ratios ζ (u) r and ζ (w) r were obtained here by experiment to give accurate results at the resonance frequency amplitude regions. Applying the orthonormalizations from Equations (31a) to (31c) into the electromechanical piezoelectric bimorph beam equation from Equation (23) gives, It is noted that because Equation (33) has been normalized due to Equation (30) in terms of Equations (31a)-(31c), the parameters P (u) r , P (w) r ,P (u) r ,P (w) r , R L , P D , Q (u) r and Q (w) r can be reduced as Equation (33) can be solved using Laplace transforms. In this case, the multi-mode electromechanical dynamic equations of the piezoelectric bimorph system can be reduced as, The characteristic polynomial form from Equation (33) can be formulated as (34d) Multi-mode electromechanical frequency analysis Corresponding to Equations (34a)-(34c), the electromechanical dynamic equation of the piezoelectric bimorph can be formulated into FRF forms. The first multi-mode FRF represents the generalized frequency dependent longitudinal function per unit input base longitudinal excitation. In the case when input transverse excitation is ignored, the FRF due to input base longitudinal excitation can be modified as a function of position of the piezoelectric element (x) and frequency (jω) by transforming it back into the Ritz eigenfunction form as The FRF of the relative transfer function between the input base transverse acceleration and output longitudinal displacement can be obtained as H 32 (jω) = was formulated as u abs (x, t) = u base (t) + u rel (x, t). The steady state relative transverse motion in Equation (34b) in terms of any position on the piezoelectric beam can be modified corresponding with Equations (30a), (37) and (38) as Corresponding to Equation (48), the multi-mode absolute transverse displacement can be reduced as It should be noted that Equation (49) is formulated according to the kinematics of the bimorph beam as discussed previously to give w abs (x, t) = w base (t) + w rel (x, t). 
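For readers without access to the lost display equations, the coupled system that Equations (33) and (34a)-(34c) represent reduces, for a single transverse mode and a purely transverse base excitation, to the familiar two-equation structure used throughout the harvesting literature. It is reproduced below as a reading aid in generic notation (modal coupling θ_r, effective capacitance C_p, load resistance R_L), not as the authors' exact expression or sign convention.

```latex
\ddot{w}_{r}(t)+2\zeta_{r}\omega_{r}\,\dot{w}_{r}(t)+\omega_{r}^{2}\,w_{r}(t)-\theta_{r}\,v(t)
  =-Q_{r}\,\ddot{w}_{\mathrm{base}}(t),
\qquad
C_{p}\,\dot{v}(t)+\frac{v(t)}{R_{L}}+\sum_{r}\theta_{r}\,\dot{w}_{r}(t)=0 .
```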
Equation (34c) can be modified to the generalized electrical potential response in terms of Equations (39) and (40) It should be noted that Equations (47), (49) and (51) are applicable for analyzing the absolute dynamic responses when comparing the results using the laser Doppler vibrometer (LDV) because the signal output from the vibrometer can be transferred into a digital signal FFT Analyzer to display the results. The results obtained from measurements were the time dependent absolute displacement, velocity, acceleration and frequency response function located at any position along the piezoelectric bimorph beam. Closed form method of the normalized coupling electromechanical dynamic equation This section focuses on the multi-mode frequency response using the closed form of the electromechanical dynamic equations under two input base excitations. The closed-form analytical method was formulated according to the strong form of the Hamiltonian principle which was formulated from Equations (20a)- (22). This included the electromechanical bimorph element with the boundary-value problem where the partial differential equations associated with the geometry and natural boundary conditions were formulated from those equations. The solution form from this analytical method involves the space-and time-dependence of convergent eigenfunction forms which can be formulated as It should be noted that Equation (52) is sometimes called the mode superposition theorem, which utilizes the normalized mode shapes and generalized time-dependent coordinates. It should be noted that space-dependent eigenfunction forms of the Euler-Bernoulli piezoelectric bimorph for closed-form method are given in appendices A and B. At this point, the equation considering the coupled electromechanical longitudinal and transverse forms of the piezoelectric element can be further formulated in terms of frequency analysis. As formulated in Equations (20a) and (21a), the boundary-value problem formulated for the piezoelectric bimorph element can be expressed using the normalized eigenfunction series under two input base excitations. The first representation of the electromechanical piezoelectric bimorph under transverse bending from Equation (21a) can be reformulated using Equation (52a) and multiplying withˆ q (x), the results obtained can be further integrated with respect to x to give, The boundary conditions from Equation (21b) can be further formulated by substituting Equation (52a) asˆ and two dynamic boundary conditions can be further formulated as In terms of the orthogonality relations, the second and third terms of Equation (53) can be further manipulated by using partial integration and then applying Equations (54)-(56). The result of which can be used to reformulate Equation (53) as It should be noted that the normalized eigenfunction series in Equation (52a) must also meet the orthogonality relations to correctly represent the mode shapes. As mentioned previously, this section gives the piezoelectric bimorph beam equations based on Rayleigh's beam assumption as it considers the rotary inertia of the bimorph beam. The Euler-Bernoulli bimorph beam can be formulated by ignoring the rotary inertia of the bimorph beam. In this case, the normalized mass from Equation (58a) can be used to ignore the rotary inertia of the bimorph beam at the first integration to give the typical Euler-Bernoulli bimorph beam condition. 
The orthonormalizations can be provided by using Equation (52a) and applying the orthogonality property of the mechanical dynamic equations as, where δ rq is the Kronecker delta, defined as unity for q = r and zero for q = r. It should be noted that Equations (58a) and (58b) represent specific orthogonality conditions based on the boundary conditions. The mechanical damping based on Rayleigh's principle can be reduced in terms of orthonormality as Equation (59) can now be reformulated by including Rayleigh's damping coefficient according to orthogonality conditions as The second case of the electromechanical dynamic equation associated with boundary conditions represents the electromechanical piezoelectric bimorph under longitudinal extension. Here it is formulated using Equation (20a) corresponding to Equation (52b) and multiplying withˆ q (x), the result obtained can be integrated with respect to x to give The boundary conditions from Equation (20b) can be further formulated by substituting Equation (52b) as In terms of orthogonality relations, the third term of Equation (61) can be further manipulated by using the partial integral form and then applying Equation (62). The result of which can used to reformulate Equation (61) to give In this case, the second part of the normalized eigenfunction series in Equation (52b) also must meet the orthogonality relations in order to correctly specify the longitudinal mode shapes. Furthermore, the specific orthogonality condition based on Equation (63) can be established as where δ rq is the Kronecker delta, defined as unity for q = r and zero for q = r. The mechanical damping constant based on Rayleigh's principle can be reduced in terms of orthonormality as Equation (63) can now be reformulated by including Rayleigh's damping coefficient based on the orthonormality conditions as The third case of the electro mechanical dynamic equation from Equation (22) represents the piezoelectric bimorph under electrical form. Equation (22) can be modified by substituting Equation (52) into Equation (22) and differentiating it with respect to time to obtain the parameters of velocity and electrical current giving, Equation (67) can be reformulated as In this case, the electromechanical piezoelectric bimorph beam based on Equations (60), (66) and (68) can be reformulated to givë It can be noted that because Equation (69) has been normalized due to Equations (58a), (58b), and (64), the parameters P (u) r , P (w) r ,P (u) r ,P (w) r , P D , R L, Q (u) r and Q (w) r can be reduced as It is clear that Equation (69) provides the closed-form electromechanical dynamic equations with two input base motions and this equation can be further formulated using Laplace transformations to give the multi-mode electromechanical dynamic equations of the piezoelectric bimorph. Equations (69) and (33) appear to be similar to each other but have different sign and operation for some parameters. It should be noted that the electromechanical closed-form method can give accurate results because of its convergence for the chosen frequency response mode of interest. In comparison with the electromechanical weak form solution based on the Ritz analytical approach method, the typical mode shapes or space-independent eigenfunction forms and number of modes must be chosen correctly in order to meet convergence criteria and give results similar to the closed-form method. 
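The orthonormality conditions above can be checked numerically by quadrature: mass-normalized modes should satisfy the condition that the integral of m(x) multiplied by two mode shapes over the beam length equals the Kronecker delta. The sketch below performs this check for a uniform beam with the classical clamped-free mode shapes, so the length, the mass per unit length and the eigenvalues are assumed stand-ins rather than the composite bimorph's actual properties.

```python
import numpy as np

L, m_per_len = 0.060, 0.05   # beam length [m] and mass per unit length [kg/m]; assumed values
lams = np.array([1.8751, 4.6941, 7.8548])    # clamped-free eigenvalues without tip mass

def shape(lam, xi):
    s = (np.sinh(lam) - np.sin(lam)) / (np.cosh(lam) + np.cos(lam))
    return np.cosh(lam * xi) - np.cos(lam * xi) - s * (np.sinh(lam * xi) - np.sin(lam * xi))

x = np.linspace(0.0, L, 4001)
phi = np.array([shape(l, x / L) for l in lams])                   # raw mode shapes
norms = np.sqrt(np.trapz(m_per_len * phi**2, x, axis=1))
phi_hat = phi / norms[:, None]                                     # mass-normalised modes

# Gram matrix of the normalised modes: should be close to the identity (delta_rq).
gram = np.trapz(m_per_len * phi_hat[:, None, :] * phi_hat[None, :, :], x, axis=2)
print(np.round(gram, 6))
```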
As an example of this convergence behaviour, the FRF results from the analytical weak form discussed in the next section show only the first mode but were iterated using three modes of interest to give accurate results. In this case, the weak form was chosen with the same typical space-independent eigenfunctions (appendices A and B) as the closed form, to reduce the computational time and the number of iterations needed to meet the convergence criteria. Theoretical and experimental results The suggested formulations in terms of the electromechanical dynamic responses of the piezoelectric bimorph beam under two input base longitudinal and transverse excitations are used here for comparison with experimental results. A list of the properties of the bimorph used in this investigation is given in Table 2 (notes to Table 2: † calculated according to the geometry and material properties of the tip mass and the rotary inertia at the center of gravity of the tip mass coincident with the end of the bimorph length; the first and third coefficients refer to the zeroth and second mass moments of inertia, respectively). Figure 6 shows the experimental setup. The piezoelectric bimorph beam with the tip mass was clamped at the protractor base structure, which was driven by an exciter with the given input base acceleration, in order to investigate the electromechanical dynamic responses of the bimorph. Moreover, the laser digital vibrometer Polytec PDV 100 was focused on reflector tape attached to the tip mass in order to measure the time-dependent dynamic displacement, velocity and frequency response. In this case, all signal measurements from the charge amplifier, piezoelectric bimorph and vibrometer were connected to the B&K FFT Pulse Analyzer 3560B. The reduced single-mode FRFs with varying load resistances can be used to model the electromechanical piezoelectric bimorph with the electrical parallel connection. The sinusoidal base input acceleration of the cantilevered piezoelectric bimorph was 306 mg, equivalent to 3 m/s² (1 g = gravitational acceleration, 9.81 m/s²). The generalized time-dependent dynamic response along the length of the bimorph was measured using the laser vibrometer. The absolute dynamic velocity FRFs from the theoretical analysis can then be used for comparison with the experimental results. As can be seen from Figure 7, the trend of the tip absolute dynamic velocity FRFs at the first resonance frequency shifts when the load resistance changes. As can be seen, the weak and closed form analytical methods indicated good agreement with the experimental results. The effect of the load resistance on the piezoelectric bimorph can be viewed as a resistive shunt damping effect resulting in shifting of the resonant frequency, as shown clearly in Figure 8. It is important to notice here that although the bimorph FRF analysis changed between the short and open circuit load resistances, the mechanical behavior was predominantly affected by the resonance frequency behavior. It is also noted that since the piezoelectric element has profound internal mechanical and electrical effects, the piezoelectric couplings and internal capacitance also affect the frequency response, as indicated by the electromechanical damping and stiffness. As proved theoretically, the forward and backward piezoelectric couplings were created as a result of the electrical force and moment due to the direct and converse mode thermopiezoelectric behavior.
Figure 9 shows the trend of electrical voltage FRFs with variable load resistance using the weak and closed form analytical methods, where the results showed good agreement with experiment. As also seen, the frequency response shifts as the load resistance changes, where the resonance amplitude increases from short to open circuit load resistances. It can be clearly seen from the three-dimensional graph in Figure 10 that the region of maximum electrical voltage amplitude located at the open circuit load resistance gradually decreased to the minimum level at the short circuit load resistance, followed by a shift in frequency. By considering the electrical current, as shown in Figure 11, the trend seems opposite to that of the electrical voltage, where the lowest frequency of 76.1 Hz with short circuit load resistance seemed to give the highest amplitude. The amplitude then decreased to the lowest value at the load resistance approaching open circuit, followed by the increasing resonance at 79.6 Hz. This is also clearly seen in Figure 12, where the maximum current amplitude dropped dramatically when the load resistance increased from short circuit. In the next section, further explanation is given by considering the relationship of electromechanical responses between velocity, voltage, current and power. In this section, the weak and closed forms of the analytical power harvesting FRF as shown in Figure 13 were found to give good agreement with the experimental results. The trend of power response as a function of load resistance can be compared with the previous trends of tip absolute velocity, electric voltage and current. The minimum current amplitude occurred at the open circuit load resistance, with maximum amplitudes of tip absolute velocity and maximum electric voltage response at a resonance frequency of 79.6 Hz. Conversely, the maximum current amplitude was obtained at the short circuit load resistance, with the maximum amplitude of tip absolute velocity and decreasing voltage amplitude at the resonance frequency of 76.1 Hz. This shows that the electrical current increased when the load resistance approached short circuit, but the tip absolute velocity of the bimorph tended to give the highest amplitude. This indicates that this situation would be unsuitable for power harvester optimization. As expected, the highest tip absolute velocity amplitude did not result in the highest power harvesting with short circuit resistance. The electrical current FRF for the short circuit load resistance gave the highest amplitude, whereas the electric voltage response indicated the highest amplitude for open circuit load resistance. In fact, power harvesting under the short or open circuit is not preferred due to the lower amplitudes. This can be seen clearly from Figure 14, where the maximum power amplitude region was achieved at a load resistance away from short and open circuits. The sensitivity of the bimorph in generating the optimal power harvesting shows the importance of understanding the underlying strain-polarity field on the piezoelectric element of the bimorph under variable load resistance. The convenient electrical power amplitude would be from an intermediate curve of load resistance of 60 kΩ, where this value indicated the lowest tip absolute velocity amplitude around the resonance frequency of 77.83 Hz but showed convenient values for voltage, current and optimal power harvesting.
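The interplay between voltage, current and power described above can be reproduced qualitatively with a lumped single-mode electromechanical model. The sketch below solves the coupled steady-state equations for the modal coordinate and the voltage across a resistive load and then sweeps the load; the coupling coefficient, capacitance, damping ratio and natural frequency are assumed round numbers rather than the identified properties in Table 2, so only the trends (voltage rising towards open circuit, current towards short circuit, power peaking at an intermediate load) should be read from it.

```python
import numpy as np

def voltage_per_base_accel(omega, R_load, w_n=2*np.pi*77.8, zeta=0.012,
                           gamma=1.0, theta=1.0e-3, C_p=50e-9):
    """Voltage FRF of a single-mode harvester across a resistive load.

    Solves the coupled steady-state pair
        (w_n**2 - omega**2 + 2j*zeta*w_n*omega) * eta - theta * v = -gamma
        (1/R_load + 1j*omega*C_p) * v + 1j*omega*theta * eta      = 0
    for the voltage v per unit base acceleration. theta is the modal
    electromechanical coupling and C_p the internal capacitance of the
    parallel-connected bimorph; all parameter values are illustrative assumptions.
    """
    a11 = w_n**2 - omega**2 + 2j*zeta*w_n*omega
    a21 = 1j*omega*theta
    a22 = 1.0/R_load + 1j*omega*C_p
    det = a11*a22 + theta*a21            # determinant of the 2x2 coupled system
    return gamma*a21 / det

freqs = 2*np.pi*np.linspace(70.0, 85.0, 600)
loads = np.logspace(2, 7, 80)                        # 100 ohm .. 10 Mohm
volt  = np.array([np.abs(voltage_per_base_accel(freqs, R)).max() for R in loads])
curr  = volt / loads
power = 0.5 * volt**2 / loads
R_opt = loads[np.argmax(power)]                      # optimum lies between short and open circuit
```

Taking the per-load maximum over a small frequency band, rather than evaluating at a single fixed frequency, also reproduces the shift of the resonance between the short- and open-circuit conditions noted above.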
The optimal power can be seen with the black triangular curve in Figure 13 representing the optimal load resistance where the high amplitude coincidently overlapped with the load resistance of 60 kΩ. Two local maximum points from the optimal load resistance curve also coincidently overlapped with the load resistances of 20 kΩ and 200 kΩ, respectively. It should be noted that both local minimum and maximum points indicated optimal power harvesting with very small differences in value. If we consider the local maximum points of 20 kΩ and 200 kΩ by comparing with the local minimum point, the local maximum point of 20 kΩ can provide lower voltage and higher current compared with the local minimum point of 60 kΩ. Conversely, the local maximum point of 200 kΩ seemed to give higher voltage and lower current compared with the local minimum point of 60 kΩ. It is also noted that the velocity amplitude with load resistance of 60 kΩ above the resonance frequency of 77.83 Hz indicated lower value than load resistance of 200 kΩ and higher value than load resistance of 20 kΩ but with small differences in value. Conversely, the velocity amplitude with load resistance of 60 kΩ below the resonance frequency of 77.83 Hz indicated higher value than the load resistance of 200 kΩ but with small differences in value and lower value than load resistance of 20 kΩ. The power harvesting with the load resistance of 60 kΩ showed the best response for the optimization covering the broadest frequency range. Conclusion This paper has presented the development of novel analytical methods for modeling the electromechanical dynamic equations for one and two input base excitations of the piezoelectric bimorph beam with tip mass using the Rayleigh and Euler-Bernoulli's beam assumptions. The Rayleigh piezoelectric bimorph beam only considers the second mass moment of inertia (rotary) of the bimorph where this can also be reduced to the Euler-Bernoulli's piezoelectric bimorph beam by ignoring the beam rotary inertia. The strong form of the Hamiltonian's principle under series and parallel connections was also presented. The weak form derived from strong form represents the analytical approach developed using the Ritz method whereas the closed form derived from strong form can be further formulated using the direct analytical solution with orthonormalization by introducing the space-and time-dependent eigenfunction series into boundary conditions. The accuracy of the closed-form solution over the frequency response domain is based on the convergence at any particular mode of interest derived using analytical methods. Corresponding to the convergence criteria, the weak form associated with the correct chosen typical mode shapes and number of modes also gives good agreement with the closed form. The effect of strain field due to transverse bending and longitudinal extension affects not only the mechanical moment and force of all layers of the bimorph but also the electrical moment and force of the top and bottom piezoelectric layers of the bimorph. Concerning the electrical moment and force, the backward and forward piezoelectric couplings due to the transverse and longitudinal forms can also be formulated, where these give the electromechanical coupling of the piezoelectric layers.
Furthermore, the weak form of the electromechanical dynamic equations were further formulated using the orthonormality conditions to obtain the coupled electromechanical dynamic response of transverse-longitudinal form to give the multi-mode frequency response functions (FRFs) using Laplace transforms. The closed form of the electromechanical dynamic equations was also formulated in terms of orthonormality conditions. The multi-input dynamic excitation and multi-output electromechanical dynamic responses have been formulated. As a result, the validations between the analytical weak and closed forms and experimental results were achieved with good agreement where the single mode, reduced from multimode FRFs, was the main concern for power harvesting. The results obtained from single mode FRFs showed the changes in resonance frequency based on the load resistance changes. As also shown, the short and open circuit load resistances were not suitable conditions for optimizing power harvesting. This has been linked to other parameters where the maximum voltage occurred at the open circuit load resistance whereas the maximum current was found at the load resistances approaching short circuit. Moreover, the maximum level of tip absolute velocity of the bimorph was achieved at the short and open circuit load resistances which gave the lowest power amplitudes.
2018-12-10T03:10:55.689Z
2011-08-09T00:00:00.000
{ "year": 2011, "sha1": "a4dce2df641ca3ce89798bf29751b2012178a4ec", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.1080/19475411.2011.592868", "oa_status": "GOLD", "pdf_src": "TaylorAndFrancis", "pdf_hash": "29376903a91f629d7098a89a3eabd7de451df7be", "s2fieldsofstudy": [ "Engineering", "Physics" ], "extfieldsofstudy": [ "Engineering" ] }
222128557
pes2o/s2orc
v3-fos-license
The correlation analysis of dental caries, general health conditions and daily performance in children aged 2–5 years Background: Oral health is important for general health and quality of life. One of the oral diseases with a high prevalence in Indonesia is dental caries. Dental caries can cause limiting disturbances of daily activities such as biting, chewing, smiling and talking, and of psychosocial well-being, including development and general health of children. Purpose: This study aims to analyse the correlation of dental caries incidence rate with general health conditions and daily performance of children aged 2–5 years. Methods: This was an analytical observational cross-sectional study. The study sample was 103 pairs of children and their mothers, selected using cluster random sampling technique. Intra-oral examination was conducted on the children to obtain decay, missing, filled-teeth (DMF-T) index score. Information about oral impacts on daily performance (OIDP) of the children was collected through a questionnaire distributed to the mothers. The data obtained were statistically analysed with a regression test (p < 0.05). Results: It was found that dental caries had a significant correlation with general health (p = 0.00) and daily performance, including chewing function disorder (p = 0.00), difficulties in maintaining oral health (p = 0.039), sleep disorders (p = 0.00), and emotional instability (p = 0.00). Conclusion: The incidence rate of dental caries has a significant effect on the general health conditions and daily performance of children aged 2–5 INTRODUCTION Oral and dental health is important for general health. Unfortunately, children seem to be vulnerable to oral and dental diseases because they generally have poor oral and dental care habits. Eating sweet food and drinking sweet drinks are some examples of their bad habits. 1 Based on the 2018 Basic Health Research data, dental caries prevalence in Indonesia was 81.5% in children aged 3-4 years and as high as 92.6% in those aged 5-9 years. Indonesia's decay, missing, filled-teeth (DMF-T) index in 2018 was 7.1, an increase of 54% from RISKESDAS 2018. 2 Oral health is also considered fundamental to public health since a healthy mouth allows individuals to talk, eat, and socialise without experiencing pain, discomfort, or embarrassment. 3 However, without adequate care, dental caries may occur and eventually lead to tooth decay. Dental caries is the main cause of toothache and tooth loss. Everyone is susceptible to dental caries throughout their lives. 4 Nonetheless, dental caries is one of the many childhood diseases that can be prevented. Dental caries can also interfere with the chewing system in general or become a focal infection, thus affecting the health and development of children. 5 For instance, dental caries greatly affects the quality of life of children in America, Canada and England. In Aboriginal children in Western Australia, dental caries is the fifth most common disease causing preschool children to be hospitalised (ages 1-4 years). 6 Toothache caused by dental caries causes a loss of 50 million school hours per year, affecting school attendance and future adult life. In Indonesia, toothache has caused 62.4% of the population to experience discomfort at work/in school for an average of 3.86 days per year. This condition indicates that dental disease, although not fatal, reduces work productivity. 
A research in Medan even reveals the impact of dental caries on four dimensions of quality of life, namely, limited function, pain, psychological discomfort and physical disability. In addition, Sheiham 7 highlights three effects of untreated dental caries on the growth and development of preschool children. First, pain caused by dental caries can interfere with children's food intake. Second, pain caused by dental caries can trigger sleep disturbances and subsequently leads to glucocorticoid production and growth disturbances. Third, chronic inflammation caused by dental caries can suppress haemoglobin and lead to anaemia, since the production of erythrocytes in the bone marrow is reduced. Thus, it is essential to treat dental caries in preschool children to improve not only their growth and development but also their quality of life. [8][9][10] A preliminary survey of 30 respondents conducted in pre-kindegarten schools in Kenjeran Health Center working area found a 50% incidence rate of caries severity. For the problems outlined earlier, this study is focused on the dental caries incidence rate and the general health conditions of children aged 2-5 years in some pre-kindergarten and kindergarten schools around the Kenjeran Health Center, Surabaya City. This study aims to analyse the correlation of the dental caries incidence rate with the general health conditions and the daily performance of children aged 2-5 years. MATERIALS AND METHODS This was an analytical observational cross-sectional study. The DMF-T index of children aged 2-5 years was collected along with questionnaire results distributed to their mothers. The study sample was 103 pairs of pre-kindergarten and kindergarten school children and their mothers, around Kenjeran Health Center, Surabaya, selected using the cluster random sampling method. This research has received a certificate (628/HRECC.FODM/X/2019) from the Ethics Commission of the Faculty of Dental Medicine, Universitas Airlangga. Each respondent's parent was asked to provide informed consent before participating in this study. The severity of children's caries was observed through direct primary tooth examination (intra oral). Next, DMF-T index measurement was conducted to observe the dental health conditions of children by observing cavities (decay), teeth lost due to caries (missed), and teeth that had been filled. Based on the data collected, the DMF-T score was obtained and analysed statistically to find any correlation with the general health conditions and the daily performance of the children collected through questionnaires distributed to their mothers. The questionnaire used in this study was concerned with oral impact on daily performance (OIDP) of the children involving a) eating and enjoying food, b) talking and pronouncing clearly, c) cleaning teeth, d) sleeping and relaxing, e) smiling, laughing and showing teeth without embarrassment, f) keeping emotions so as not to be easily offended, g) performing main work or social roles, and h) being able to understand conversations with people around them. In addition, a question instrument was added to analyse the general health conditions of the participants. The data obtained were statistically analysed using a regression test with the Statistical Package for Social Science (SPSS, IBM corporation, Illinois, US) software version 22, with a p-value of 0.05%. RESULTS This research was conducted in pre-kindergarten and kindergarten schools in Surabaya on 103 pairs of children aged 2-5 years and their mothers. 
The dependent variable in this study is caries, while the independent variable is the general health conditions of children aged 2-5 years. The data were statistically analysed with a regression test to find the correlation between these variables. The regression test results showed a significant correlation between the incidence rate of dental caries and the general health conditions of those children. The distribution of respondents in this study can be seen in Table 1. Based on Table 1, the results show that 53.4% of the respondents had a high caries index of >6.6. Similarly, the data of the children's general health conditions indicate that 72.8% of the respondents had experienced illness in the last two months. The correlation of dental caries incidence rate with the children's daily performance was statistically analysed using the results of the OIDP questionnaire. The results of this analysis can be seen in Table 2. Table 2 also shows the correlation of the incidence rate of dental caries with the OIDP of each respondent. The table also illustrates that there was a significant correlation between the incidence rate of dental caries and chewing function disorders, with a p-value of 0.000 (p < 0.05). However, there was no significant correlation between the incidence rate of dental caries and speech difficulties, with a p-value of 0.195 (p > 0.05). The OIDP index scores indicate a significant correlation between the incidence rate of dental caries and difficulties in maintaining oral hygiene, with a p-value of 0.039 (p < 0.05). There was also a significant correlation between the incidence rate of dental caries and sleep disorders due to oral and dental health problems, with a p-value of 0.000 (p < 0.05). There was a significant correlation between the incidence rate of dental caries and emotional instability, with a p-value of 0.000 (p < 0.05). There was no significant correlation between the incidence rate of dental caries and avoiding meeting people or difficulties in smiling due to oral and dental health problems, with p-values of 0.077 and 0.078, respectively. In other words, the incidence rate of dental caries had a significant correlation with the general health conditions of children aged 2-5 years, related to chewing function disorders, difficulties in maintaining oral hygiene, sleep disorders due to oral and dental health problems, and emotional instability due to oral and dental health problems. However, the incidence rate of dental caries had no significant correlation with speech difficulties, avoiding meeting people, and difficulties in smiling. Nonetheless, the statistical test using a regression test shows that the incidence rate of dental caries has a significant effect on the general health conditions of children aged <5 years, with a p-value of 0.000 (p < 0.05). DISCUSSION This study shows a significant correlation between the incidence rate of dental caries and the general health conditions of children aged 2-5 years. This may be due to several factors that can increase the severity of dental caries, such as education level; thus, the higher the education level, the higher the awareness of maintaining one's own general health. 5 In addition, it can also be assessed from how dental and oral health is maintained, such as not eating cariogenic foods, use of toothbrushes, brushing teeth frequently, and using proper brushing technique. 11 Similarly, Wening et al. 8 argue that although there is no significant correlation between the severity of dental caries and the nutritional status of children, a decrease in desire to eat can still be triggered by discomfort felt when eating with a toothache.
Table 2. Regression test results on the correlation of the dental caries incidence rate with the children's daily performance using the oral impact on daily performance (OIDP) questionnaire (N = 103 for every risk factor).
  Chewing function disorders: Sig. = *0.000
  Speech difficulties: Sig. = 0.195
  Difficulties in maintaining oral hygiene: Sig. = *0.039
  Avoiding meeting people due to oral and dental health problems: Sig. = 0.077
  Sleep disorders due to oral and dental health problems: Sig. = *0.000
  Difficulties in smiling due to oral and dental health problems: Sig. = 0.078
  Emotional instability due to oral and dental health problems: Sig. = *0.000
  (* significant at p-value < 0.05)
Decreased appetite can also have an impact on children's general health, as nutrient intake decreases and causes decreased endurance. Ramayani et al. 9 state that children suffering from dental caries are lighter in weight than those without dental caries. The findings of the previous studies strengthen the results of this study, which revealed a significant correlation between dental caries and children's general health. This study also evaluates the severity of dental caries in children aged <5 years, using the DMF-T index and OIDP. It is known that severe dental caries can affect quality of life intrinsically and extrinsically. Intrinsically, a severe dental cavity can penetrate the pulp chamber and cause inflammation of the pulp tissue, causing pain and discomfort leading to sleep disorders, which can also reduce immune function. Extrinsically, dental caries can cause poor oral hygiene (OH) and tooth morphology, which can interfere with chewing function, leading to reduced nutrient intake, which can also reduce immune function. The decline in immune function can cause general health problems in toddlers. Dental health is one of several oral health factors that are especially important for child development. Dental caries is the most common dental health problem found in children, and it is caused by food residue that sticks to the teeth. Decalcification of the teeth causes them to become porous, hollow, and even fractured (broken). Dental caries can also cause children to experience loss of chewing power and disruption of digestion, which results in less optimal growth. 12,13 This study found that the age and gender of children aged <5 years had no significant effect on the incidence rate of dental caries. The OIDP questionnaire consists of eight items evaluating the impacts of oral health on children's ability to perform their daily activities, including the measurement of physical, psychological and social dimensions. This questionnaire instrument focuses on ten basic daily activities, namely, eating, talking, cleaning the mouth, performing light physical activities, sleeping, relaxing, smiling, keeping emotions stable, going out, and enjoying interacting with others. [14][15][16] In addition, the influence of daily life performance on oral caries aims to provide alternative sociodental indicators, focusing on measuring a person's ability to carry out the indicated daily activities with the condition of the oral caries. The results of the OIDP questions concerned with the general health conditions of children aged <5 years who often have toothache and dental caries indicated a significant correlation between the incidence rate of dental caries and the frequency of having toothache.
There was a significant correlation between the incidence rate of dental caries and the behaviour of avoiding meeting people due to oral and dental health problems. There was also a significant correlation between the incidence rate of dental caries and emotional instability due to oral and dental health problems. In other words, the incidence rate of dental caries had a significant effect on the emotions of children. While the OIDP focuses on ten basic daily activities, it does not mean that all of those ten basic daily activities necessarily affect dental caries in children. [17][18][19] Finally, it can be concluded that the incidence rate of dental caries has a significant effect on the general health conditions and the daily performance of children aged 2-5 years.
2020-10-05T12:34:52.256Z
2020-09-15T00:00:00.000
{ "year": 2020, "sha1": "dbab86fbfe81dfa0dd0e26e01b5fb793730e69fb", "oa_license": "CCBYSA", "oa_url": "https://e-journal.unair.ac.id/MKG/article/download/18076/12056", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "dbab86fbfe81dfa0dd0e26e01b5fb793730e69fb", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
52799915
pes2o/s2orc
v3-fos-license
Intrinsic Image Decomposition via Structure-Preserving Image Smoothing and Material Recognition Decoupling shading and reflectance from complex scene-images is a long-standing problem in computer vision. We introduce a framework for decomposing an image into the product of an illumination component and a reflectance component. Due to the ill-posed nature of the problem, prior information on shading and reflectance is mandatory. The proposed method adopts the premise that pixels in a region with similar chromaticity values should have the same reflectance. This assumption was used to minimize the l2 norm of the local per-pixel reflectance gradients to extract the shading and reflectance components. To obtain smooth chromatic regions, texture was treated in a new style. Texture was removed in the first step of the algorithm and the smooth image was processed for intrinsic decomposition. In the final step, texture details were added to the intrinsic components based on the material of each pixel. In addition, user-assistance was used to further refine the results. The qualitative and quantitative evaluation on the MIT intrinsic dataset indicated that the quality of intrinsic image decomposition was improved in comparison with previous methods. Introduction Intrinsic image components can be regarded as a set of images describing an image in terms of scene illumination, shape and reflectance of surfaces in the scene. Decomposing an image into its intrinsic components has a wide range of applications in industry. Eliminating the shading component provides illumination-free models that could be used for relighting [1], retexturing [2,3], gray scale colorization [4], and reflectance editing [5]. The process of recovering shading and reflectance can be accomplished by two approaches: using a single image or multiple images. Employing depth information and image sequences for deriving intrinsic components have also been considered in many studies. It is also convenient to aid intrinsic image decomposition through user-assistance. The user specifies pixels that have constant illumination or reflectance in order to disambiguate illumination and reflectance [6,7]. Additional information contributes to the improvement of intrinsic image decomposition. However, the required multiple images and depth information limit the general application of these methods. Acquiring intrinsic components from a single image is a non-trivial task due to its ill-posed nature, and solving this problem remains an open challenge. To solve the intrinsic decomposition problem, it is necessary to obtain prior knowledge on reflectance and shading. Some approaches consider the global sparsity assumption by stating that each image is composed of a small set of chrominance values, and smooth variation of intensity is due to luminance changes which should be assigned to the shading component. These methods usually apply clustering algorithms to find regions with similar chromaticity values and utilize the information within each cluster to calculate the reflectance value. This paper proposes a new framework for intrinsic image decomposition. Our approach applies several steps in order to obtain high quality shading and reflectance components from a single image. The approach is based on the assumption that regions with smooth intensity variations share the same material properties and have the same reflectance. 
Thus, the reflectance of a pixel can be obtained as a weighted function of a connected set of pixels (O) with similar intensity values. To find O for an input pixel, region growing was applied to ensure that O is connected and any change of intensity is smooth. To avoid ambiguity caused by texture, we treat texture differently from the preceding methods. Texture details were removed from the image and the smooth image was processed for intrinsic decomposition. Texture details were added to the reflectance or shading components based on the material of each pixel in the final stage. To evaluate the performance of our method qualitatively, the algorithm was tested on several natural-scene images to demonstrate the advantages of the proposed method. For quantitative evaluation, the MIT intrinsic dataset was considered and the results were compared with results of methods tested on this dataset. The contributions of the paper towards solving the intrinsic image decomposition problem can be stated as below: • Applying material recognition for handling fine texture. • Calculating the reflectance of the input pixel based on its neighbor pixels obtained via region growing which results in reflectance values that are globally more preserved. • Performing structural-preserving image smoothing for handling fine texture details and enhancing the performance of region growing. The rest of the paper is organized as follows: Section 2 describes pervious work related to intrinsic image decomposition. In Section 3, the method for intrinsic image decomposition is explained and the advantages of the method is discussed. Section 4 shows the experimental results and Section 5 concludes the paper. Related Work Intrinsic image decomposition methods which compute shading and reflectance components are briefly reviewed in this section. We divide the methods into three categories, Retinexbased methods, global sparsity assumption methods, and methods that use additional information other than the 2D image. Retinex-Based methods In 1971, Land and McCann [8] proposed the Retinex algorithm. They assumed that Mondrian-like images have regions of constant reflectance where large illumination gradient indicates reflectance changes, and small gradients are caused by shading. Intrinsic images were then decomposed by integrating their respective derivatives across the input image. For methods such as simple Retinex which operate on grayscale images, the output reflectance image has the same chromaticity as the input image. Therefore, this approach struggles if the assumptions of white light and Lambertian surfaces are violated. However, using RGB information, makes the method more resilient against the violation of these assumptions [9]. Inspired by the work of McCann [8], Fun et al. [10] extended the Retinex algorithm for color images by assuming that shading variations does not alter chromaticity, and associated reflectance derivatives to significant chromaticity changes. Garces et al. [11] leveraged the correlation between reflectance and chrominance reported in [10] by detecting regions of similar chromaticity in the input image to approximate regions of similar reflectance. Other algorithms have been built upon the original Retinex algorithm [12][13][14]. However, the original algorithm outperformed all the other algorithms prior to 2009 when tested on the MIT intrinsic image dataset [15]. Tappen et al. [16] presented a system that used multiple cues for recovering shading and reflectance components. 
They trained a classifier based on color information to distinguish illumination changes caused by shading and reflectance. They applied Markov Random Fields to propagate the classification of areas in order to disambiguate regions where the local analysis delivered unsatisfactory results. In Tappen's method, training a comprehensive classifier suitable for all range of shading and reflectance is exhaustive, and classifying pixels into shading and reflectance based on local evidence is not always easy. Methods based on the Retinex algorithm are intuitively simple and efficient but the mentioned preassumption on real scenes does not always hold. Global sparsity prior Some recent methods assume the global sparsity prior on reflectance which suggests natural images are subjugated by a relatively small set of material colors [17]. Bell et al. [18] used Kmeans clustering to estimate distinct regions and employed Conditional Random Fields (CRF) for pixel labeling. Applying clustering algorithms to find regions with distinct reflectance in the image has some disadvantages. Firstly, there is no guarantee that pixels in each cluster form connected regions. This could violate the assumption made by Fun et al. [10]. Secondly, in the case of c 0 and c 1 discontinuity, shading smoothness assumption breaks and leads to undesired clustering of the input image. Bi et al. [19] proposed an L 1 image transform for image flattening and used the flattened image to develop a pipeline for intrinsic image decomposition relying on probabilistic boosting trees for reflectance labelling. Utilizing the smooth image for clustering is more desirable compared to clustering the raw input image because of higher probability of associating regions of similar chrominance to reflectance. Rother et al. [14] introduced prior on reflectance values as being drawn from a sparse set of basis colors resulting in a random field model with global, latent variables and pixellevel output reflectance values. Solving the intrinsic decomposition by energy minimization problems has been frequently proposed. Shen et al. [7] suggested neighboring pixels have similar intensity values and therefore have similar reflectance. Their decomposition was formulated by minimizing an energy function with the addition of a weighting constraint to the local image properties. Barron et al. [20] presented a unified method to shape, shading, and reflectance estimation from a single image. Nevertheless, for scene-level images, their assumption about depth continuity does not hold, and unsatisfactory results are produced for such images. Finlayson et al. [21] proposed entropy minimization for calculating illuminant-free images. The invariant image was derived from the physics behind color formation in the presence of a Planckian light source, Lambertian surfaces, and narrowband imaging sensors. Shen et al. [22] suggested neighboring pixels with similar chromaticity share the same reflectance. Also they adopted the premise that natural images are dominated by a small set of material colors. They used multi-resolution analysis to enforce the local reflectance sparseness constraint at a global level. They further applied a total-variations-like cost term to take into account the global sparsity assumption. Multiple Images and Image sequences Some techniques use a sequence of images or video streams for intrinsic image decomposition. 
While some of these methods use multiple images to find the intrinsic component for a single image, other methods try to find the intrinsic components throughout the entire input video stream. Bonnel et al. [23] used a hybrid l 2 -l p formulation that separates image gradients into smooth illumination and sparse reflectance gradients using look-up tables. They used a multi-scale parallelized solver to reconstruct the reflectance and illumination from these gradients while enforcing spatial and temporal reflectance constraints. They also used userassistance for refining the results of the initial decomposition. More recently, Meka et al. [9] proposed a variational approach by solving an l 2 -l p optimization problem to find the intrinsic image components. Their optimization problem includes local sparsity prior on reflectance, spatio-temporal reflectance consistency prior, reflectance clustering prior, and a data fitting term. Laffont et al. [24,25] uses a collection of images taken from a scene for intrinsic image decomposition. Assuming Lambertian surfaces, they use Multiview-stereo to produce an oriented 3D cloud point of the scene, from which, they derive relationships between reflectance values at different locations, across multiple views. They then identify reflectance ratios between pairs of points and infer constraints to optimize a coherent solution across multiple views and illuminations. Lee et al. [26] used both image sequences and depth information to extract intrinsic components from video. Their shading constraints enforce relationships among the shading components of different surface points according to the similarity of the surface orientation. Further, temporal constraints applied to video data allowed for handling view-dependent non-Lambertian reflections. Incorporating video sequences makes it possible to use spatio-temporal features to increase the accuracy of intrinsic image decomposition and further provides information to handle intense lighting conditions. However, requirement of additional image frames and depth cues limits the application of these method when only a single image is available for decomposition. The main focus of this study is to obtain intrinsic image components from a single image. In summary, our method applies the following steps to extract the shading and reflectance components: To preserve the piecewise constancy of chrominance values, we separate the texture information and process the smooth image for intrinsic decomposition. In the next step, assuming Mondrian-like images, we achieve a sparse solution by minimizing an l 2 norm of the local per-pixel reflectance gradients. In contrast to conventional methods that use fixed size patches, we apply region growing for choosing the local pixels used in the minimization process. In addition, we consider material recognition in our frame work. Through material recognition, texture that was removed in the initial step, will be added to either reflectance or shading component based on the material of each pixel. Finally we consider user brushes for assisting the minimization process. The addition of user brushes allows to disambiguate shading and reflectance at a global level by defining regions of the image that share either the same reflectance or shading information. This step will help to find regions with similar reflectance and shading information at a global level defined by the user. 
Our Method Intrinsic image decomposition is an ill-posed problem and cannot be solved without prior information on reflectance and illumination. The proposed approach in this paper is built based on well-established assumptions on reflectance and illumination, suggested by Funt et al. [10] and the Retinex algorithm. The first assumption suggests that changes in reflectance are associated with changes in chromaticity, and the Retinex algorithm suggests that shading is smooth. Based on these assumptions, we employed the prior on shading and reflectance as an image region with smooth intensity variations indicates constant reflectance. Thus, the reflectance of pixel p can be represented by the weighted summation of pixels in a set O that contains p, whose members have similar intensity compared to p. Implementation steps of our approach are depicted in Fig 1. First, a smooth version of the input image was obtained via structure preserving image smoothing. Then, the smooth version of the image was used for intrinsic decomposition, and the shading and reflectance components were extracted. In the final stage, the texture information was added to either the shading or the reflectance component based on the material of each pixel. Structure-preserving image smoothing The proposed assumption on reflectance and shading suggests that pixels with similar chromaticity are parts of the same material and hence share the same reflectance values. Retinex algorithm [8] confirms the proposed assumption by suggesting that any change of intensity in an image is caused by either reflectance or shading, and generally, large gradients correspond to reflectance changes while shading is smoother. In some cases, the Retinex pre-assumption may not hold. For instance, the wrinkles on a piece of fabric produce large gradient changes that belong to the shading component. Therefore, removing texture details and processing the smooth image increases the chance of finding regions with similar reflectance values. In general, the image smoothing step has two objectives: 1. Handle fine synthetic texture: Many natural scene images contain fabric, tiles and similar regions with fine and smooth texture details that belong to the reflectance component. However, in many cases, variations of illumination may be assigned to the shading component. By image smoothing, it is possible to separate texture details which can be later reassigned to the shading or reflectance components based on the output of the material recognition algorithm. 2. Improving the calculated reflectance value: In order to estimate the reflectance of a pixel, we used the color values of its neighbor pixels which are found through region growing. Existence of small edges in the image reduces the performance of the region growing algorithm. In the smooth version of the image, larger regions can be found for calculating the reflectance value of the input pixel. In order to find image regions with similar reflectance, it is necessary to remove as much shading information as possible without changing the image structure, i.e. obtaining the structure-preserved smooth version of the image. To acquire the smooth image, the method presented by Karacan et al. [27] was adopted. The desired goal is to decompose a given image I into its structural (J) and textural (T) parts as follows: The structural component of an image pixel is defined as: where N(p,r) shows a square neighborhood centered at p with (2r+1) 2 pixels. 
w_pq measures the similarity between k×k patches centered on pixels p and q. The similarity between two regions can be calculated via the region covariance descriptor proposed by [28], where an image region R is described with a d×d covariance matrix: z_i, i = 1,...,n denotes a d-dimensional feature vector inside R and μ is the mean of these feature vectors. The features are intensity values, first and second order intensity changes in the horizontal and vertical directions of the image, and pixel locations. Karacan et al. [27] proposed two functions for calculating the similarity between pixels. Based on our observations, the w_pq that better suits our approach is defined as: where C = C_p + C_q and μ are the region covariance and mean values extracted for the image patches. Intrinsic decomposition In the second stage of our algorithm, the smooth image J is processed for intrinsic decomposition. The interaction between light and objects can be described using RGB color channels. Assuming Lambertian surfaces, the observed color at a pixel of the input image can be denoted as: R and S are the reflectance and shading components respectively, and × denotes per-channel multiplication. The objective is to retrieve the S and R components for every pixel of J, but Eq (5) is an ill-posed problem with two unknowns and one known, which results in an infinite set of answers without any prior information on shading or reflectance. The Lambertian surface is the general assumption for solving many image processing problems. In the case of intrinsic image decomposition using a single image, since there is no information about the lighting conditions and scene geometry, it is not possible to use more complex models, and the Lambertian assumption has been used frequently in previous methods [2,6,19]. In order to solve Eq (5), a new decomposition approach is developed based on the assumption that pixels of a connected region Ω with similar pixel intensity values have the same reflectance. Thus, R for a pixel p_i ∈ Ω is obtained as the weighted sum of reflectance values of pixels p_j ∈ Ω, denoted as: where w_ij defines the similarity between p_i and p_j. In (6), we have followed the notation of Shen et al. [7]; however, our approach for choosing Ω is different, as will be explained in the next section. To measure similarity between pixels, many affinity functions have been proposed in the literature of image segmentation [24,27,29,30] and colorization [31]. We considered two terms for measuring affinity between p_i and p_j: illumination similarity and color angle difference. The illumination difference between p_i and p_j, as a conventional affinity function, is denoted w^I_ij and formulated as: where Y is the luminance component of a pixel in the YUV color space and σ_y represents the variance between the luminance of pixel i and its neighbor pixels. The second affinity measure used in this study is the color difference introduced in [16], where each pixel is treated as a vector in the RGB space and the difference between color angles is used to measure the similarity between two pixels: where σ_c represents the variance between the color difference of pixel i and its neighbor pixels, and ∠ is the angle between pixels i and j. To obtain the affinity function, Eqs (7) and (8) are combined as: Based on the above explanation, the energy term for intrinsic image decomposition is formulated as: where Ω_i is obtained by region growing from p_i. The fast region-growing algorithm presented in [32] was employed for region growing.
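Since the display forms of Eqs (6)-(9) did not survive in the text above, the snippet below gives one plausible instantiation of the two affinity cues and of the weighted reflectance estimate over a region-grown set Ω_i: Gaussian kernels on the luminance difference and on the RGB color angle, multiplied together and normalized. The exact functional forms and normalization used in the paper may differ, so this is an illustration of the idea rather than the paper's definition.

```python
import numpy as np

def luminance(rgb):
    """Y component of YUV for an RGB triplet in [0, 1]."""
    return 0.299 * rgb[0] + 0.587 * rgb[1] + 0.114 * rgb[2]

def affinity(ci, cj, sigma_y, sigma_c):
    """Combined luminance / colour-angle affinity between two pixels (assumed Gaussian forms)."""
    w_lum = np.exp(-(luminance(ci) - luminance(cj))**2 / (2.0 * sigma_y**2 + 1e-12))
    cos_a = np.dot(ci, cj) / (np.linalg.norm(ci) * np.linalg.norm(cj) + 1e-12)
    angle = np.arccos(np.clip(cos_a, -1.0, 1.0))
    w_col = np.exp(-angle**2 / (2.0 * sigma_c**2 + 1e-12))
    return w_lum * w_col

def estimate_reflectance(i, omega_i, colors, reflectance, sigma_y, sigma_c):
    """Normalised weighted average of reflectance estimates over the region-grown set omega_i."""
    w = np.array([affinity(colors[i], colors[j], sigma_y, sigma_c) for j in omega_i])
    Rj = np.array([reflectance[j] for j in omega_i])          # (|omega_i|, 3)
    return (w[:, None] * Rj).sum(axis=0) / (w.sum() + 1e-12)
```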
The optimization problem to be solved is then defined as the minimization of E(R, S^-1) over R and S^-1. The quadratic function (11) was solved by setting the derivatives with respect to its dependent variables to zero (i.e., R_i,r, R_i,g, R_i,b, and S^-1), and the Gauss-Seidel method was applied iteratively to improve the results of the linear solver [33]. Discussion on selecting Ω Selecting the pixels used to obtain the reflectance value at p_i needs to be handled carefully. In order to find the region Ω_i, we performed region growing and used p_i as the input seed. Eq (9) was used to find w_ij for pixels in Ω_i. The Ω_i obtained from region growing consists of pixels that more likely have the same reflectance. This is supported by Funt et al. [10], which suggests that regions with similar reflectance have similar chromaticity values. In contrast, Shen et al. [7] selected pixels in a k×k patch centered at p_i to calculate the reflectance value at p_i. This method does not impose any constraints on the properties of the selected pixels in the patch, which limits the choice of k to small values. Thus, the reflectance of each pixel is only represented by a limited number of neighboring pixels. Our approach to finding the reflectance at p_i is also more advantageous compared to those methods that employ clustering or segmentation to find a region with similar reflectance to p_i. As an example, Bi et al. [19] used clustering in the CIE-Lab color space and merged the generated clusters based on similarity measures to find regions with similar reflectance. It seems that their approach has two disadvantages: first, clustering generates unconnected image segments that require an additional merging step to find regions with similar reflectance. Second, selecting an efficient number of clusters for an arbitrary image is an issue on its own. An insufficient number of clusters leads to selecting clusters with high intra-class variance, which in turn results in undesired cluster merging. Common issues such as handling empty clusters and algorithm convergence are other problems that need to be considered. The proposed region growing approach for finding pixels with similar reflectance ensures that Ω_i is connected and smooth. Region growing is a costly process; therefore, we add two steps to speed up this process. First, to prevent generating very large regions, the region size is limited to pixels that are within a distance of D pixels from the input pixel (D = 30 was chosen in this study). Second, to execute the region growing algorithm less frequently, we make use of pixel labeling. For this task, let Ω_last be the last output of the region growing algorithm and p_i be the next input pixel. If p_i has been labeled to be part of Ω_last, that is p_i ∈ Ω_last, we let region Ω_i be the same as Ω_last with a small modification. To obtain Ω_i, pixels farther than D pixels from p_i are removed from Ω_last, and pixels closer than D with an intensity less than half of the average values of Ω_last are added to form Ω_i. Algorithm 1 summarizes the steps performed to generate the image region required to calculate w_ij. Algorithm 1. Reducing calls to the region growing algorithm: 1. Region distance = D; 2. For input pixel p_i; 3. For pixels in range ||p_j - p_i|| ≤ D; 10. Let p_j ∈ Ω_i; 12. Calculate w_ij. User brushes In many applications such as texture mapping and matting, user interaction is required. This interaction can be extended to the intrinsic image decomposition process to improve the results. Bousseau et al.
[6] recommended three kinds of user strokes for specifying local cues about illumination and reflectance. We apply the constant-reflectance and constant-illumination brushes. Pixels marked with constant-reflectance brush can be used to find a local cue about reflectance of the image pixels as: where z(.) is a normalization factor defined as z(β) = 1/| β| that ensures strokes have an influence independent of their size, and | β| is the number of pixels specified by the user-brush. Eq (12) was generated taking R as the variable to be optimized since the optimization process was solved for reflectance and shading was obtained using Eq (5). The constant-illumination brush covering pixel p i and p j denotes that I j S j = I i S j and hence the energy function for this brush can be formulated as: The optimization problem for intrinsic image decomposition with added user brushes can be defined as: Learning material After performing intrinsic decomposition for image J, the texture information should be assigned to the correct component. High frequency changes in the image are conventionally assigned to the reflectance component [8,34]. However, there are cases that high frequency changes are part of the shading component (e.g. surfaces with bumps and wrinkles). In order to obtain better decomposition results, we apply material recognition which helps to assign texture information to the correct intrinsic component. Material recognition is a challenging problem by itself and many methods have been proposed for solving this problem [18,35,36]. Bell et al. [18] presented a large scale dataset of materials in the wild and used learning techniques for material recognition. They enabled detection of materials in 23 categories which cover a large variety of materials (e.g. foliage, human skin, wood, glass and etc.). We directly use their implementation to find out if assigning texture information to intrinsic components based on material information has a positive effect on the intrinsic image decomposition problem. Fig 4 shows an example where material recognition was beneficial for adding texture details to the correct intrinsic component. Fig 4A and 4B show the reflectance and shading component obtained from the scene shown in Fig 2. The colored regions in Fig 4C and 4D show how the texture removed in the image smoothing process should be added to shading and reflectance components respectively. For example, in Fig 4D, the foliage and water are the colored regions and hence, the texture details of pixels that belong to these regions should be added to the shading component. Fig 4E-4H show the intrinsic components before and after assigning texture components to shading and reflectance images. It can be observed that fine details added to the shading and reflectance images improve the results. A Lookup Table (LUT) was created for assigning texture details to the intrinsic components. A number between 1 and 23 was assigned to each unique material in the first column of the LUT. The second column of the LUT determines the intrinsic component for the corresponding material. For example, the texture details of Foliage were assigned to the shading component and the texture details of materials such as Brick, Fabric, and Carpet were assigned to reflectance component. After detecting the material type for an input pixel, the texture details of that pixel were assigned to the reflectance or shading component based on this LUT. 
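A minimal sketch of this look-up-table step is given below: each material label maps to the intrinsic component that should receive the texture detail, and the per-pixel texture layer is then added to that component. The additive recombination, the label strings and the particular assignments shown are assumptions for illustration only; the handful of entries simply mirrors the examples mentioned above (foliage, water and hair to shading; fabric, brick and carpet to reflectance).

```python
import numpy as np

# Hypothetical excerpt of the material -> component table (labels are illustrative).
TEXTURE_TARGET = {"foliage": "S", "water": "S", "hair": "S",
                  "fabric": "R", "brick": "R", "carpet": "R"}

def reassign_texture(R, S, T, material_labels, default="R"):
    """Add the texture layer T back to reflectance R or shading S per pixel.

    R, S, T          : H x W x 3 float arrays (reflectance, shading, texture detail).
    material_labels  : H x W array of material-name strings from the recogniser.
    The additive split of the image into structure plus texture is an assumption
    about Eq (1), whose exact form is not reproduced here.
    """
    goes_to_shading = np.isin(material_labels,
                              [m for m, c in TEXTURE_TARGET.items() if c == "S"])
    if default == "S":
        goes_to_shading |= ~np.isin(material_labels, list(TEXTURE_TARGET))
    mask = goes_to_shading[..., None]                 # broadcast over colour channels
    return np.where(mask, R, R + T), np.where(mask, S + T, S)
```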
The correspondence between each material class and the shading and reflectance components is supplied in the supplementary material (S1 File). Experimental Results This section reports the experimental results to evaluate the performance of the proposed method. The full pipeline for producing the intrinsic components is illustrated in Fig 5. Fig 5D and 5E illustrate how texture information should be added to the reflectance or shading components. Fig 5F and 5I illustrate the extracted reflectance and shading components for the smooth image, while Fig 5G and 5J show the reflectance and shading components with added texture details. As an example, the texture of the human hair, shown in yellow, was added to the shading component (Fig 5J), while the texture information of the purple region was assigned to the reflectance component (Fig 5I). To our knowledge, this is a very fine-level improvement in intrinsic image decomposition since previous methods do not provide such solutions for handling small illumination variations. Fig 5H and 5K show the reflectance and shading components when user brushes were considered in the decomposition process. The texture details were added in the same way as in the automatic decomposition mode. Fig 6 shows the performance of the proposed method for handling non-smooth shading without applying user brushes (i.e., the doll's winter cap). The figures illustrate the improvement of results when compared with the methods of [6,7,16,37]. This improvement is due to texture removal in the first stage of the proposed algorithm, which enables more accurate selection of regions with similar reflectance. As previously discussed, Shen et al. [8] used a small patch of pixels for reflectance calculation. This has led to obtaining different reflectance values for a region with the same material, as shown in Fig 6. In [6], the effect of JPEG compression significantly distorts the output reflectance. In [16] and [37] the reflectance components are not correctly extracted and many shading details are mistakenly added to the reflectance image. Fig 7 shows a visual comparison between our method and the recent methods of [19] and [38], which adopt the sparsity assumption on reflectance. The input image is quite complex due to large shading variations such as highlights and shadows (Fig 7A). Fig 7F shows the input image with added brushes. Fig 7B-7D show the reflectance components extracted from our method and the methods of [19] and [38], respectively. Three regions are shown distinctly for better visualization. First, the reflectance component for the human face region extracted using our method is more uniform compared to the results of [19,38]. Second, our method was able to handle the shading component in the sofa region, including its specular regions, more accurately. In Fig 8 we illustrate how our method can handle intense lighting conditions. The building in the scene has produced an intense shadow region on the grass leading to a large color difference between regions "a" and "b", which cannot be handled with the region growing introduced in (6). Fig 8B shows the user brushes applied to the image, which specify regions with constant shading or reflectance. Fig 8E and 8F show the extracted shading and reflectance components with and without user brushes. When brushes were applied, the shadows on the grass were correctly detected and assigned to the shading component. Fig 9 shows an example where processing the smooth image for intrinsic image decomposition is beneficial. This example is focused on the patterned fabric. 
We compare the result of our method with that of Shen et al. [7], where the raw input image is processed for intrinsic decomposition. It is obvious that the pattern of the fabric is part of the reflectance component. Fig 9A shows the original image and Fig 9B and 9C show the structural and texture components. The result of intrinsic decomposition using our method and Shen et al. [7] are shown in Fig 9D through 9G. Using our method, the texture details of the fabric region were correctly assigned to the reflectance component. This example shows that our method can be used for intrinsic image decomposition of images with fine texture details. For quantitative evaluation of our intrinsic image decomposition method, ground truth data about the scene geometry and lighting conditions is required. Intrinsic images in the wild and the MIT intrinsic image dataset are the available datasets that represent images in terms of their intrinsic components. The intrinsic images database is a collection of annotated images from real-world consumer photographs. In this database, surface segments are drown on crowd souring and surface properties including textural and contextual information are added for each segment [18]. The MIT intrinsic images dataset was created by Grosse et al. [15]. To create the complete image, each object was photographed using a polarizing filter set to maximum specular value. The diffuse image was captured by setting up the filter to remove specular regions. Each object was then painted and re-captured to obtain the shading component. These three images were used to calculate the reflectance and specular images. We used the MIT dataset since special imaging conditions were considered for acquiring the shading and reflectance components. Further explanation about the MIT intrinsic dataset can be found in [15]. The objective performance measures used in this evaluation was the Least Mean Square Error (LMSE) as defined in [15]. Table 1 provides the LMSE values obtained from our method with and without applying user brushes. The results presented by Shen J. et al. [7], Shen L. et al. [22], and Color-Retinex [39] as reported in [15] are also shown in Table 1. The values shown in bold indicate the lowest value for each image among the tested methods. For method of Shen J. et al. [21] and Color-Retinex, the LMSE values for images that contain specular regions were not provided. The values in Table 1 show that when user brushes were applied, the LMSE values for our method were less than the previous methods except for Teabag1 and Teabag2. We have calculated the average value of LMSE values for all the images except those with specular regions. The average value for our method with user brushes was 0.0129 which is about 0.07 better that the method of Shen L, et al. [22]. LMSE values for images with specular regions were obtained by adding the ground truth shading and specular values. For each image, the ground truth reflectance and shading components are also shown. For visual comparison, the output reflectance and shading components obtained from the work Shen J. et al. [7] are also depicted. The first row shows the shading and reflectance components for the turtle image. This example illustrates the effectiveness of the proposed method approach to handle the texture. Our method has successfully extracted the reflectance component while in the reflectance component extracted from [7], the reflectance details are incorrectly added to the shading component. 
In the second row, the example shows that our reflectance output is more monotonic compared to the reflectance component obtained from the work of Shen J. et al. [7]. The last row shows a limitation of the proposed method when user brushes were not applied. For this image, the surface of the apple violates the Lambert assumption, and the reflectance component of this image contains smooth reflectance variations. The complete results on the MIT intrinsic images dataset including Base, Shading, Reflectance, and Texture images can be found in (S10 File). In order to illustrate how the added user brushes improve the output of our algorithm, the result of intrinsic decomposition for the image Apple from the MIT dataset is shown in Fig 11. Reflectance and shading components extracted with and without user brushes, added user brushes, and the ground truth images are shown in this figure. When user brushes were applied, more shadows have been correctly assigned to the shading component. Also, the specular region on the surface of the apple were correctly assigned to the shading component. Fig 12 shows more examples of intrinsic image decomposition using our method and the method of Shen L. et al. [22]. For the image Raccoon, applying the constant reflectance brush resulted in accurate extraction of the reflectance component. In addition, the shadows in the original image were correctly added to the shading component. For Cup1, less shading information is present in the reflectance component compared to Shen L. et al. [22], however, the shading component from Shen L. et al. [22] method holds less reflectance information. The image of the Paper1 is another example with complex shading. As the results shows, our method has been able to correctly decompose this surface into shading and reflectance components. Implementation Our algorithm was implemented in MATLAB 2015a using an Intel Corei7 CPU for data processing. The region growing algorithm was programmed in C++ and combined with the rest of the code through a Mex file. For image smoothing, the code provided by Karacan et al. [27] was utilized. Our intrinsic decomposition was implemented in two parts: weight matrix calculation and optimization step. Calculating the w ij matrix required less than 1 minute to process for an image with 640×480 pixels. The Gaussian-Seidel iterative process requires intensive vector multiplications and each iteration takes about 10-13 second. In general, 15 to 30 iterations were required to obtain the reported results. The time variation in the optimization step for each iteration was due to un-fixed w ij vector length for each pixel. The vector size was determined by the number of pixels in O i . The overall intrinsic decomposition processing time of our approach is comparable to recent intrinsic image decomposition algorithms. Bi et al. [19] has reported that their image decomposition pipeline requires about 5 to 10 minutes to process. The proposed method by Shen J. et al. [7] requires 200 iterations for the algorithm to obtain acceptable decomposition, and each iteration requires 2 to 5 seconds of processing time depending on the size of image patches used for calculating the weights. Weight calculations should also be added to the processing time required for intrinsic decomposition for this method. Parallel processing and utilization of GPU can improve the computation speed of our algorithm, which is also a recommended solution suggested in [7,19]. 
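For readers who want to reproduce the optimization step, a generic Gauss-Seidel sweep for a linear system A x = b is sketched below. In the paper's setting the unknowns would stack the per-pixel R i,r , R i,g , R i,b and S -1 values, and the system follows from zeroing the derivatives of the quadratic energy; those details are not reproduced here, so this is only a sketch of the iterative solver, not the authors' implementation.

```python
import numpy as np

def gauss_seidel(A, b, x0=None, iters=30, tol=1e-6):
    """Plain Gauss-Seidel sweeps for A x = b (A square, non-zero diagonal)."""
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.astype(float).copy()
    for _ in range(iters):
        x_prev = x.copy()
        for i in range(n):
            # Use the already-updated entries x[:i] and the old entries x[i+1:].
            sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (b[i] - sigma) / A[i, i]
        if np.linalg.norm(x - x_prev, np.inf) < tol:
            break
    return x

# Tiny usage example on a diagonally dominant system:
A = np.array([[4.0, 1.0, 0.0], [1.0, 5.0, 2.0], [0.0, 2.0, 6.0]])
b = np.array([1.0, 2.0, 3.0])
print(gauss_seidel(A, b))
```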
Conclusions A framework for intrinsic image decomposition was presented in this paper. Our method was based on the assumption that pixels in a region with similar chromaticity share the same reflectance value. To ensure Mondrian-like images, we applied structure preserving image smoothing and processed the smooth image for intrinsic decomposition. The reflectance component was obtained by minimizing an energy function which defined the reflectance value of a pixel as the weighted sum of reflectance values of pixels obtained by region growing the current pixel. Texture details separated in the smoothing step were added to the shading or reflectance components based on the material of each pixel. Qualitative examples showed that processing the smooth image along with user brushes and material recognition results in correct separation of shading and reflectance components for a wide range of images with large illumination variations and complex surfaces. Quantitative experiments conducted on the MIT dataset showed that our method has improved the quality of intrinsic image decomposition compared to previous methods tested on this dataset. The average LMSE values of our method on the MIT intrinsic images was about 0.013 which showed to be at least 0.07 better than the methods tested on this dataset. Altogether, our approach to intrinsic image decomposition allows for accurate extraction of shading and reflectance components for a wide range of images which makes this method attractive for applications that require intrinsic components of the image.
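For completeness, the LMSE measure used throughout the quantitative evaluation can be sketched as follows. The window size, step, and normalization below follow the spirit of Grosse et al. [15] but may differ in detail from the reference implementation, so the values it produces should not be compared directly against published numbers.

```python
import numpy as np

def ssq_error(gt, pred):
    # Scale-invariant squared error: pred is rescaled by the alpha that
    # best matches gt within the window (least squares).
    alpha = (gt * pred).sum() / max((pred * pred).sum(), 1e-12)
    return ((gt - alpha * pred) ** 2).sum()

def lmse(gt, pred, window=20, step=10):
    """Local (windowed) scale-invariant MSE, normalized by the total
    energy of the ground truth, roughly in the spirit of Grosse et al."""
    h, w = gt.shape[:2]
    err, denom = 0.0, 0.0
    for i in range(0, max(h - window, 0) + 1, step):
        for j in range(0, max(w - window, 0) + 1, step):
            g = gt[i:i + window, j:j + window]
            p = pred[i:i + window, j:j + window]
            err += ssq_error(g, p)
            denom += (g ** 2).sum()
    return err / max(denom, 1e-12)

# The per-image score is typically the average over the two components:
# score = 0.5 * lmse(gt_shading, est_shading) + 0.5 * lmse(gt_reflectance, est_reflectance)
```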
2018-04-03T01:01:05.530Z
2016-12-16T00:00:00.000
{ "year": 2016, "sha1": "77ee270155b1a6c161023a136cf4f8e0dc4ea13c", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0166772&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "77ee270155b1a6c161023a136cf4f8e0dc4ea13c", "s2fieldsofstudy": [ "Computer Science", "Materials Science" ], "extfieldsofstudy": [ "Computer Science", "Medicine" ] }
251268884
pes2o/s2orc
v3-fos-license
Hypospadias: A Comprehensive Review Including Its Embryology, Etiology and Surgical Techniques Hypospadias is among the most prevalent urogenital malformations in male newborns. It is characterized by the displacement of the urethral meatus to the ventral side of the penis, an aberrant ventral curve of the penis referred to as "chordee," and an abnormally arranged foreskin with a "hood" found dorsally and lacking foreskin ventrally. Patients may have an extra genitourinary abnormality based on the area of the lesion. In around 70% of cases, the urethral meatus is positioned distally to the shaft, representing a milder form of the disease. The remaining 30% of cases are located proximally, are more complicated, and require further evaluation. Although the origin of hypospadias is mostly obscure, several suggestions exist about genetic susceptibility and hormonal factors. The objective of hypospadias restoration is to restore aesthetic and functional regularity, and surgery is currently advised at a young age, mostly between six and 18 months. At any age, hypospadias can be repaired with an equivalent risk of complications, functional outcomes, and aesthetic outcomes. However, the best age of treatment is still undetermined. Even though the long-term effects on appearance and sexual function are usually good, males may be less likely to make the first move after rectification. Also, people who have hypospadias treated are twice as likely to have problems with their lower urinary tract. These problems can last for years after the initial repair. Introduction And Background Hypospadias is a congenital deformity of the external genitalia in males. It is defined by the aberrant growth of the urethral fold and the ventral foreskin of the penis, which results in the incorrect location of the urethral opening [1]. In hypospadias, the external urethral meatus may be mispositioned to a different degree and may be accompanied by penile curving. Patients could have an extra genitourinary abnormality based on the location of the hypospadias [2,3]. It is considered among the most prevalent congenital abnormalities in males. Hypospadias occurs in one out of 150 to 300 live births [4,5]. After undescended testis, hypospadias is the second most common congenital abnormality [2]. Hypospadias is frequently characterized as posterior, penile, or anterior based on the preoperative location of the meatus. Nearly 70% of hypospadias are glandular or distally placed on the penis and are regarded as moderate variants, while the remaining are more severe and complicated. This classification was suggested by Duckett [6] (Figure 1). Approximately 18.6 out of every 10,000 live births in Europe are affected by hypospadias. Registrations in 23 European registries between 2001 and 2010 demonstrated a steady number despite previously observed increases and decreases in temporal patterns [9]. North America has the highest prevalence, with 34.2 cases per 10,000 live births, whereas Asia has the lowest, at 0.6-69 cases per 10,000 live births. Even with more than 90 million screened newborns, the real global prevalence and trends are still difficult to quantify due to various methodological issues [5]. Given its frequency, hypospadias can place a significant strain on healthcare spending. A significant risk of complications may necessitate many procedures, particularly in the most severe instances. In addition, a substantial proportion of patients struggle with aesthetic or functional issues [2,10]. 
Etiology Concerning the genesis of hypospadias, several explanations have been offered, including genetic susceptibility, insufficient prenatal hormone stimulation, maternal-placental variables, and environmental impacts. Thus, it is plausible that hypospadias has several causes [11]. Premature birth, small-forgestational-age newborns who are less than the 10th percentile for weight, length, and/or head circumference, and intrauterine growth restriction are risk factors. All of these have been linked to an increased chance of having a baby with hypospadias [12,13] (Table 1). Hypospadias rates have been linked to both inadequate placentas and the use of assisted reproductive technologies [14,15]. Genetic abnormalities Environmental exposure Small for gestational age ( <10 th percentile for weight, length, and head circumference ) Intra-uterine growth restriction. TABLE 1: Risk factors for hypospadias. One in every seven occurrences of hypospadias is passed down through first, second, or third-degree family members. For anterior and middle forms, familial occurrence appears to be more prevalent than for posterior kinds. It is estimated that between 9 and 17% of the male siblings of a hypospadias-infected kid may get the condition [11]. One-third of hypospadias are directly linked to a genetic abnormality [16]. Nearly 200 disorders with recognized genetic etiology are connected with hypospadias. However, only a percentage of males with idiopathic variants have this condition [17]. The most common associations are WAGR syndrome, Denys-Drash syndrome, and Smith-Lemli-Opitz syndrome [2,18]. Another important factor in hypospadias is hormonal influence. Most hypospadias is solitary conditions, while uni-bilateral cryptorchidism and micropenis are related abnormalities [19]. These co-morbidities indicate a lack of hormonal effects during development. Androgens and estrogens both play a crucial role in genital development, and in the event of an imbalance, a range of congenital penile malformations, including hypospadias, micropenis, and ambiguous genitalia, can be observed [19]. A shortened anogenital distance in males with hypospadias as a consequence of a disturbance in embryonic androgen exposure [20] is a clinical observation that supports this notion. Other studies highlight the possible impact of so-called endocrine-disrupting environmental pollutants on the formation of hypospadias. Hypospadias was created in mouse models by the exposure of their mothers to synthetic estrogens. Due to the enormous variances across animals, it remains disputed whether someone has a significant effect on humans [21]. Evaluation Hypospadias is among the most prevalent birth defects in males. A misplaced, ventrally-located urethral meatus; a ventral penile curvature; and an imperfect, dorsally-hooded foreskin are the physical exam criteria for diagnosing an ectopic urethral meatus. Hypospadias is a vast concept, however, and the degree of each symptom can vary significantly across boys. The second and third components are not usually present. Up to 5% of boys suffering from hypospadias have an undamaged prepuce, and the condition is not recognized till the foreskin becomes retractable or diminished during circumcision. 
Since an intact prepuce can conceal the existence of inadequate urethral growth in a newborn infant, it is essential to retract the foreskin before circumcision to prevent losing this oddity and presumably harming the imperfect urethra or expelling foreskin that could be incorporated into a subsequent urethral reconstruction [22]. Initial assessment of males with hypospadias must include a thorough medical history and physical examination. In conjunction with the trio of hypospadias, males may have related abnormalities such as penile torsion, penoscrotal webbing, and penoscrotal displacement, which must be taken into account while planning the surgery. On physical examination, boys with hypospadias may have dysplastic ventral tissue. On examination, a shortage of ventral axis skin may be instantly apparent. The position of the urethral meatus has traditionally been used to determine the degree of hypospadias [7]. Using these criteria, almost 85% of males have a mild distal meatus variation [23]. Proximal hypospadias occurs in almost 15% of individuals and provides the surgeon with various distinct therapeutic issues [9]. A classification of hypospadias based only on the position of the urethral meatus is very simplistic and may even be deceptive. A classification system that incorporates the position of the urethral opening and the degree of penile curvature following degloving results in a more accurate and pertinent diagnosis. The GMS score (glans meatus and penile shaft [curvature]) integrates physical exam outcomes in the operating room, evaluating the quality of the glans and urethral plate, the position of the urethral opening, and the degree of penile curvature, to objectively allocate scores for severity stratification ( Table 2). The GMS score was designed for use in the operating room since office measures are less reliable in determining severity, namely the extent of ventral penile curvature [24,25]. Inguinal hernia, hydrocele, and cryptorchidism are the malformations most frequently linked with hypospadias. Inguinal hernia and/or hydrocele are up to 16% more prevalent [26]. Approximately 7% of individuals with hypospadias have cryptorchidism. With more proximal hypospadias, this jumps to approximately 10% [27]. Further diagnostic testing is recommended, such as an ultrasound of the urinary system and inner genital organs, to identify other nephro-urological anomalies [28]. Up to 14% of all hypospadias and up to half of the perineal hypospadias have a Müllerian remnant, resulting in catheterization difficulties, urinary blockage, or urinary tract infections (UTIs) following repair [29]. The majority of them are seen by ultrasonography. The American Urology Association cryptorchidism guideline suggests that all boys with unilateral or bilateral undescended testes and severe proximal hypospadias receive further testing to rule out a disorder of sexual differentiation (DSD), which is significantly more common in these situations. Assessment and management The primary objective of hypospadias treatment is to restore both aesthetic and functional normalcy. Indications for correcting hypospadias comprise spraying of urine stream, inability to pee in a standing posture, curvature causing difficulties during intercourse, reproductive concerns due to trouble sperm deposition, and decreased pleasure with genital appearance [30]. 
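Returning to the GMS stratification described earlier, the sketch below illustrates how such a composite severity score might be tallied. The 1-4 point range per component and the mild/moderate/severe cut-offs used here are assumptions for illustration only; the published point definitions belong to Table 2 and the original GMS reports [24,25].

```python
# Hypothetical illustration of tallying a GMS-style severity score.
# Component ranges and severity bands below are assumptions, not the
# published values.

def gms_total(glans: int, meatus: int, shaft: int) -> int:
    for name, score in (("glans", glans), ("meatus", meatus), ("shaft", shaft)):
        if not 1 <= score <= 4:
            raise ValueError(f"{name} score must be between 1 and 4")
    return glans + meatus + shaft

def severity(total: int) -> str:
    # Assumed stratification bands for illustration only.
    if total <= 6:
        return "mild"
    if total <= 9:
        return "moderate"
    return "severe"

score = gms_total(glans=2, meatus=3, shaft=2)
print(score, severity(score))  # prints the total and the assumed band
```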
The objectives of surgical repair in males with hypospadias comprise correction of penile curvature to guarantee a long, straight erection; extension of the urethra to enable proper flow of urine and sperm through the glans; and the development of an aesthetically normal penis. The surgeon must evaluate the defect's possible long-term importance and have an informed discussion with the boy's parents about whether surgical intervention should be undertaken. In circumstances when the penis is straight when erect and the urethral opening is sufficiently distal to permit urination while standing, a repair may be of minimal value. To guarantee a satisfactory long-term outcome, continuing into maturity, repair should be performed with the fewest possible operations. This objective is attained by preparing the patient and family for the appropriate surgery, doing an accurate anatomic evaluation, and engaging in an open dialogue regarding the functional outcome and potential consequences. Surgical timing is crucial. The timeframe of the repair should take into account the potential unfavorable psychological consequences of surgery, the anesthetic risk to the child, the degree of penile growth that will assist a satisfactory repair, and the age-related changes in wound healing in boys [31]. The onset of genital awareness occurs at 18 months of life and increases with age [32]. Boys who had repair sooner (typically before 12 months of age) expressed less anxiety and had better psychosexual outcomes than boys who underwent repair later [33]. Boys who get corrective surgery at a younger age may also experience fewer problems, a result that underscores the need for early intervention [33]. In comparison, adult hypospadias surgery may be associated with a greater risk of complications [34]. In 1996, based on this research, the American Academy of Pediatrics Section on Urology advised that surgical intervention for hypospadias repairs be performed between the ages of six and 12 months, with some exceptions in our current practice [35]. Given the seriousness and the necessity for numerous treatments, some standards place the best age for hypospadias correction between six and 18 months [30]. Boys who did not recall the operation were more likely to have a better body image and be content with their overall physical appearance. These findings support early-life surgery to reduce the psychological burden. Anesthetic hazards, age-dependent tissue diameters, and emotional repercussions of genital surgery are all factors that have an impact [28]. When considering surgery for their young boy, many parents inquire about the appropriateness of anesthesia. In the last decade, disturbing discoveries about anesthesia-induced neurotoxicity in the developing central nervous system of rats have been reported. However, scientific concerns cast doubt on the applicability of these findings to humans [36]. At two years of age, neurodevelopmental impairments were not detected in children subjected to anesthesia for hernia surgery, whether it was general anesthesia or regional anesthesia [37]. Therefore, the preoperative surgical evaluation with the boy's parents must include a thorough evaluation of the advantages of surgical repair against an age-appropriate explanation of the risks of general anesthesia. Some anatomical characteristics, such as a small glans width and a thin urethral plate, are associated with greater postoperative problems and provide technical difficulty [38,39]. 
However, penile size is rarely considered a consideration in determining the ideal timing for hypospadias treatment, as penile development is minimal throughout the first few years of life. Therefore, delaying surgery appears to be without benefit [28]. In hypospadias surgery, the use of preoperative androgen stimulation is contentious. Some surgeons suggest testosterone supplementation for increasing anatomical proportions. Preoperative androgen stimulation in the form of dihydrotestosterone (DHT), human chorionic gonadotropin (hCG), or testosterone can be utilized to enhance the size of the glans and penis in preadolescent males [40,41]. It is believed that increasing glans size will reduce stress on the glansplasty and improve the amount of tissue accessible for urethroplasty, hence minimizing the risk of complications. Concerns associated with androgen stimulation in these boys involve abusive tendencies and behavior, enhanced erections, skin pigmentation, and secondary masculine characteristics. All traits are temporary and dissolve spontaneously, approximately six months following the final dosage [41]. Some surgeons omit preoperative testosterone as a consequence of the perceived greater risk of bleeding and enhanced angiogenesis. Others argue that the poor healing process may be attributable to subsequent androgen administration [42]. With more than 300 restorative surgical treatments documented in the present literature, it appears that a general strategy for hypospadias surgical correction is needed [43,44]. A reoperation rate of less than 5% is considered a good indicator of success. Hypospadias complications can occur in 5-10% of patients with mild variants and 15-56% of patients with severe forms, according to most estimates over the short term [3]. Short-term outcomes may not accurately represent the experiences of males throughout their adolescence. An accurate assessment of the long-term aesthetic and functional outcomes of the repaired penis cannot be made during a 12-month follow-up following surgery because psychosexual development and pubertal physical changes have not been completed [45,46]. Using magnification, atraumatic tissue manipulation, delicate equipment, suture materials, and proper hemostasis are the most fundamental prerequisites. In most cases, the anterior and middle hypospadias is corrected in a single procedure. On the other hand, a two-step treatment is frequently required for the posterior variant [3,28]. Intraoperative Assessment Anesthesia does not signal the end of preoperative planning. Following antiseptic preparation and intravenous antibiotic treatment, the genitalia is scrutinized to decide the surgical strategy. Except for extremely severe cases of proximal hypospadias or subsequent surgical interventions, we do not perform cystoscopies on a normal basis. The preoperative evaluation of hypospadias should continue as described. The placement of the urethral meatus, the quality of the ventral shaft tissue, and the level of penile curvature are evaluated while the kid is sleeping. Depending on the extent of penile curvature, a circumferential incision is subsequently created, and the penis is partially or entirely degloved. Care must be taken to generate a mucosal collar by rotating inner glossy preputial tissue from the dorsolateral skin to the ventrum, where it is absent. This will help with ventral shaft skin covering and produce a more aesthetically pleasing outcome [47]. 
Penile Curvature: Diagnosis and Treatment Whether or not hypospadias is present, a curved penile structure (chordee) may develop. The degree of curvature is a crucial factor in deciding between a one-stage and two-stage correction. The choice to treat men's scoliosis is based on their possible functional and aesthetic difficulties as they age into adulthood. Males suffering from untreated congenital curvature or Peyronie disease have been found to experience severe morbidity at even 20-30 degrees of ventral curvature, including difficulty with intercourse and patient displeasure with the look of the penis [48]. Curvature can be caused by reduced ventral skin, a small urethra, or the inherent curvature of the erectile body. Outside of surgery, it is exceedingly difficult to determine the source of curvature. The conclusive diagnosis is made with a simulated erection in the operating theatre after the penis has been degloved. Parents should be queried whether they see a history of penile curvature during erections and may even record this in their children with photographs. Before cutting the skin, the extent of curvature must be evaluated in the operating room. Through the insertion of a catheter into the meatus, the condition of the urethra and ventral skin may be determined. To remove dysplastic dartos tissue, a circumferential incision is created and the penis is degloved beyond the penoscrotal junction. Then, a mechanical erection should be conducted, often with a tourniquet inserted at the penoscrotal junction and a sterile normal saline injection [49]. Alternately, the surgeon can squeeze the corpora at the base of the penis to mimic an erection in tiny boys without the use of injections. In addition to saline injection, prostaglandin injection can be used to generate an erection [50]. Various approaches, such as unassisted visual examination and goniometry, which works as a protractor to reliably quantify the extent of penile curvature, are used to determine the degree of penile curvature. Other technological alternatives, such as tablets and applications, are beginning to appear. Although there is no consensus about the treatment of particular degrees of curvature, the majority of surgeons appear to think that a dorsal plication is adequate for curvatures less than 30 degrees [51]. If the curvature is greater than 30 degrees, the urethra would need to be divided. A corporal curvature higher than 30 degrees at this point necessitates a corporal lengthening surgery that involves transection of the corpus spongiosum distal to the urethra or urethra transection [52]. As these males advance through puberty and experience more considerable penile development, their curvature may increase. Therefore, it is essential to diagnose and fix curvature during the first repair [53]. Distal Hypospadias Repair Repair of distal hypospadias is one of the most frequent surgical operations performed by pediatric urologists, and several surgical approaches have been devised to treat this condition [47]. Different procedures are used to treat this condition. There are a variety of repair operations that may be divided into advancement, tubularization, or the use of grafting and flap surgeries. Here, we are going to discuss the most commonly used surgical techniques in treating hypospadias. The recommended surgical procedures for hypospadias correction may vary depending on the location of the meatus. 
Techniques such as the tabularized incised plate (TIP) urethroplasty, the Mathieu method, the meatal advancement and glanuloplasty incorporated (MAGPI), and the glans approximation procedure (GAP) are utilized to treat distal hypospadias. It is possible to reconstruct the urethra in a single step or two. When feasible, the majority of surgeons now choose a single-stage operation. A single-stage technique is suitable for distal, mid-shaft, and proximal hypospadias without substantial chordee. When a single operation would not be adequate to correct a severe or perineal case of hypospadias with chordee, or when performing a difficult revision hypospadias surgery, a two-stage procedure may be necessary. The preponderance of surgeons now favors tubularization of the urethral plate as a one-step procedure [51]. The most prevalent single-stage technique is a Duplay-type operation with tubularization, with or without the vertical incision in the urethral plate, as described by Snodgrass [54]. The Thiersch-Duplay (TD) Repair The Thiersch-Duplay (TD) repair, pioneered by Thiersch and later Duplay approximately 140 years ago, employs the brilliant notion of urethral tubularization of surrounding tissues distal to the misplaced meatus [55]. They completed their repair by producing a U-shaped incision from the penile shaft using vascularized skin and extending the meatus to the coronal edge. Later, for distant hypospadias, the restoration was covered with two layers of preputial skin [56]. This procedure comprises de-epithelialization of excess preputial skin and fastening across the repair to give a blood supply replacement. The next logical step was to stretch these U-incisions into the distal glans, tabularizing the glans itself over the repair, and providing a more aesthetically pleasing meatus at the penis tip [57]. The TD method requires a glans of sufficient width to accommodate a properly sized neourethral canal, at least one water-resistant layer, and glans flaps that may approximate over the repair. Parallel incisions are made 12 Fr in diameter lateral to the glans groove; the glans wings should be fully and extensively mobilized to enable tension-free covering. Under optical magnification, a dual running subcuticular suture is used to conduct neourethral reconstruction. If the child is circumcised, a de-epithelialized pedicle flap is harvested from the preputial tissue or the more proximal axis and placed over the complete neourethral restoration [58]. If the repair is more proximal, a double dartos flap can be obtained from the dorsal prepuce, with one flap running distally and the other flap running proximally. The circumcision defect is completed by approximating the glans wings into two layers (spongiosum and then epithelium), accompanied by the mucosal collar. The Tabularized Incised Urethroplasty (TIP) The TIP method, a variation of the TD, is a global standard surgical treatment for hypospadias. It was originally described in 1994 by Warren Snodgrass [59]. The surgical techniques are described below. A straight 8F sound is sent into the hypospadias meatus to evaluate skin covering across the urethra. In distal hypospadias, a demarcating incision is performed 2 mm proximal to the meatus, although a U-shaped incision may be prolonged proximally to healthy skin if necessary. Degloving the penis to the penoscrotal union. In every situation, an artificial erection is performed, as even coronal hypospadias is occasionally coupled with penile bending. 
If a minor chordee remains following skin release, dorsal plication is performed to rectify the corpora cavernosa's asymmetry. The tunica albuginea is incised longitudinally on either end just lateral to the neurovascular bundle opposing the point of curvature, followed by the placement of 6-0 Prolene sutures with the knots concealed. There is no need for substantial mobilization of the neurovascular bundle while performing dorsal plication. Next, 1:100,000 epinephrine is injected into the ventral glans at the visible intersection of the glans wings and urethral plate. Then, parallel incisions are made to detach the plate from the glans, and the glans wings are deployed laterally. Depending on its native groove, the plate is just 4 to 8 mm broad at this point. A linear relaxing incision is created from the inside of the meatus to the distal edge of the plate. This incision penetrates the epithelial surface of the plate and spreads deeper into the connective tissues underneath, reaching the corpus cavernosum. With the surgeon and helper maintaining counter-traction with tiny forceps, the plate is observed to be considerably widened upon division until further incisions offer no more mobility. Rather than a knife, tenotomy shears are indicated for this procedure so that an appropriate depth may be achieved without harming the corpus cavernosum. When the urethral plate is naturally grooved, the incision will be shallower than when the plate is naturally flat. Some surgeons perform the relaxing incision first, followed by parallel incisions to establish the plate's breadth. Despite this, this procedure regularly expands the plate to 13 to 16 mm, independent of its arrangement, assuring that the neourethra will be larger than 12F. If bleeding develops, epinephrine diluted 1:1000 is poured over the incision, and pressure is maintained for many minutes. If a tourniquet is required, it might be placed near the base of the penis. Electrocautery shouldn't be used to make holes in the plate or stop bleeding so that the plate's tissues and the corpora cavernosa underneath don't get hurt. Next, a 6F stent is inserted into the bladder for urine diversion following surgery. The urethral plate is subsequently tabularized. To guarantee that the neo-meatus has a wide oval aperture, the initial stitch is always put at the level of the mid-glans, and no more than one or two stitches are removed distally. In this procedure, a single layer of 7-0 chromic catgut suture of full thickness is used. Those who prefer suture materials with a slower absorption rate might try subcuticular closures. A thin dartos pedicle derived from the dorsal prepuce and shaft skin covers the whole neourethra. Glansplasty is then performed, commencing at the cornea and extending distally for a total of three stitches. Even though tiny sutures at the four and eight o'clock locations may evert the meatus somewhat for cosmetic purposes, securing the neourethra to the glans is not essential. The mucosal collar is approached in the midline, and the skin of the shaft is remodeled to resemble the median raphe. Subcuticular sutures are employed to avoid the suture tracts previously observed when 6-0 chromic catgut was put through the skin. After applying a dressing, the child is sent home [54]. Flap Methods The Mathiew procedure is based on a meatal flap. This operation was documented for the first time in 1932, but it appears to have been performed earlier. 
The Mathieu method does not begin with penis degloving; rather, a penile shaft tissue flap is used to generate the neo-urethra. The Mathieu technique begins by determining the extent of the urethral gap from the meatus to the tip of the glans. Along the urethral plate, an equivalent distance is traced on the proximal penile shaft skin. An incision is created along these lines. For the proximal flap, an acceptable width of 7 to 8 mm is measured, with this width tapering to 5 to 6 mm towards the distal limit of the glans. After skin and glanular incisions, the shaft skin is degloved. The underlying tissue of the flap is dissected with care, enabling the flap to be advanced to the top of the glans. The flap is rolled over at the meatus and approximated to the lateral borders of the urethral plate with a running suture. Meatus has reached full maturity. The sutures are covered with a dartos flap of tissue, the glans wings are approached, and then a typical circumferential closure is done [60]. Concerns arise surrounding the vasculature of the utilized flap; if the flap's base is not adequately wide, the blood supply may be disrupted, hence increasing the prospect of fistula and stenosis. Others have expressed alarm at the fish-mouth look of the meatus. This method has been upgraded to the slit-like adjusted Mathieu (SLAM) process, which has shown favorable results, including an enhanced look of the meatus [61]. Advancement Techniques Advancement methods do not necessitate tubularization of the urethral plate and are usually reserved for the most distal glanular meatus with minor penile curvature. Urethromeatoplasty employs the Heineke-Mikulicz concept, in which a longitudinal, vertical incision is made in the ectopic meatus and, subsequently, its margins are closed horizontally. This provides a cosmetically normal meatus and straightens the posterior urethral plate. This approach is especially beneficial in the presence of a stenotic, distal meatus with an accompanying blind-ending pit in the middle of a closed glans. The meatal advancement glanuloplasty would become one of the most often performed procedures to treat glanular hypospadias (MAGPI). The primary purpose of this operation is to distally advance the meatus without technically tabularizing the urethra [62]. The frequency of problems reported following the MAGPI technique complications occurs up to 10% [63]. Meatal stenosis and meatal regression are the most commonly encountered issues, while other uncommon complications consist of urethro-cutaneous fistulas and chordee. The Glans Approximation Procedure (GAP) The glans approximation method is a surgical approach developed for individuals with proximal glanular/coronal hypospadias who have a broad, steep glanular groove and a non-compliant or fish-mouth meatus, which is frequently found in the mega-meatus intact prepuce type [64]. Proximal Hypospadias Repair The treatment of severe hypospadias has proven contentious. This disagreement persists as to the optimal treatment for proximal hypospadias. Numerous hypospadias correction procedures have been published, reflecting the difficulties of achieving optimal surgical outcomes for this illness [65]. Even though one-stage surgery has been shown to work for some types of proximal hypospadias, many people still prefer the more traditional two-stage method when moderate to severe chordee is present so that the length of the penis can be straightened during the first-stage repair. 
One-stage proximal hypospadias correction often entails dorsal plication to restore ventral penile curvature and is one of many urethroplasty procedures. These can be differentiated according to the tissue employed in the repair, namely preputial skin, local skin, and buccal transplant. The preputial island flap is widely recognized as an innovation that Duckett contributed to [66]. In this procedure, the inner prepuce is elevated as a pedicle flap, translated ventrally, and used as an Onlay graft to cover the urethral plate following degloving the penis and straightening the chordee. Neo-urethras have a roof made up of the urethral plate. To prevent stricture development, the onlay excludes circular anastomosis. The inner prepuce is similarly employed as a pedicle flap in the Asopa variant of the technique, but the neo-urethra is left connected to the underside of the foreskin. Consequently, the skin and neo-urethra share a blood supply [67]. Higher complication rates were observed in the Duckett technique, and those included poor aesthetic results marked by excessive ventral bulkiness, penile torsion, and meatal anomalies; fistulas, strictures, total breakdown, and anterior urethral diverticuli formation [68]. The two-stage repair has been the preferred method of most surgeons for treating proximal hypospadias since the treatment of severe ventral penile curvature has shifted toward corporal lengthening techniques. Modern two-stage methods may be broadly classified, despite their many technical variants, into repair with free graft or repair with pedicle flap. The Bracka two-stage repair is a urethroplasty technique that employs a free graft taken from the inner preputial skin or buccal mucosa [69]. STAG is an adaptation of Bracka's initial explanation [70]. In the first step, the penile curvature and urethral plate are rectified. A graft receiving bed is created by extending a midline incision into the glans. On the ventral penile shaft, compressive packing and patterning of the graft can reduce hematoma development and enhance graft uptake. Six months later, a U-shaped incision identical to the Thiersch-Duplay method is created, the urethra is tabularized, and glansplasty is carried out. Layered closure is performed to preserve vascular flow to promote healing [69]. The Byars flap treatment employs extra dorsal preputial skin, which is transferred ventrally with its vascular pedicle during the first surgery, as the urethral scaffold [71]. In the ventral part of the penis, the skin can be connected in the midline or positioned as a single unit, as in the STAG repair. In the second step, the neourethra is sealed by making a large U-shaped incision with a typical Thiersch-Duplay glansplasty. The development of a waterproof, two-layer closure and the establishment of a lumen of uniform diameter along the course of the urethroplasty are important technical elements. To guarantee that the neourethra retains a sufficient blood supply, several phases of closure are necessary. In particular, making a soft dartos bed above the clitoroplasty in the first step will ensure enough blood flow for the urethroplasty in the second step. Regardless of the methodology, it is essential to evaluate the quality of the graft or flap during the second phase of the surgery. As an interim step, if skin deficit or tethering prevents safe closure, a dorsal inlay buccal mucosal transplant may be employed as an interim measure [72]. After graft harvesting, the urethra is rebuilt when all of the tissues are pliable. 
Alternately, the second step of repair can be performed simultaneously with a dorsal buccal graft inlay and a urethroplasty. It is essential to check that the penile curvature is rectified with a subsequent synthetic erection before urethroplasty. If needed, a dorsal plication or repeat corporal lengthening can be done to fix a slight curvature that keeps coming back. Postoperative complication The majority of early postoperative problems are caused by incorrect surgical techniques and may be readily avoided via improved procedure planning and tissue management. These problems include edema, hematoma development, wound dehiscence, flap decay, and fistula formation [73]. To prevent hematoma development, optimal hemostasis must be achieved. As previously stated, adequate tissue manipulation is required to prevent postoperative edema. A compression circumferential covering can also reduce postoperative edema. Long-term outcomes There is a dearth of consistency in the literature when it comes to hypospadias correction procedures, as well as standardized definitions of problems and methods for evaluating outcomes [74]. Many questionnaires have been devised to evaluate the results of hypospadias treatment. Each questionnaire has its pros and limitations. These include the (Pediatric) Penile Perception Score (PPPS), the (Hypoplasia) Objective Scoring System, the (PedsQl), and the Hypoplasia Objective Penile Evaluation Score (HOPE) [75,76]. More than 70% of all patients who have hypospadias treatment are deemed cosmetically pleasing. More than 80% of males with repaired hypospadias had good sexual function [77]. However, these individuals are frequently prevented from initiating sexual interaction and frequently fear mockery due to the look of their genitals [77,78]. Symptoms of the lower urinary tract were twice as prevalent in individuals who had had hypospadias correction compared to controls [77]. After tabularized incised plate (TIP) urethroplasty, an obstructive urine flow pattern is usually observed, which may be due to aberrant elastic properties of the produced tube [79]. Almost 39% of patients who underwent proximal hypospadias surgery showed voiding problems, including hesitation and spraying [77]. Urinary problems (e.g., meatal stenosis, fistula, or urethral stenosis) may emerge years after the initial surgery; consequently, long-term follow-up is required [80]. Conclusions Hypospadias is a frequent disorder with an unknown cause and a wide range of manifestations and degrees of severity. The objective of hypospadias restoration is to restore normal function and appearance. The hypospadias is often repaired between six and 18 months of age. The optimal age for surgical intervention is still a matter of controversy and is impacted by anesthetic risks, tissue size at various ages, postoperative problems, and psychosocial effects. Long-term results for both function and appearance are typically satisfactory, although still inferior to those of males without hypospadias. Various procedures for its surgical intervention have been documented. Functional results are enhanced by early rebuilding. It has been shown that around 25% of people with hypospadias require a second procedure. This disorder is most effectively treated by a multidisciplinary team comprising of a urologist, neonatologist, pediatric surgeon, reconstructive surgeon, endocrinologist, geneticist, nurse, and mental health counselor. 
A thorough inspection of the genitalia should be performed at birth and, of course, before circumcision is planned. Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.
2022-08-03T15:02:41.045Z
2022-07-01T00:00:00.000
{ "year": 2022, "sha1": "965f63c61bdc619fe8035ee6fd2443dda9751e3d", "oa_license": "CCBY", "oa_url": "https://www.cureus.com/articles/104846-hypospadias-a-comprehensive-review-including-its-embryology-etiology-and-surgical-techniques.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "80d3b55f9bf4036d8cf9f7f800bd9a54462e9d5f", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
211834665
pes2o/s2orc
v3-fos-license
Role of Biomarkers in the Prediction of Serious Adverse Events after Syncope in Prehospital Assessment: A Multi-Center Observational Study Syncope is defined as the nontraumatic, transient loss of awareness of rapid onset, short duration and with complete spontaneous recovery, and accounts for 1%–3% of all visits to the emergency department. The objective of this study was to evaluate the predictive capacity of the National Early Warning Score 2 (NEWS2) and prehospital lactate (pLA), individually and combined, at the prehospital level to detect patients with syncope at risk of early mortality (within 48 h) in the hospital environment. A prospective, multicenter cohort study without intervention was carried out on syncope patients aged over 18 who were given advanced life support and taken to the hospital. Our study included a total of 361 cases. Early mortality affected 21 patients (5.8%). The combined score formed by the NEWS2 and the pLA (NEWS2-L) obtained an AUC of 0.948 (95% CI: 0.88–1) and an odds ratio of 86.25 (95% CI: 11.36–645.57), which is significantly higher than that obtained by the NEWS2 or pLA in isolation (p = 0.018). The NEWS2-L can help stratify the risk in patients with syncope treated in the prehospital setting, with only the standard measurement of physiological parameters and pLA. Introduction Syncope is defined as nontraumatic, transient loss of consciousness due to cerebral hypoperfusion, characterized by a rapid onset, short duration, and complete spontaneous recovery. Syncope itself represents a major challenge for both prehospital emergency medical services (EMS) and emergency department (ED) and accounts for 1%-3% of all ED visits [1][2][3]. Syncope is a symptom with many clinical presentations from different underlying causes. It may be the result of altered reflexes, causing recurrent episodes or manifestation of transcendent diseases leading to serious adverse events (SAEs) with a serious outcome, i.e., sudden cardiac death [4]. Different and multidisciplinary guidelines have been developed to manage syncope based on risk stratification [1,5,6]. However, in the prehospital context, clinical decisions should be made immediately, with only few complementary tests (vital signs, electrocardiogram, and capillary blood glucose), so EMS professionals should base their decisions on the most predominant symptom. In addition, the demography of the target population itself means that the majority of patients are older adults (>65 years) [7], with multiple pathologies and comorbidities, often after falls associated with syncope and in many cases with polypharmacy [8,9], i.e., highly complex patients. The prehospital context is integrating different procedures and solutions that can help health professionals in the decision-making process, among them biomarkers, understood as the use of early warning scores (EWSs), point-of-care testing (POCT), or the combined use of both [10]. Within the EWSs, the National Early Warning Score 2 (NEWS2), is the most widely used internationally, is validated in the prehospital context, and has proven its usefulness in very diverse clinical contexts [11][12][13]. The NEWS2 is determined from simple clinical observations (respiration rate, oxygen saturation, supplemental oxygen, temperature, systolic blood pressure, heart rate, and level of consciousness). Aggregate weighting produces a final score and a level of risk that determines the emergency response (see Table 1). 
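To make the aggregation concrete, the sketch below tallies a NEWS2 score from a single set of observations. The band cut-offs follow the Royal College of Physicians NEWS2 chart (SpO2 Scale 1) rather than being taken from this study, so they should be checked against Table 1 before reuse.

```python
def news2(rr, spo2, on_oxygen, temp, sbp, hr, consciousness):
    """Aggregate NEWS2 score (SpO2 Scale 1), per the RCP 2017 chart.

    consciousness: 'A' for alert; anything in 'CVPU' (new confusion,
    voice, pain, unresponsive) scores 3.
    """
    score = 0
    # Respiration rate (breaths/min)
    score += 3 if rr <= 8 else 1 if rr <= 11 else 0 if rr <= 20 else 2 if rr <= 24 else 3
    # Oxygen saturation (%)
    score += 3 if spo2 <= 91 else 2 if spo2 <= 93 else 1 if spo2 <= 95 else 0
    # Supplemental oxygen
    score += 2 if on_oxygen else 0
    # Temperature (Celsius)
    score += 3 if temp <= 35.0 else 1 if temp <= 36.0 else 0 if temp <= 38.0 else 1 if temp <= 39.0 else 2
    # Systolic blood pressure (mmHg)
    score += 3 if sbp <= 90 else 2 if sbp <= 100 else 1 if sbp <= 110 else 0 if sbp <= 219 else 3
    # Heart rate (beats/min)
    score += 3 if hr <= 40 else 1 if hr <= 50 else 0 if hr <= 90 else 1 if hr <= 110 else 2 if hr <= 130 else 3
    # Level of consciousness (ACVPU)
    score += 0 if consciousness == "A" else 3
    return score

# Example observation set; the aggregate then maps to the low/medium/high
# clinical risk bands described in Table 1.
print(news2(rr=22, spo2=93, on_oxygen=False, temp=36.4, sbp=98, hr=112, consciousness="A"))
```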
High scores are associated with an increase in prehospital advanced life support (PALS), admission to the intensive care unit (ICU), and early mortality [14]. One of the most promising biomarkers in the prehospital context, which can be employed at bedside, is the point-of-care lactate (pLA), a reliable indicator of hypoperfusion states caused by anaerobic metabolism [16][17][18]. There is a growing interest in analyzing the usefulness of NEWS2 and pLA, either in isolation [19,20], or together [21], but no studies have assessed the prognostic use of these biomarkers in patients with syncope, and less so in the prehospital context. The objective of this study was to evaluate the predictive capacity of the NEWS2 and pLA (individually and in combination) at the prehospital level to detect patients with syncope at risk of early mortality (within 48 h) in the hospital environment. Study Design Between April 2018 and June 2019, we conducted a prospective, multicenter cohort study without intervention in adults over 18 years of age that were attended consecutively by the EMS and evacuated in advanced life support (ALS) to their reference hospitals with a main prehospital diagnosis of syncope. The present study was approved by the Research Ethics Committee (REC) of the public health system of Castile and Leon (REC number: #PI 18-010, #PI 18-895, #PI 18-10/119, #PI MBCA/dgc, and #PI 2049). All patients (or guardians) signed informed consent. This study is reported in line with the STROBE statement [22]. Study Setting This study was carried out in the EMS of the public health system of the Community of Castile and Leon (Spain), with the participation of six ALS distributed in the provinces of Burgos, Salamanca, Segovia, and Valladolid, with a reference population of 1,351,962 inhabitants. The EMS operate 24 h a day 365 days a year. Calls are received by a technician (nonhealth personnel), who collects administrative and geo-location data. A medical doctor (MD) filters the calls and determines the most appropriate resource for each situation. ALS are made up of two paramedics, an MD and an emergency registered nurse (ERN), performing standard advanced life support maneuvers at the scene or en route. Three hundred, forty-three cases allow us to estimate the percentage of deaths expected by syncope, with an error not exceeding 2.5% (with a 95% confidence level). For this, we have assumed a loss percentage of around 15%. All patients included in the study were referred to hospitals belonging to the public health system (Burgos University Hospital, Segovia Hospital Complex, Salamanca University Assistance Complex, Rio Hortega University Hospital, and Valladolid University Clinic), all of them with extensive surgical capacity and ICU. Population To identify eligible patients, the sample was recruited from among all calls for adult care demand (over 18 years old) during the study period, which required evacuation to the ED in ALS with a prehospital main diagnosis of syncope. We excluded cases of cardiorespiratory arrest, patients in terminal status or with acute psychiatric pathology, pregnant women, patients evacuated by other means of transport or discharged in situ, patients in whom the first set of vital signs were incomplete and did not allow to calculate the NEWS2, and cases in which it has not been possible to determine pLA after two attempts. 
Before data analysis, we also excluded patients for whom it was not possible to obtain informed consent, those who were assisted more than once (only the first chronological event was counted), and those for whom follow-up via the electronic medical record was not possible.

Study Protocol

The review protocol of this study was registered with ICTRP (doi.org/10.1186/ISRCTN17676798). The principal investigator (FMR) trained all members of the research group on the objectives of the study, the standardized way of obtaining the set of vital signs, the use of the electromedical equipment, and the calculation and interpretation of the NEWS2 according to the recommendations of the Royal College of Physicians [15]. Similarly, a procedure for determining pLA was developed, with specific training on the operation, cleaning, maintenance, and calibration of the equipment. The traceability of all test strips used in the study was monitored by checking expiration dates, serial numbers, and batch numbers. A standardized case form was used (the clinical history ordinarily used by the EMS), in which the ALS ERN recorded the set of vital signs and the pLA value. All prehospital clinical data analyzed refer to the first contact of the team with each patient. Next, the unit's MD recorded demographic and administrative data, the times of arrival, assistance, and evacuation, and other prehospital care variables: electrocardiographic rhythm and the need for PALS at the scene or during transfer (orotracheal intubation, use of an external pacemaker, or vasoactive drugs). Within 30 days of the index event, an associated researcher from each hospital obtained hospital care data by reviewing the patient's electronic medical history. These data include hospital and/or ICU admissions and mortality within 48 h of the index event. All patient data were recorded electronically in a database created for this purpose. Prior to statistical analysis, the database was cleaned by means of logical tests, range tests (for the detection of extreme values), and consistency checks. Subsequently, the presence and distribution of unknown values for all variables were analyzed. The case registration form was tested to eliminate ambiguous elements and guarantee the robustness of the data collection instrument.

Data Abstraction

The main outcome variable was early hospital mortality (within the first 48 h) from any cause, in line with previous studies [11,12]. To calculate the NEWS2 (Table 1), we registered the respiratory rate, temperature using a ThermoScan® PRO 6000 tympanic thermometer (Welch Allyn, Inc., Skaneateles Falls, NY, USA), and oxygen saturation, systolic blood pressure, and heart rate using a LifePAK® 15 monitor (Physio-Control, Inc., Redmond, WA, USA). Mental state was assessed with the Glasgow Coma Scale (GCS). Confusion was defined as a GCS score of less than 15 points or a new confusional state en route. To obtain pLA values, we used an Accutrend Plus measuring device (Roche Diagnostics, Mannheim, Germany) with a measuring range of 0.8-21.7 mmol/L. The whole procedure consists of three phases: first, the test strip is inserted after switching on the instrument; second, a drop of venous blood (extracted with a 1-mL syringe) is deposited on the test strip (15-40 µL); and third, the lid is closed and a result is obtained after 60 s. Between blood collection and placement of the sample in the device, no more than 1 min should pass.
All measuring devices were calibrated every 50 measurements, always by the same researcher, using the Accutrend® BM-Control-Lactate control solution (Roche Diagnostics, Mannheim, Germany). Secondary outcomes included advanced life support maneuvers during prehospital assistance and/or transfer and the need for ICU admission. To obtain the combined value of the NEWS2 and pLA, the numerical value of the pLA was added to the numerical value of the NEWS2, generating the new scale NEWS2-L. Quantitative variables are described as medians with interquartile ranges (IQR), and qualitative variables as absolute frequencies with their 95% confidence intervals (95% CI). The Mann-Whitney U-test was used to compare the locations of quantitative variables. The Chi-square test for 2 × 2 and/or contingency tables of proportions was used to determine the association or dependence between qualitative variables; when necessary (more than 20% of cells with expected values below five), Fisher's exact test was used.

Data Analysis

The area under the curve (AUC) of the receiver operating characteristic (ROC) of the NEWS2, the pLA, and both in combination was calculated for mortality at 48 h, PALS, and the need for admission to the ICU. We also determined the score that offered the greatest joint sensitivity and specificity in each case, as well as the positive predictive value (PPV), negative predictive value (NPV), positive probability ratio (PPR), and negative probability ratio (NPR) for these scores. In all tests, a confidence level of 95% was used and a p-value of less than 0.05 was considered significant. The data are presented according to the Standards for Reporting Diagnostic Accuracy 2015 statement [23].

Patient Baseline

Over the study period of 14 months, 720 cases were screened for eligibility and 361 patients fulfilled all inclusion criteria (Figure 1). Median age was 74 years (IQR 62-83 years), and 164 (54.4%) of the participants were women. The median times of arrival, assistance, and evacuation were 9 min (IQR 7-11 min), 28 min (IQR 22-35 min), and 9 min (IQR 6-12 min), respectively, without significant differences between survivors and nonsurvivors. Characteristics of the patients who were diagnosed with syncope in the prehospital setting are presented in Table 2. Nonsurvivors were significantly older, had higher NEWS2 and pLA scores, and more often required PALS and admission to the ICU.

Prognostic Accuracy of the Scores

The prognostic precision of the scores for predicting early mortality is represented in Figure 2a. The NEWS2-L, with an AUC of 0.948 (95% CI: 0.88-1), had the best performance. The comparison of the curves was statistically significant for the NEWS2-L with respect to the other scores studied (p = 0.018).
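As an illustration of how the NEWS2-L and the accuracy measures described under Data Analysis can be computed, the following is a minimal Python sketch on invented data; the arrays, variable names, and the cut-off used here are placeholders for illustration, not the study dataset.

    import numpy as np

    # toy data: prehospital NEWS2, lactate (mmol/L) and 48-h mortality flag (illustrative only)
    news2   = np.array([3, 7, 2, 10, 5, 12, 1, 8])
    pla     = np.array([1.2, 3.5, 0.9, 6.0, 2.1, 7.4, 1.0, 2.8])
    died48h = np.array([0, 0, 0, 1, 0, 1, 0, 0])

    news2_l = news2 + pla            # NEWS2-L: the numerical value of pLA added to the NEWS2
    pred    = news2_l >= 9.5         # classify with a mortality cut-off (illustrative)

    tp = np.sum(pred & (died48h == 1)); fp = np.sum(pred & (died48h == 0))
    fn = np.sum(~pred & (died48h == 1)); tn = np.sum(~pred & (died48h == 0))
    sens, spec = tp / (tp + fn), tn / (tn + fp)
    ppv, npv   = tp / (tp + fp), tn / (tn + fn)
    ppr = sens / (1 - spec)          # positive probability (likelihood) ratio
    npr = (1 - sens) / spec          # negative probability (likelihood) ratio
    print(sens, spec, ppv, npv, ppr, npr)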
With respect to the early detection of the need for PALS, the best performing score was also the NEWS2-L, with an AUC of 0.842 (95% CI: 0.77-0.91), although it did not show significant differences from the NEWS2 (comparison of curves, p = 0.540). Figure 2b displays the different AUCs with their confidence intervals and p-values. For predicting the risk of needing the ICU, the NEWS2, pLA, and NEWS2-L provided comparable results (p-value greater than 0.05 for all scales). The score with the best prognostic capacity was again the NEWS2-L, with an AUC of 0.809 (95% CI: 0.71-0.90), as can be seen in Figure 2c.

Cut-off Points of the NEWS2-L

The cut-off points for 48-h mortality, PALS, and ICU admission were 9.5 points, 6.9 points, and 10.3 points, respectively. The capacity of the NEWS2-L to predict 48-h mortality, with an odds ratio of 86.25 (95% CI: 11.36-645.57) and a negative probability ratio of 0.06 (95% CI: 0.01-0.40), stands out. For all outcomes studied, very high NPVs were maintained. The results for the different outcomes of the NEWS2-L can be seen in Table 3. Table 3. Cut-off points for combined sensitivity and specificity with the best score (Youden's test) on the NEWS2-L for two-day mortality, PALS, and ICU.

Discussion

In this multicenter observational cohort study, we found that the NEWS2-L performed best, equally for cases of early mortality, PALS, and the need for ICU admission.

Comparison with Previous Studies

Our results are in line with studies that have analyzed prognostic scores and mortality data for syncope patients [24][25][26][27][28], although those were conducted in the ED. Other work has analyzed the combined use of EWSs and lactate in other contexts [29][30][31][32] and also obtained areas under the ROC curve between 0.830 and 0.914 for detecting two-day mortality. Our data confirm and add to previous studies: this is the only study specifically developed to detect high-risk patients with syncope in the prehospital context, and it provides very useful information on the need for prehospital advanced life support or ICU admission.

Managing Prehospital Syncope

Current guidelines for the management of syncope emphasize the initial clinical and physical evaluation, as well as the early performance of an electrocardiogram and analytical tests, with the fundamental objective of not underestimating potentially risky syncope situations [1,4,33,34]. The new score, the NEWS2-L, is in line with the decision rules implemented in the current guidelines, with a high specificity (81.2%; 95% CI 76.7-85.0), higher than that of other risk stratification scores such as the FAINT score [35,36] (73.0%; 95% CI 0.66-0.74). This makes the score an ideal tool for the detection of syncopal episodes that could initially be classified as mild but that actually carry a high risk of deterioration. Therefore, we consider that introducing both the NEWS2 and the NEWS2-L into the management recommended by the current guidelines could aid decision-making in the first contact with the patient, both when assessing the true severity of the condition and when identifying the need for hospital transfer [37,38].
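The cut-off points in Table 3 are selected by maximizing Youden's index. Below is a minimal sketch of that selection with scikit-learn; the function calls are standard sklearn API, but the arrays are invented placeholders rather than the study data.

    import numpy as np
    from sklearn.metrics import roc_curve, roc_auc_score

    y_true = np.array([0, 0, 1, 0, 1, 0, 0, 1])                        # 48-h mortality (illustrative)
    score  = np.array([4.2, 10.5, 16.0, 2.9, 19.4, 7.1, 2.0, 10.8])    # NEWS2-L values (illustrative)

    fpr, tpr, thresholds = roc_curve(y_true, score)
    j = tpr - fpr                      # Youden's J = sensitivity + specificity - 1
    best = np.argmax(j)
    print("AUC:", roc_auc_score(y_true, score))
    print("best cut-off:", thresholds[best], "sensitivity:", tpr[best], "specificity:", 1 - fpr[best])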
Implications

Due to the ease of application and interpretation of the NEWS2-L, this score is suitable for use by nonexpert health personnel and can help discriminate the severity of the condition very early in different contexts. Not requiring electrocardiogram interpretation, more specific analytical tests, or the presence of guiding symptoms could therefore be an advantage over scores such as the FAINT score (history of heart failure, history of cardiac arrhythmia, initial abnormal ECG result, elevated pro B-type natriuretic peptide, and elevated high-sensitivity troponin T) [35], the HEART score (history, ECG, age, risk factors, and troponin) [26], or the OESIL score (abnormal ECG, age >65 years, history of cardiovascular disease) [39]. The social alarm generated when a person suffers syncope, often because of the florid and unspecific symptoms that precede or accompany the condition (decreased level of consciousness, chest pain, seizures or muscle stiffness, vomiting, profuse sweating, pale skin, incontinence, etc.), makes this pathology one of the most frequent reasons for EMS calls [40,41]. Our data indicate that a NEWS2-L score of 9.5 or higher implies a high probability of death within 48 h, and a score of 6.9 or higher was associated with a higher frequency of advanced life support maneuvers at the scene or en route; the score can therefore help detect high-risk patients who may need advanced maneuvers.

Limitations

This study has several potential limitations. The first is possible patient selection bias, as the study was carried out based on opportunity criteria, for a limited period of time, and only in those patients with a prehospital diagnosis of syncope who were evaluated and evacuated in ALS units. Second, the main outcome variable is in-hospital mortality from any cause within the first 48 h; deaths occurring outside this time window or outside the hospital were not counted. Future studies could analyze medium- and long-term mortality, as well as out-of-hospital mortality, but this study focused on this time limit because it captures deaths occurring acutely. Finally, the small sample size allows for preliminary results but is insufficient to perform an external validation of the score. Hence, prospective multicenter studies with adequate power, in different geographical contexts and with different EMS, will be necessary to validate the routine use of the NEWS2-L in the prehospital context for the early detection of deterioration in patients with syncope.

Conclusions

In summary, the NEWS2-L can help identify, in the prehospital setting and with only the standard measurement of physiological parameters and pLA, patients who have suffered syncope and are at high risk of needing the ICU or of early mortality. Nevertheless, a more elaborate diagnostic evaluation is necessary to stratify the risk of patients after syncope and to allow personalized management of the different treatment options. EMS should incorporate EWSs and point-of-care tests into their routine procedures to assist decision-making in clinical processes.
2020-03-04T14:04:35.794Z
2020-02-28T00:00:00.000
{ "year": 2020, "sha1": "da2210d03e7909a4e9c7c3c91fa793cdbe42be14", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2077-0383/9/3/651/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "5a5c9eae314bfa600af1c49c88e79d8c3a8eddf1", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
96440876
pes2o/s2orc
v3-fos-license
Combustion Synthesis of Chromium Nitrides

This paper explores different modes of synthesis by combustion of chromium-nitrogen and ferrochromium-nitrogen alloys. The SHS (self-propagating high-temperature synthesis) of chromium nitrides and ferrochromium nitrides was performed. Regular patterns in the layer-by-layer and surface modes of Cr combustion in nitrogen were investigated, as was the mechanism of non-stationary combustion during the synthesis of chromium nitrides. Regular patterns of chromium and ferrochromium combustion in the cocurrent filtration mode were analyzed, and the possibility of intensifying the SHS process using the pressure filtration principle was assessed. The process of chromium powder combustion in a cocurrent flow of nitrogen-containing gas in the range of specific flow rates from 20 cm3/(s·cm2) was investigated. Pressure filtration intensifies the propagation of the combustion wave in the Cr-N2 system: the combustion rate increases while the degree of nitridation decreases. We observed superadiabatic heating modes when the reaction zone was blown with pure nitrogen or with a nitrogen-argon mixture. The quenching (tempering) mode realized during pressure filtration allows the high-temperature, single-phase, non-stoichiometric Cr2N phase to be retained.

Introduction

Alloying with nitrogen is a technique used in modern metallurgy to improve the properties of many grades of steel [1]. The nitrogen content required by specifications for such steels may vary in the range from 0.0001% to 1% N. Good examples of modern steels in which nitrogen is an indispensable component are transformer steels (~0.01% N), rail steels (~0.012% N), high-strength low-alloy steels (~0.015% N), high-alloy steels for retaining rings in generators (0.4-0.9% N), valve steels for engines (0.3-0.6% N), and high-strength stainless steels with reduced nickel content (0.08-0.45% N). Currently, the most well-established steels are austenitic stainless steels with Cr-Mn and Cr-Mn-Ni bases [2]. Such nitrogen-containing steels are used in the construction, automotive, chemical, and food industries; for example, they are used in making car parts, household appliances, and kitchenware. New grades of nitrogen-containing steels are being actively developed. In fact, alloying with nitrogen alone can simultaneously improve the strength, toughness, and corrosion resistance of a metal. The Earth's atmosphere is the only available and virtually infinite natural source of nitrogen. Nitrogen is produced by air enrichment at cryogenic temperatures; this is a very mature and economically efficient technology used at a large industrial scale. Nitrogen is introduced into steel by means of different master alloys. To produce stainless nitrogen-containing steels, chromium nitrides or ferrochromium nitrides are used [3]. Molten master alloys are obtained by treating liquid metal with nitrogen, which produces highly dense ingots. The maximum amount of nitrogen that can be achieved in ingots of such master alloys is determined by its solubility under the specific alloy treatment conditions. Virtually 100% uptake of nitrogen is an advantage of molten master alloys. Currently, however, molten master alloys have limited use due to their low nitrogen content.
Sintered nitrogen-containing master alloys are obtained by the high-temperature treatment of powdered alloys in a nitrogen-containing atmosphere at temperatures below the melting temperature of the initial alloys and products [4]. In such master alloys, the maximum content of nitrogen is determined by its content in the higher nitride of the nitrided metal. Sintered materials have high porosity, are brittle, can be easily ground, and are suitable as a filler for flux-cored wires. The main advantage of steel nitridation with different alloying additives is its versatility: such master alloys can be used in steelmaking with all types of steel-melting devices at metallurgical plants with different equipment capabilities. Using nitrogen-containing alloys, the entire range of steels can be produced, from those that are micro-alloyed with nitrogen to those with maximum nitrogen content [5].

To our knowledge, the first study of the influence of nitrogen on alloys of the chromium-iron system was undertaken by Frank Adcock in 1926, as mentioned in [6]. Adcock studied nitrogen dissolution in iron-chromium alloys containing 0.21-58.5% Cr and in pure metallic chromium. The research demonstrated that nitrogen improved the hardness of Cr-Fe alloys and had a positive impact on their structure.

For the purpose of smelting stainless steel and chromium steels of other grades, nitrided Fe-Cr is normally used [7]. In the production of Cr-Ni superalloys and nickel-based alloys, nitrided Cr (chromium nitride) is used. Chromium nitride is recommended for use in the smelting of high-nitrogen steels characterized by a low content of carbon and other additives. Powders of Cr and its compositions can be used as initial materials for nitridation, with gaseous nitrogen or ammonia as the source of nitrogen. The only economically viable option for the industrial production of nitrided Cr and its alloys has been the treatment of Cr powder in vacuum furnaces.

Molten nitrogen-containing master alloys of chromium are produced by the treatment of liquid chromium alloys with nitrogen, and sintered master alloys by the treatment of solid alloys. The maximum amount of nitrogen that can be taken up by molten master alloys is determined by its solubility under the treatment conditions, and for sintered master alloys this amount is determined by the nitrogen content of CrN. In theory, when nitriding high-purity chromium powder, a nitride with ~21.2% N could be obtained, and for ferrochromium with 70% Cr, 15.8% N could be possible. In practice, these values can hardly be achieved, especially in large-scale production [8]. The resulting nitrogen content is influenced by impurities, limited treatment times, evaporation and contact melting processes, etc. In this research, we study the furnaceless synthesis of chromium and ferrochromium nitrides in the filtration combustion mode, where the nitrides are to be used as master alloys for stainless steels and alloys.
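As a rough cross-check of the theoretical nitrogen capacities quoted above, the maximum nitrogen content for full conversion of the chromium fraction to CrN follows from simple stoichiometry. Below is a minimal Python sketch; the alloy compositions are taken from the text, the assumption that only Cr binds nitrogen (one N per Cr) is the stoichiometric idealization, and the atomic masses are standard values.

    def max_nitrogen_content(cr_mass_fraction, m_cr=52.00, m_n=14.01):
        # moles of Cr per 1 g of alloy; full conversion to CrN binds one N atom per Cr atom
        n_cr = cr_mass_fraction / m_cr
        mass_n = n_cr * m_n
        return mass_n / (1.0 + mass_n)   # mass fraction of N in the fully nitrided product

    print(f"pure Cr:               {max_nitrogen_content(1.00):.1%}")   # ~21.2% N, as quoted for CrN
    print(f"ferrochromium, 70% Cr: {max_nitrogen_content(0.70):.1%}")   # ~15.9% N, close to the ~15.8% quoted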
Methodology of the Experiment

Cr powder produced by JSC POLEMA (Tula, Russia) was used in this research; the powder was obtained by means of calcium hydride reduction. Additionally, Cr and Fe-Cr powders produced via the aluminothermic process by the Kluchevskiy Ferroalloys Plant (624013, Dvurechensk, Russia) were used (see Table 1). For nitridation in the cocurrent filtration mode, Cr powder (particle size in the range of 63-80 µm) obtained by the calcium hydride process was used. The content of nitrogen was determined on a TS-600 analyzer (LECO Corporation, St. Joseph, MI, USA). The phase composition of the combustion products was determined on an XRD-600 X-ray diffractometer (Shimadzu Corporation, Kyoto, Japan). Cr and Fe-Cr were nitrided via two methods. The first method was synthesis in a laboratory high-pressure SHS reactor (Figure 1) in the natural nitrogen filtration mode. The second method was synthesis in a laboratory flow-type SHS reactor (Figure 2) in the forced nitrogen filtration mode. Natural filtration occurs spontaneously due to the pressure differential resulting from the consumption of gas in the combustion front. If the gaseous reactant (nitrogen) is forced through a porous sample at a given flow rate, this is forced filtration; in this case, the side surfaces of the porous samples are gas-tight and nitrogen is supplied only through the ends of the sample [9].

For the synthesis in the high-pressure reactor, a batch of Cr or Fe-Cr powder was placed in a gas-permeable container (metal mesh), and a thermocouple (tungsten-rhenium W-Re 5/20 wire) was inserted into the bottom part of the container (Figure 3). From the top side, a tungsten coil heating element was brought into contact with the powder. The sample was placed into the unit, and the unit was then evacuated and filled with nitrogen (purity 99.9%) until the pressure reached 0.2-10 MPa. The interaction of the powder with the nitrogen was initiated by applying an electric pulse to the tungsten coil. A combustion wave was then formed by the exothermic reaction, which yielded Cr2N and CrN, and the combustion wave spread along the sample in the lengthwise direction.

The flow-type reactor setup comprises three main parts: the combustion chamber (1-9), the gas feeding system (10-18), and the combustion parameters reading system (19-21). The setup allows combustion processes to be analyzed in a powder layer with a diameter of 10-30 mm, a height of up to 0.2 m, a pressure of up to 0.2 MPa at the inlet, and a maximum gas flow rate of up to 0.83 × 10^-3 m3/(s·m2). A batch of Cr powder (7) was poured into the quartz tube (3). The upper layer of the powder was brought into contact with the heating coil (5). The tube was then sealed. Into the bottom part of the tube, a layer of aluminium oxide powder (6) was poured in order to cool down the escaping gases. The nitrogen-containing gas from the tank (10) was fed into the quartz tube using the pressure reducer valve (11). An electric pulse was fed into the coil (8) to initiate an exothermic reaction in the surface layer of the Cr powder. A flat combustion front was formed and started spreading along the sample in the lengthwise direction. The pressure was monitored via readings from the manometer (13) and the pressure sensor (14). The gas flow rate at the reactor inlet and outlet was adjusted using Red-y type electronic micro flow meters (16) and (17). The reaction temperature was measured with the W-Re BP5/20 thermocouple (9); the signal from the thermocouple was fed into the recorder device via an amplifier. The refrigerator (18) served to cool down the gaseous products of the reaction before they entered the flow rate meter (17). The process parameters were read by the TRM-138 multi-channel meter-regulator (20) and then processed by the computer (21).

Natural Filtration Mode

The rates of combustion of Cr obtained by the calcium hydride process differ from those of Cr obtained by the aluminothermic process (particle size below 40 µm) (Figure 4). The maximum combustion temperature read from the thermocouples increased from 1220 °C (0.1 MPa) to 1500 °C (10 MPa) and was close to the estimated combustion temperature (Figure 5).
Layer-by-Layer and Surface Combustion of Chromium in Nitrogen

When filtration obstructions occur (the formation of combustion products prevents the passage of the gas needed to maintain combustion), the nitridation mode switches from layer-by-layer to surface nitridation. This change manifests itself as a dependence of the combustion rate on the diameter of the samples, which was not observed before the change occurred (Figure 6a). At high pressures, changes in the diameter do not lead to changes in the combustion rate, and the content of nitrogen in the combustion products also remains constant (Figure 6b). At low pressures, an increase in the diameter leads to a reduction of the combustion rate and of the degree of Cr nitridation. At 0.4 MPa the combustion rate decreases continuously, whereas at 1.0 MPa a considerable reduction of the combustion rate is only observed for diameters greater than 40 mm. An elemental analysis of the combusted samples has shown that, at high pressures, the content of nitrogen is virtually constant throughout the cross-section in samples of different diameters. When the pressure is lowered, a uniform nitrogen distribution throughout the cross-section is only retained in samples of minimal diameter. As the sample dimensions increase, the concentration of nitrogen in the center becomes significantly lower than that in the surface layers. For samples with a diameter of 50 mm (at a pressure of 0.4 MPa), the difference between the nitrogen concentrations in the central and surface layers is 2-3% N. The results of X-ray diffraction analysis of the samples after combustion are in agreement with the results of elemental analysis: in the samples burned at 5 MPa, only the δ-CrN phase was detected, while at low pressures β-Cr2N was the predominant phase.

A more prominent manifestation of uneven phase distribution was found in quenched samples. By quenching we imply a sudden cut-off of the nitrogen flow into the combustion zone. Quick depressurization and the subsequent injection of inert gas (argon) into the reaction zone stopped the combustion. Thus, an intermediate state of the sample is conserved, which allowed us to analyze the processes taking place during each respective stage of combustion. Elemental analysis of samples of different sizes that burned at high pressures has shown that the concentration of nitrogen throughout the cross-section stayed virtually constant. The same pattern is observed in the samples of minimal diameter that burned at low pressures. The situation was different for a sample of maximum diameter that combusted at low pressure: in the surface layers of that sample, the same amount of nitrogen was found as in the samples of lesser diameter, while the concentration of nitrogen decreased rapidly towards the center. In the axial area of the sample, virtually no nitrogen was present.

Nitridation Process Stages when Chromium Is Burned in Nitrogen

Figure 7 shows the dependence of the N content in the combustion products of Cr on the pressure. Curve 1 in Figure 7 is drawn for slowly cooling samples, and Curve 2 for quenched samples. A sample with interrupted combustion consists of two parts: the upper sintered part is the burned part and the lower non-sintered part is the initial Cr. The half-burned samples obtained in this way were subjected to layer-by-layer elemental and phase analyses. The initial chromium contains at least 0.01% N. Between the sintered and non-sintered parts of the sample, a thin non-sintered layer (~2 mm) with altered color was found, with nitrogen concentrations of up to 3% N (Figure 8). X-ray diffraction analysis showed that Cr forms the base of this layer, with a significant amount of Cr2N. In the first layers of the sintered area of the quenched sample (3-5 mm thick), the concentration of nitrogen increases to ~9-11% N. The predominant phase here is Cr2N with traces of Cr and CrN. In the subsequent layers, the traces of Cr disappear from the X-ray diagrams and the share of CrN increases. At a distance of ~20 mm from the quenched combustion front, the contents of Cr2N and CrN are approximately the same, and the concentration of N stays within the range of 14-16%. The share of the mononitride increases farther from the transitional layer. In Figure 7, Curve 2 characterizes the concentration of nitrogen in the layers that are ~10 mm away from the transitional layer. Thus, during the nitridation of chromium in the layer-by-layer combustion mode, the formation of the elemental and phase composition of the combustion products is finalized in the afterburning zone. The reason for this is the incomplete conversion in the combustion zone and the retention of the permeability of the samples behind this zone.

Microstructure of Nitrided Chromium

Figure 9 shows a typical microstructure of sintered nitrided chromium synthesized at 0.2 and 5 MPa. After combustion, the samples retained high porosity; the particles retained their initial shape and were weakly bound, with no traces of melting. After nitridation at 0.2 MPa, the chromium particles had a two-phase microstructure, whereas after nitridation at 5.0 MPa the microstructure was single-phase. These results are in agreement with the data of X-ray diffraction analysis: the products of chromium combustion at 5.0 MPa are almost entirely single-phase and comprised exclusively of CrN, while the products of chromium combustion at 0.2 MPa were two-phase (Cr2N + CrN). Cr2N is represented by the lighter-colored areas.

Nitridation of Ferrochromium

Figure 10 shows the dependence of the combustion rates of Fe-Cr powder with different particle sizes on the nitrogen pressure. The finer the powder, the higher the rate of nitridation and the greater the amount of nitrogen absorbed. In the investigated pressure range (1.0-10.0 MPa), the combustion temperature measured using the thermocouple was 1220-1300 °C, which is significantly lower than the estimated combustion temperature for ferrochromium (~1680 °C). Figure 11a shows a typical profile of Fe-Cr nitridation; such a profile suggests that there is an extended after-reaction stage. Figure 11b shows the curves of the dependence of the combustion rates of Fe-Cr with different particle sizes on the initial powder temperature (T0). As T0 grows, the rate of Fe-Cr combustion significantly increases. When T0 reaches 400 °C, nitridation of coarser powder (0.2 mm) becomes possible.
The metallographic analysis of the combustion products confirmed the absence of traces of melting in Cr-Fe-N. The solid-phase mechanism normally improves the degree of nitridation of both metals and ferroalloys. The larger part of the nitrogen was absorbed by the alloy directly in the synthesis wave, in the layer-by-layer combustion mode. The unreacted part of the ferroalloy then interacted with the nitrogen in the volumetric combustion mode. The high porosity retained behind the combustion front promotes this after-reaction, and this afterburning may greatly increase the content of nitrogen in the product. Nevertheless, we could not reach maximum nitridation by nitriding ferrochromium. For an alloy with 75.6% Cr, the concentration of nitrogen is estimated at ~16.8% N for a reaction whereby the chromium turns into CrN1.0. However, the maximum measured content of nitrogen in ferrochromium was ~13.0% N. Thus, the degree of nitridation was greater than 77%.
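As a quick arithmetic check of these figures (a back-of-the-envelope estimate, not the authors' exact calculation): per 100 g of alloy containing 75.6 g Cr, full conversion to CrN binds 75.6/52.0 ≈ 1.454 mol of nitrogen atoms, i.e. about 1.454 × 14.0 ≈ 20.4 g N, giving a theoretical maximum of 20.4/(100 + 20.4) ≈ 16.9% N, consistent with the ~16.8% quoted. If the degree of nitridation is taken as the ratio of the measured to the theoretical nitrogen content (one common convention, assumed here), it equals 13.0/16.8 ≈ 0.77, i.e. roughly 77%.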
Nitridation of Chromium and Ferrochromium in a Cocurrent Nitrogen Flow

Above, we considered the filtration combustion of chromium and ferrochromium in N2 under natural filtration conditions. In this case, the oxidizer (N2) and the fuel (Cr) were spatially separated before the reaction. N2 was drawn into the combustion zone by the constantly maintained difference between the pressure in the sample and in the surrounding medium. The pressure drop in the combustion zone was due to the continuous absorption of nitrogen by chromium, which resulted in the filtration of gas into the bulk of the burning sample. Thus, combustion under natural filtration in this study was a self-regulating process with strong feedback. Nitrogen entered the reaction zone through the open side surface of the sample.

In the case of nitridation in the cocurrent filtration mode, nitrogen was forcibly fed into the combustion zone. According to the preliminary experiments, stable combustion of chromium in a cocurrent flow of nitrogen at specific flow rates exceeding 8 cm3/s can only be maintained when powder with particle sizes of 63-80 µm is used. For finer powders, combustion is hampered due to reduced permeability. Figure 12 shows the dependence of the Cr combustion rate on pressure. Under natural filtration conditions, 63-80 µm Cr powder produced by the calcium hydride process burns at 5 MPa only. To implement combustion at 0.2 MPa, a finer Cr powder was used; under such conditions, the combustion rate equals 0.04-0.05 cm/s. That is why it is not possible to compare the results of Cr combustion in nitrogen in different filtration modes under the same conditions (nitrogen pressure, powder fineness, and porosity). The results of chromium powder combustion under different conditions of natural filtration and pressure filtration are shown in Table 2. The minimum temperature and combustion rate are observed during nitridation under natural filtration; here, the degree of nitridation is at its highest. The products of Cr combustion in the cocurrent flow of nitrogen and of the nitrogen-argon mixture (50:50 ratio) were single-phase: in both cases, only the Cr2N phase was detected by X-ray diffraction. However, in the samples that burned in the natural filtration mode, different amounts of the second phase (CrN) were detected. As the degree of nitridation grows, the amount of the higher nitride also grows.
Figure 13 shows the curves of the dependence of the combustion rate and the degree of Cr nitridation on the specific nitrogen flow rate. There is a rapid increase in the combustion rate as the flow rate of N2 is increased, while the degree of nitridation decreases. Therefore, switching the combustion mode to cocurrent filtration of the reacting gas may increase the combustion rate by an order of magnitude. For comparison, Figure 13 also shows the result of chromium nitridation in the nitrogen-argon mixture: while the degrees of nitridation in both cases are virtually identical, the rate of combustion in diluted nitrogen is lower by almost a factor of two.

Table 2 shows the combustion temperatures for the different combustion modes, taken from the thermocouple inserted into the lower part of the sample at a depth of ~1.0 cm. The diameter of the powder fill was 18.8 mm and the height was ≈150 mm. For the natural filtration mode, the measured combustion temperature (1250 °C) is insignificantly lower than the estimated adiabatic temperature (~1320 °C). The phase composition corresponds to the equilibrium phase composition for the degree of nitridation measured after combustion (12.21% N). The situation with the cocurrent nitrogen filtration modes is different: the combustion temperatures recorded by the thermocouple turned out to be significantly higher than the estimated temperatures, 1540/1030 °C and 1440/690 °C, respectively. Furthermore, this difference grows larger as the flow rate of nitrogen increases.

Figure 14 shows the microstructure of the PH 1S chromium sample burned in the cocurrent flow of the nitrogen-argon mixture, compared with the structure of chromium powder nitrided under natural filtration conditions at 0.2 MPa. The first sample retained high porosity (~80%) and the particles retained their initial facet pattern. The microstructure of the particles treated in the forced filtration combustion mode is single-phase, while in the natural filtration mode it is two-phase. The lighter-colored areas in the central parts of the particles represent the Cr2N phase, while the darker-colored areas represent the CrN phase.
Ferrochromium Nitridation in the Cocurrent Flow of Nitrogen
Figure 15 shows the curves that represent the dependency of the combustion rate and the degree of ferrochromium nitridation on the flow rate of nitrogen. Ferrochromium in the cocurrent flow of nitrogen starts burning at greater gas flow rates. The combustion rate of Fe-Cr is significantly lower than that of Cr, both for forced filtration and natural filtration. However, the rate of combustion of Fe-Cr increases as the flow rate of nitrogen is increased, and the same applies to Cr. It is noteworthy that, in the investigated range of process parameters, the degree of Fe-Cr nitridation under forced filtration (4.7-7.5% N) is much lower than that under natural filtration (8.8-14.2% N). The reason for this lies in the absence of the after-reaction stage in the case of the forced flow of nitrogen. In the course of tempering of the nitridation products by an overflowing stream of gas, the amount of uptaken nitrogen is equal to that which has been absorbed directly in the combustion zone.
The porosity of the chromium samples is ~80%. According to the estimations, the amount of nitrogen located in the pores of such samples was not enough to turn Cr into CrN. At 10.0 MPa, only ~12% of Cr may turn into CrN by reacting with nitrogen from the pores. Such a degree of nitridation will only provide for heating by no more than 300 °C, and this is not enough for the process to take place in the self-sustained combustion mode. Therefore, for the synthesis of Cr nitrides to be self-sustained by the heat from the exothermic nitridation reaction, the majority of the nitrogen must come to the reaction zone from an external source. In this case, nitrogen can only be delivered by the filtration of gas through the porous structure of the sample. Thus, in the chromium nitridation experiment conditions that were investigated in this research, chromium combustion would take place via the so-called filtration combustion mechanism. This type of SHS process is peculiar, due to the fact that the characteristics of the system that define the filtration delivery of the gaseous reagent into the combustion zone will influence the combustion trends and make combustion either possible or impossible.
The Cr-N system is peculiar, due to the fact that the higher chromium nitride δ-CrN has low thermal stability: At 0.1 MPa, it dissociates at ~1050 °C according to
4CrN = 2Cr 2 N + N 2
The dissociation temperature increases as the pressure is increased [6]:
T δ = 10,620 × (13.03 − lg P)^−1
At 10 MPa, the dissociation temperature will be ~1490 °C, which is lower than the estimated combustion temperature by ~570 °C. That is why, when chromium burns in nitrogen, yielding CrN as a synthesis product, the temperature does not exceed the temperature of its stable state. As the pressure increases, the estimated combustion temperature also increases, from 1140 °C (0.1 MPa) to 1520 °C (10.0 MPa). That is why the combustion temperature that was recorded from the thermocouple was close to the estimated temperature.
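As a quick numerical check, the dissociation-temperature expression can be evaluated directly. The short sketch below is illustrative rather than code from the study; it assumes that P in the formula is expressed in pascals and T δ in kelvin, the unit choice that reproduces the ~1050 °C (0.1 MPa) and ~1490 °C (10 MPa) values quoted above.

```python
import math

def crn_dissociation_temp_c(p_mpa: float) -> float:
    """CrN dissociation temperature from T_delta = 10,620 / (13.03 - lg P).

    Assumes P in pascals and T_delta in kelvin (an interpretation, not stated
    explicitly in the text); the result is returned in degrees Celsius.
    """
    p_pa = p_mpa * 1.0e6
    t_kelvin = 10620.0 / (13.03 - math.log10(p_pa))
    return t_kelvin - 273.15

for p_mpa in (0.1, 1.0, 10.0):
    print(f"{p_mpa:5.1f} MPa -> ~{crn_dissociation_temp_c(p_mpa):.0f} °C")
# ~1049 °C at 0.1 MPa and ~1488 °C at 10 MPa, matching the values in the text.
```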
In order that we could directly observe surface combustion, a series of chromium nitridation experiments were conducted, in which an ad hoc gas-permeable rectangular-block container (110 × 60 × 30 mm 3 ) was used. Two faces of the block were made of quartz plates and the other two faces were made of metal mesh. At the initial stage, the shape of the combustion front is close to the shape corresponding to layer-by-layer combustion. The results of the visualization of the surface combustion of chromium in nitrogen using such a setup are shown in [10]. Then, protrusions in the areas with more favorable filtration conditions (i.e., the surface of the sample) emerge. The dimensions of such protrusions gradually grow because the propagation of the combustion front is slowed down at the center. At the final stage, the shaped-up protrusions of the combustion front start moving towards each other, to eventually merge in the center of the sample.
The maximum temperature that was developed in the course of the layer-by-layer combustion of Cr in N 2 was lower than the melting temperature of Cr (1860 °C) and that of the eutectic alloy in the Cr-N system (1640 °C). The combustion products and the initial metal did not melt, and the sample retained good permeability. Due to this, conditions for post-nitridation were maintained behind the combustion wave.
The next curve (Figure 16) shows the estimated dependency of the adiabatic temperature of chromium combustion in the case of its incomplete transition. The estimation was performed for the formation of a two-phase Cr 2 N-Cr product. When a single-phase Cr 2 N (11.86% N) product was formed, the combustion temperature was 1287 °C. The combustion temperature that was close to that value was achieved by the nitridation of chromium powder in natural filtration conditions (Table 2).
Conclusions
In conclusion, both chromium and ferrochromium can be successfully nitrided both by natural and forced filtration. The conditions of nitridation can be adjusted to synthesize both single-phase products (Cr 2 N or CrN) and a mixture of phases with different component ratios. Here, to synthesize the CrN phase, we must use the natural filtration mode and layer-by-layer combustion. However, we have managed to obtain the single-phase Cr 2 N half-nitride in the forced filtration mode only. Therefore, CrN was formed as a result of two-stage nitridation (layer-by-layer combustion followed by volumetric after-burning), while Cr 2 N was formed by single-stage nitridation. SHS nitridation of chromium and ferrochromium took place in the filtration mode. The main parameters governing the consistent patterns and mechanism of chromium and ferrochromium combustion were: (1) the elemental composition of the initial materials, (2) the particle size of the nitrided powder, (3) the porosity of the powder filler, (4) the pressure of nitrogen, and (5) the conditions of its transport into the combustion zone. It has been experimentally established in this research that layer-by-layer combustion of chromium in nitrogen takes place via the solid phase mechanism. The maximum temperature of chromium and ferrochromium combustion was limited by the dissociation temperature of CrN. As the pressure increased, the dissociation temperature increased, and the combustion temperature also increased.
It has been shown in this study that the use of forced filtration extends the range of possible SHS process realizations for the Cr-N 2 system to lower pressure conditions and larger metal particle sizes. It was discovered that forced filtration promotes the initiation of the combustion mode in superadiabatic heating conditions. The use of forced filtration modes allows for synthesizing Cr 2 N of different composition in the self-sustained combustion mode. Switching the chromium nitridation process into the forced filtration mode using the N 2 -Ar mixture promotes the formation of an inverse combustion wave. A single-stage mechanism of the formation of products was discovered.
To our knowledge, it has already been shown in theoretical research in the body of literature that, in the case of self-sustained combustion realization in the cocurrent filtration mode, temperatures exceeding the adiabatic combustion temperature are quickly achieved. In our research, a significant exceedance of the actual combustion temperature over the estimation has been noted for chromium powder nitridation in the forced gas filtration mode.
Figure 2. Schematic of the experimental laboratory flow-type SHS reactor.
Figure 3. The appearance of the samples before and after combustion: (a) powder mixture before synthesis and (b) product after synthesis.
Figure 4. (a) The impact of nitrogen pressure on the rate of chromium combustion and (b) nitrogen content in the product. 1-calcium-hydride process chromium; and 2-aluminothermic process chromium.
Figure 5. The impact of nitrogen pressure on the maximum combustion temperature; 1-estimated and 2-measured.
Figure 7. The impact of nitrogen pressure on the extent of chromium nitridation. 1-slowly cooling down samples and 2-quenched samples.
Figure 8. (a) Microstructure of particles at the initial stage of nitridation. (b) The formation of the Cr 2 N layer. Nitrogen pressure-4 MPa.
Figure 13. (a) Impact of the specific flow rate of nitrogen on the combustion rate, and (b) degree of nitridation of chromium. 1-combustion in nitrogen, 2-combustion in the mixture of nitrogen and 3-argon.
Figure 14. Microstructure of chromium that has been burned in the forced (a) and natural (b) filtration modes.
Figure 16. Impact of nitrogen content on the combustion temperature. 1-estimation; 2,3-experimental data; 2-combustion in the mixture of nitrogen and argon, and 3-combustion in nitrogen.
Table 2. Parameters of the combustion of chromium under natural and forced filtration.
2019-01-30T16:08:59.709Z
2019-01-17T00:00:00.000
{ "year": 2019, "sha1": "fef9d63c38b900d13b03dc58df1726e571565bb1", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2075-4701/9/1/98/pdf?version=1548126489", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "fef9d63c38b900d13b03dc58df1726e571565bb1", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [ "Materials Science" ] }
225832809
pes2o/s2orc
v3-fos-license
Impact of fire and harvest on forest ecosystem services in a species-rich area in the southern Appalachians
To mitigate and adapt to climate change, forest carbon sequestration and diversity of the ecosystem must be included in forest management planning, while satisfying the demand for wood products. The future provisions of ecosystem services under six realistic management scenarios were assessed to achieve that goal. These services were carbon sequestration, types and quantities of roundwood harvested, and different indicators of forest health: biomass of major species, species diversity, and variation of tree age. A spatially explicit forest succession model was combined with statistical analyses to conduct the assessment at the level of both the whole forest landscape and different ecological zones (ecozones) within. An important aspect of this study was to explore the effects of the biophysical heterogeneity of different ecological zones on the outcomes of different management scenarios. The study area was located in an area of the southern Appalachian Mountains in North Carolina with high tree diversity and active forest management activities. Along with a range of management practices, such richness in diversity allowed us to examine the complexity of the interaction between management activities and species competition. The results showed that fire suppression had a greater effect on increasing biomass carbon sequestration than any management scenario that involves harvest and replanting afterward, but at the expense of other indicators of forest health. The effect of fire on species composition was the largest in the xeric parts of the study area. Based on the study results, it was proposed that a low harvest intensity with a mix of fire and fire suppression across the landscape would best balance the need for roundwood products, biomass carbon sequestration, and desirable species composition. This study also demonstrated that the combination of a spatially explicit forest succession model and statistical analyses could be used to provide a robust and quantifiable projection of ecosystem service provisions and possible trade-offs under different management scenarios.
INTRODUCTION
The increasing demand for forest roundwood products may be in conflict with the need to mitigate climate change and to maintain ecosystem diversity, forest carbon sequestration, and species diversity. Thus, there is a pressing need to understand the effects of different types of management on forest dynamics. In particular, the interactions between the ecological processes and management prescriptions are critical because they can result in spatial and temporal differences in the trade-offs between ecosystem services (Seppelt et al. 2011). Some management schemes favor one ecosystem service over others (Schwenk et al. 2012). For example, intensive management such as clear-cutting yields the highest volume of roundwood harvested, but regrowth results in large patches of even-aged trees, which could reduce habitat diversity (Keenan and Kimmins 1993). Management schemes that focus on carbon sequestration may achieve their goal at the expense of other ecosystem services, such as timber production and habitat quality (Seidl et al. 2007, Schwenk et al. 2012). Studies show that some level of timber harvest can enhance biodiversity and increase carbon sequestration (Davis et al. 2009, Fedrowitz et al. 2014). Diverse landscapes with various forest types, topography, and soils provide multiple ecosystem services, and the amount varies greatly depending on location. Having a robust understanding of such interactions can help predict if a particular policy may succeed or fail to achieve the desired balance (Carpenter et al. 2009). Systems of coupled human-nature interactions are complex and often involve non-linear processes and spatial variation (Liu et al. 2007).
Many previous studies of the trade-offs between roundwood production and forest carbon sequestration have been highly simplified. For example, the spatial dimension, species competition, forest succession, the influences of natural disturbances, and the types of roundwood harvested are often not considered (Seidl et al. 2007, Touza et al. 2008, Schwenk et al. 2012, Solarik et al. 2012, Niinimäki et al. 2013, Mäkipää et al. 2014, Lutz et al. 2016). However, some recent studies (Scheller et al. 2019, Remy et al. 2019, Flanagan et al. 2019) have recognized the importance of these factors. With new modeling capabilities, it is now possible to analyze the outcomes of forest management in the context of these aspects of forest ecology. In addition to intensity, the location and the timing of roundwood harvest affect both the income of forest owners and carbon accounting.
Following harvest, the time for the carbon to be released to the atmosphere varies depending on the type of harvested roundwood, whether it is sawlog or pulpwood. In general, it can take up to a century for the carbon stored in a sawlog to be completely released to the atmosphere, but only about three years for pulpwood (Smith et al. 2006, Dymond et al. 2012). However, their relative effects on ecosystem carbon sequestration over time remain a major uncertainty (Davis et al. 2009, Nunery and Keeton 2010, McKinley et al. 2011. Forests that consist of a variety of cover types and stand structures at different successional stages provide a range of habitat niches that support more diverse non-tree species (Farnsworth et al. 2015, Trumbore et al. 2015. Disturbances can create variations in tree species composition and stand structure, as they have important impacts on forest carbon dynamics and vegetation structure (Kurz et al. 2008, Running 2008, Trumbore et al. 2015. Some natural and anthropogenic disturbances have been found to lower carbon stocks, increase emissions, or decrease sequestration rates. They have also been found to affect indicators of forest health, including the forest average ages, the age variability, and species composition. In turn, tree species composition and stand structure affect the richness of non-tree species, such as birds (Goetz et al. 2007), bryophytes, lichens (Neumann and Starlinger 2001), and the herbaceous-layer (Gilliam et al. 1995, Gilliam 2002. The three forest health indicators show how well a forest habitat can support biodiversity (Gao et al. 2014, Trumbore et al. 2015. Therefore, knowing the locations and the impact of disturbances is important because it can help allocate different treatment plans in different locations to improve forest health (Thompson et al. 2011, Martin et al. 2015, Creutzburg et al. 2017, Krofcheck et al. 2017, 2018. It is particularly interesting to conduct the analysis of ecosystem service trade-offs in a species-rich area, because tree species diversity promotes productivity in temperate forests mostly through strong complementary effect between species (Morin et al. 2011). To conduct such analysis in a species-rich forest, a forest would be subdivided into smaller ecological zones as the analysis unit. An ecological zone (ecozone) is where climate and soil are assumed to be homogenous (Simon et al. 2005, Scheller et al. 2007. It is valuable for managers to have research results in finer ecozone scale in addition to the landscape scale (USDA Forest Service at North Carolina 2011). To illustrate this concept, an area with active management and high diversity was chosen. It was located in the southern Appalachian Mountains of the United States: It included 12 different vegetation communities as defined by Simon et al. (2005), 20 common tree species, and a range of management practices within an area small enough to model with operational-level details. The objectives of this study were to further our understanding of the impacts of fire, as well as various harvest and replanting prescriptions, on the following: (1) carbon sequestration in the standing forest biomass; (2) the amount of roundwood (sawlog and pulpwood) harvested; and (3) forest species diversity, composition, and age structure, which are indicators of forest health. 
In this study, it was hypothesized that no fire and intensive harvest would maximize the amount of roundwood produced at the expense of biomass carbon sequestration and the quality of the forest habitat to support biodiversity. Furthermore, when the harvest and replanting intensity decreased, the volume of total harvested roundwood would decrease, but the forest was expected to have a higher proportion of ecologically desirable species. In addition, the inclusion of fire was expected to decrease the average forest age while opening up the canopy to allow more early-successional species to grow. Study area The study area is located in the Grandfather Ranger District (GRD) within the Pisgah National Forest. It is about 777 km 2 in size and is located in the eastern edge of the Southern Blue Ridge Ecological Province, in North Carolina in the southeastern USA (Fig. 1). The elevation ranges from 314 to 1810 m above sea level, changing sharply within short distances (USDA Forest Service at North Carolina 2011). The steep topography, in combination with the soil type and climate, results in a diverse vegetation composition in the area (Pittillo et al. 1998). The GRD has 12 ecozones ( Fig. 1) that belong into three ecological groups: "Northern hardwood forest," "Mesic forest," and "Xeric forest" (Simon et al. 2005). The GRD is rich in plant and animal diversity and is facing pressure for increased timber harvest (Fox et al. 2007). According to the managing agencies of the GRD, due to historical management methods, five ecozones have "departed from desirable conditions" (USDA Forest Service at North Carolina 2011). Those ecozones include "Pine-Oak Heath," "White Pine-Oak Heath," "Shortleaf Pine-Oak," "Rich Cove," and "Acidic Cove." All of them lack quality early-successional habitat, diverse tree age structure, open canopy system, and high-quality soil. Fire had been the most frequent natural disturbance in the southern Appalachian region before the U.S. Forest Service started actively suppressing fires in the early 1900s, but fire suppression is being re-examined more recently by land managers in the area because of the unintended consequences of reducing the diversity and natural variability across the landscape (Waldrop et al. 2016, Lafon et al. 2017. The exact impact of fire on vegetation, with climate change, in the southern Appalachian region is a current active field of research: for example, Lafon et al. (2007), Flatley et al. (2011), Flatley et al. (2013, and Waldrop et al. (2016). An estimation of one of the possible natural fire scenarios was used in the model simulation of this study. Field data.-Field plot data were used for model calibration and validation. The field data were obtained from the USDA Forest Service (USFS) Forest Inventory Analysis (FIA) data version 6.0.1 (USDA Forest Service 2016), available from 1974 to 2015. The FIA consists of assessments of field plots of 0.4 ha that include information on tree age, species, site condition, forest type, and estimated biomass for individual trees. The sampling intensity of the FIA plots is, on average, one plot for every 2428 ha, which was designed to provide an unbiased representation of forest types. Plots are censused every five to seven years (Riemann et al. 2010). The biomass data for FIA plots are estimated using allometric equations provided by Jenkins et al. (2003). This study used the plot data of live trees located in areas classified as forestland, from 2002 to 2015. 
In order to preserve confidentiality for private forest data, FIA varies the locations of the plot centers randomly to within a radius of 1.6 km from the exact location. Some plots located in private forests are swapped with plots of similar conditions, but within the same county. Since the plots are swapped with nearby plots of similar forest conditions, the data remain valid for the forest type measured. Therefore, not having the precise location information could only marginally affect the use of the FIA data in this study.
Species.-Twenty tree species were chosen for this analysis (Table 1). They compose the largest portion of biomass, out of the 88 species reported by the FIA data in the southern Appalachian region in North Carolina. The selected species occur at different successional stages, with some of them being resistant to fire and some of them being shade-loving. To better understand the impact of the management on forest health and the income from the forest, as well as to better model the species that would be planted and harvested in different management scenarios, two rankings were assigned to all the species: the ecological rank, based on the ecological role of each species; and the commercial rank, based on the type of wood sold and products made. The ecological rank of the species was based on the potential habitat and food that a species can provide to non-tree species; the commercial rank of the species was based on the potential commercial use of each species harvested and the average real prices of different types of roundwood in the recent five years (Appendix S1: Section 5). Both ranks were provided by local experts working in the National Forests in North Carolina, including forest silviculturists, land managers, and forest ecologists (J. A. Rodrigue, personal communication, Oct 2015). Based on the ranks, species were grouped into 4 classes based on ecological desirability (Table 1): from Class A, the most desirable, to Class D, the least. Class A consisted of pitch pine, white oak, and eastern hemlock, which are late-successional, climax species. For example, eastern hemlock is a keystone species in riparian forests and acidic coves (Brantley et al. 2013). Class B consisted of some mast producers and climax species. Class C consisted mostly of early-successional species. The three species in Class D, yellow poplar, eastern white pine, and red maple, are aggressive species with the capability to displace other species. Species were also classified into 5 classes based on commercial value: from Class I, the most economically valuable, to Class V, the least. Yellow poplar and eastern white pine belonged to Commercial Class I, but were also two of the three species in Ecological Class D.
Management targets and scenarios
Management prescriptions and goals can differ between forest owners. Harvest rates and replantation currently vary owing to the different management objectives of different spatial units (National Forests in North Carolina and USFS 2016). The forest in the GRD consists of private, public, and Congressional Designated Roadless areas (Appendix S1: Fig. S3). Management practices in private forests were assumed to aim at maximizing the economic return from selling harvested roundwood, but the harvest prescriptions varied depending on the protection status of the land unit as classified by the North Carolina GAP study (McKerrow et al.
2006); public forests were subdivided into smaller patches called the "Management Areas" (MAs) that took into account the specific ecological and logistical Notes: "Comm. Rank" and "Eco. Rank" stand for Commercial Rank and Ecological Rank, respectively. The rankings were provided by local experts based on resulting harvested timber products and the contribution to its community of each species. Smaller rank index indicates greater commercial or ecological importance. Based on the ranks, the species were grouped into four ecological classes (Eco. Class) and 6 commercial classes (Comm. Class). The three species with the lowest ecological rank are considered as undesirable aggressive species that are capable to displace other species under current forest disturbance patterns. concerns, and were managed accordingly. The MAs of the public forests were designated by the National Forests in North Carolina and USFS (2016). No harvest takes place in the Congressional Designated Roadless areas, where the U.S. federal law prohibits any forest management activity including logging or road construction. Six management scenarios were simulated. The scenarios consist of two harvest and planting prescriptions and no prescription, each with and without fire: (1) no harvest and planting ("reference"); (2) clear-cutting and planting economically valuable species ("Aggressive"); and (3) species selection harvest and planting ecologically valuable species ("Moderate"). The two prescriptions, the "Aggressive" and the "Moderate," each consisted of different degrees of harvest and planting that varied depending on the landownership (Table 2). Specific management prescriptions were applied to the forest stands depending on the MAs. The harvest intensities of both prescriptions were similar to those the land managers were planning to implement (J. A. Rodrigue, personal communication, Jan 2016). The spatial information of the stands in the GRD was obtained from the Continuous Inventory of Stand Conditions (CISC) data based on the National Forest Stands for the Southern Appalachian Assessment (SAA) Study Area (SAMAB, 1996). Although the Aggressive prescription consisted of various levels of clear-cutting, its harvest rates in different MAs, calculated based on the estimate from the historical forest disturbance maps in between 1990 and 2005 of the GRD, were still lower than those of the historical clear-cutting in the Appalachian regions (Yarnell 1998). The historical disturbance maps in the GRD were obtained from the forest disturbance history maps derived from the Vegetation Change Tracker (VCT) algorithm (Huang et al. 2010). The major disturbances in the area have been fire, harvest, and insect outbreak (USDA Forest Service at North Carolina 2011), but the disturbances detected at 30 m resolution were mostly fire and harvest, which were the disturbance types simulated in this study. The "Moderate" prescription simulated the practice of the U.S. Forest Services in the National Forest, which can be viewed as restoration management regimes. The MAs for private forests were designated based on the protection status of the land unit. The harvest intensity of the private forest located in protected areas was the same as that of the public forest with the same protection status. Prescription details in each MA and protection status are provided in Appendix S1: Section 6. Harvest events occurred every five years in the locations within each MA where the criteria of harvest detailed in the prescription were satisfied. 
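For concreteness, the six scenarios can be written down as a small configuration table. The sketch below is hypothetical (the names and structure are ours, not from the study's code); it only encodes the three prescriptions crossed with the two fire settings and the five-year harvest interval described above.

```python
from dataclasses import dataclass
from itertools import product

@dataclass(frozen=True)
class Scenario:
    prescription: str   # "Reference" (no harvest/planting), "Aggressive", or "Moderate"
    fire: bool          # True = natural fire regime, False = complete fire suppression

# Six scenarios: three prescriptions, each simulated with and without fire.
SCENARIOS = [Scenario(p, f)
             for p, f in product(("Reference", "Aggressive", "Moderate"), (True, False))]

HARVEST_INTERVAL_YR = 5  # harvest events occur every five years where the prescription criteria are met

for s in SCENARIOS:
    print(s)
```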
The current complete harvest rotation of the southern Appalachian area of 80 yr was adopted in this study (Fox et al. 2007). A no-fire scenario was simulated that resembled the condition under complete fire suppression management currently undertaken by the USFS (Flatley et al. 2013). For comparison, a natural fire scenario was also simulated. It was based on an estimation of historical fire (Xi et al. Note: "Roadless" and "Ecology" are public forest areas with special designated protected status where minimal or no harvest activities can take place, compared to other publicly owned forest. 2009), where about 0.4% to 2.6% of each ecozone was burned in each fire event (see Appendix S1: Section 7 for details). Ecological model Forest change over 100 yr was simulated using the forest succession model, Landis-II (Scheller et al. 2007) at 150 m resolution (2.25 ha grid). Landis-II is a mechanistic, stochastic, forest succession model that can be used to simulate forest response to different disturbances and provide geospatial results that are based on the interactions of individual trees and their neighbors including species productivity, mortality, reproduction, natural disturbances, and management activities Mladenoff 2004, Scheller et al. 2007). The stochastic components of the model that were used in this study came from fire ignition and seed dispersal (Scheller and Mladenoff 2004). Since it is a spatially explicit model, Landis-II can simulate the spatial aspect of inter-and intraspecies competition, including processes such as seed dispersal in the landscape and succession processes. The Forest Carbon Succession extension version 2.0 (Dymond et al. 2016) was used to account for the carbon and biomass dynamics in the forest during the succession process. It also provides information on the carbon in the trunk and branches of the tree that goes to the harvested product pool (Dymond et al. 2016). In addition to the core model, two extensions of the model were used to simulate the impact of the harvest and fire disturbances: the Base Harvest and the Base Fire. The Base Harvest version 3.0 allows the simulation of different harvest methods, areas harvested, and rotation length (Gustafson et al. 2000). Wildfires were generated based on the physical characteristics of each ecozone in the Base Fire extension version 3.0.3 (He and Mladenoff 1999). Parameterization of the fire information, such as the probability of ignition, and the fire cycle of each ecozone were based on the estimation from the studies in the southern Appalachians before human settlement by Lafon et al. (2017), as well as Wade et al. (2000), Xi et al. (2009), Fesenmyer andChristensen (2010), and Flatley et al. (2013). Fire cycle in each ecozone for each management scenario is provided in Appendix S1: Section 7. Model parameterization, calibration, and uncertainties.-Landis-II models forests as a grid of interacting cells (Scheller et al. 2007), with each cell assigned to an ecozone (Fig. 1). The initial conditions of the species and age cohort composition in each cell were calculated based on the FIA data in between 2002 and 2010. Then, the model was simulated without any disturbance for 26 yr. Output from the model was in units of mass of carbon and was converted to biomass by assuming carbon is half of total biomass (Smith et al. 2006). The value of the aboveground live trees biomass in each cell changes with time in the model simulation based on species, the impact of disturbances, growth, and age-dependent mortality. 
Details of the initial values of climate, soil, and species are provided in Appendix S1: Section 1. The values of the live tree aboveground biomass density chosen from the FIA database and used for model calibration and validation were those located in forestland where there had not been any disturbance between 1990 and 2015: Those between 1990 and 2010 were used for calibration and those between 2011 and 2015 for validation of the model results. Calibration was done at the plot and species levels, while validation was done at the plot level. Details on the FIA data used for calibration and validation are provided in Appendix S1: Section 1.13. To capture the stochastic components of the Landis-II model, each management scenario was simulated five times to estimate the betweenrun variability (Thompson et al. 2011). The standard deviation of the mean of the results of the between-run variability for each management scenario was used to determine the significance of the difference between management scenarios. The contributions of the uncertainties in input parameters were analyzed using sensitivity analysis. The inherent uncertainties in the model projection results due to the uncertainties of the input parameters, the calibration, the accuracy of the model projection, etc., were assumed to be the same for each scenario. In all the six management scenarios that were simulated in this study, the two scenarios where no harvest and planting occurred served as reference scenarios and were used to compare the impact of fire and harvest and planting prescriptions. Analysis of the simulation results Carbon sequestration was simulated for the aboveground standing biomass, defined by the net change of carbon in the forest biomass between simulation years 1 and 100. The net change was divided by 100 to obtain the average annual carbon sequestration. The volume of roundwood harvested was estimated by converting the annual carbon flux to the harvested product pool into biomass, then into wood volume. An average specific gravity of both hardwood and pulpwood in the southeastern United States, 0.49 t/m 3 (Smith et al. 2006), was used to convert from biomass to volume units, in which commercial roundwood production is measured. Harvested roundwood was divided into sawlog and pulpwood harvested. For species that belong to Commercial Classes I to III, the harvested trunk was considered as sawlog while the branch portion was considered as pulpwood, except for harvested Virginia Pine, and all species that belong to Commercial Classes IV and V were considered to be all harvested pulpwood. Harvest density is a useful indicator for the productivity of the land and was calculated by dividing the harvested volume by the area of harvest. The results could be used to estimate the effect of the simulated forest on the diversity of the landscape if all organisms were present. Four indicators were established to quantify the quality of the forest habitat to sustain biodiversity: (1) the relative contribution of biomass in each ecological class to the total biomass; (2) the Shannon diversity index based on the biomass of each species; (3) the average age; and (4) the standard deviation of age (Gao et al. 2014, Farnsworth et al. 2015, Trumbore et al. 2015. Older forests and those having a wider range of tree ages can provide more distinct niches, and hence greater diversity (Spies 2004, Gao et al. 2014). 
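A minimal sketch of the unit conversions used earlier in this subsection is given below. The constants come from the text (carbon assumed to be half of biomass; average specific gravity 0.49 t/m 3 for southeastern US roundwood); the function and variable names are illustrative, not the study's code.

```python
# Illustrative conversions from carbon to roundwood volume and harvest density.
CARBON_FRACTION = 0.5      # carbon assumed to be half of biomass (Smith et al. 2006)
SPECIFIC_GRAVITY = 0.49    # t of wood per m^3, southeastern US average

def annual_sequestration(biomass_c_year1_t: float, biomass_c_year100_t: float) -> float:
    """Average annual carbon sequestration (t C/yr) over the 100-yr simulation."""
    return (biomass_c_year100_t - biomass_c_year1_t) / 100.0

def harvested_volume_m3(harvested_c_t: float) -> float:
    """Convert carbon transferred to the harvested-product pool into roundwood volume."""
    biomass_t = harvested_c_t / CARBON_FRACTION
    return biomass_t / SPECIFIC_GRAVITY

def harvest_density(harvested_c_t: float, harvested_area_ha: float) -> float:
    """Harvested volume per hectare actually cut (m^3/ha)."""
    return harvested_volume_m3(harvested_c_t) / harvested_area_ha

print(harvested_volume_m3(1.0))   # 1 t of harvested carbon -> ~4.08 m^3 of roundwood
```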
To measure the potential for niche diversity, the distribution of the species biomass and the average and the standard deviation of ages of all species for the entire GRD and for the individual ecozones at the end of the simulation period were also assessed. Biomass of each tree species in each management scenario was plotted to show the distribution of the species biomass. The slope of the biomass distribution shows the species equitability: The steeper the slope, the more unevenly distributed the species. The Shannon diversity index (Shannon 1948) was used to assess the tree species abundance and richness after 100 yr. Landis-II does not track individual trees, but models the species present, their biomass, and also the number of cohorts of different ages (Scheller et al. 2007), so the proportion of the biomass of each species was used in the calculation of a modified Shannon diversity index (H') as follows:
H' = −Σ p i ln p i
where p i is the proportion of biomass of species i and the sum runs over all species. The use of relative biomass in the Shannon diversity index rather than the relative species abundance results in an index of diversity that is more sensitive to the relative biomass (Dickman 1968) and energy distribution among species (Wilhm 1968), which is more appropriate for this study. For example, it corrects the discrepancy that would arise when a species has a high number of individuals but only a small relative biomass; such a species would otherwise dominate the diversity index values. However, the Shannon diversity index does not account for the ecological values of species (Hurlbert 1971) nor the species age structure. But both the ecological values of the dominant species and the distribution of the species age structure are also important indicators of the likely ability of a forest habitat to sustain biodiversity (Noss 1999, Tews et al. 2004, Leinster and Cobbold 2012), which were also analyzed in this study. To compare the Shannon diversity index between each pair of management scenarios, bootstrapping was used to generate 1000 samples of the Shannon diversity index for each scenario. The biomass of each species in each management scenario for all of the five simulations was used to calculate the Shannon diversity index samples used in the bootstrapping. Pairwise t-tests were used to examine whether the differences of the indices between each pair of management scenarios were significant (Hutcheson 1970), where the P-values were adjusted by the Benjamini-Hochberg method (Benjamini and Hochberg 1995) with a false discovery rate of 0.01. For the differences that were significant, the effective numbers of species, calculated by taking the exponent of the Shannon diversity index, were computed so that their differences could be compared on a linear scale (MacArthur 1965, Jost 2006, Leinster and Cobbold 2012).
Effect of hemlock woolly adelgid infestation.-The eastern hemlock is threatened by an invasive insect, the hemlock woolly adelgid (HWA) (Elliott and Vose 2011). The FIA biomass data for the eastern hemlock may not reflect the overall damage by HWA, because of the frequency of revisiting the same plots (5-7 yr) and the possible selection bias because the data report the biomass of only healthy trees and not those killed by HWA. Species composition of the area changed drastically in the complete absence of the eastern hemlock as a result of HWA infestation in the GRD, as shown in a simulation study (Birt et al. 2014).
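Before turning to the hemlock sensitivity analysis, the diversity calculations described above can be sketched in a few lines. This is an illustrative implementation of the biomass-weighted Shannon index and the effective number of species; the bootstrap resampling over the five model runs is one plausible reading of the procedure, not the study's exact code.

```python
import numpy as np

def shannon_from_biomass(biomass_by_species: np.ndarray) -> float:
    """Biomass-weighted Shannon index H' = -sum(p_i * ln p_i)."""
    p = biomass_by_species / biomass_by_species.sum()
    p = p[p > 0]                          # species with zero biomass contribute nothing
    return float(-(p * np.log(p)).sum())

def effective_number_of_species(h_prime: float) -> float:
    """exp(H'), so scenario differences can be compared on a linear scale."""
    return float(np.exp(h_prime))

rng = np.random.default_rng(0)

def bootstrap_h(biomass_matrix: np.ndarray, n_boot: int = 1000) -> np.ndarray:
    """Bootstrap samples of H' for one scenario.

    biomass_matrix: rows = model runs (5 in the study), columns = species biomass.
    Each bootstrap sample resamples the runs with replacement and recomputes H'
    from the mean species biomass of the resampled runs (an assumed scheme).
    """
    n_runs = biomass_matrix.shape[0]
    idx = rng.integers(0, n_runs, size=(n_boot, n_runs))
    return np.array([shannon_from_biomass(biomass_matrix[i].mean(axis=0)) for i in idx])
```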
In order to understand the impact of the reduction of eastern hemlock biomass due to HWA infestation, sensitivity analysis on the management scenario that had the highest proportion of resulting eastern hemlock biomass was conducted, as that scenario would be the most sensitive to any change on the eastern hemlock parameters. The eastern hemlock parameters were altered in three ways to test the resulting species and age composition: (1) no planting of eastern hemlock; (2) reducing the maximum biomass value of eastern hemlock by 70%, based on observed eastern hemlock biomass loss from the HWA infestation of 1987 in southern New England (Small et al. 2005); and (3) both no planting and maximum biomass reduction. Statistical analyses Besides directly comparing the Landis-II results, a regression analysis that controlled the impacts of the area of each ecozone and heterogeneity between ecozones was performed to assess the impact of each management scenario on the four indicators that were used to quantify the quality of the forest habitat. Those four indicators were as follows: (1) biomass of each ecological class; (2) the Shannon diversity index; (3) the average age; and (4) the standard deviation of age. Regression analysis was performed because it can provide quantification of the effects of various management scenarios. Moreover, since species biomass proportions are compositional data, where each of the proportion is nonnegative and summed up to one, using ordinary least squares (OLS) regression can give misleading results (Aitchison 1982). Therefore, a Dirichlet component regression approach was used in addition to the OLS regression (Aitchison 2003), because it takes into account the compositional nature of the data. In both of the regressions, the ecological classes of species, the Shannon diversity index, age, and its standard deviation were the dependent variables; fire, the harvest and planting prescriptions, and the areas of each ecozone were the independent variables. Natural logarithm transformation was taken for the dependent variables so that the data would satisfy the normal distribution assumption for the regression analysis. Moreover, in an effort to check the model for errors with OLS assumptions, tests were run for high multicollinearity in the regressors, looking for heteroskedasticity and spatial autocorrelation of the error term, using the variance inflation factors (VIF), White's heteroskedasticity-robust standard errors, and Moran's I. Calibration, validation, and sensitivity analyses For species-level calibration of Landis-II with the FIA data, the adjusted R-squared of the regression line was 0.65 (P < 0.01) (Fig. 2). Statistical analyses of both calibration and validation at the plot level showed that the model simulations were statistically similar to the FIA data: The simulated average aboveground biomass density was 162.5 t/ha, while that of the FIA was 171.7 t/ha. Since the Fisher F-test showed that the variances of the two distributions were not homogeneous (P = 0.003), a Welch two-sample ttest was used (Ruxton 2006). The results showed that the two groups were similar (P = 0.57). Details of various testing methods and comparison are available in Appendix S1: Section 1.13. Two variables, the maximum ANPP and the maximum biomass of each species, were varied by 10% to test the sensitivity of the model. Both variables have been shown to be the most influential in Landis-II (Thompson et al. 2011, Simons-Legaard et al. 2015. 
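Returning to the regression analyses described above, the ordinary-least-squares leg of the workflow (log-transformed dependent variable, VIF check, and White-robust standard errors) can be sketched with statsmodels. The data frame below is synthetic and the column names are hypothetical; the Dirichlet component regression and the Moran's I test are not shown.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

# Hypothetical table: one row per ecozone x scenario x run, with synthetic values.
rng = np.random.default_rng(1)
presc = rng.choice(["reference", "aggressive", "moderate"], 60)
df = pd.DataFrame({
    "shannon": rng.uniform(1.5, 2.6, 60),          # diversity index at year 100
    "fire": rng.integers(0, 2, 60),                # 1 = scenario includes fire
    "aggressive": (presc == "aggressive").astype(int),
    "moderate": (presc == "moderate").astype(int), # reference prescription is the baseline
    "area_ha": rng.uniform(1e3, 2e4, 60),          # ecozone area
})

X = sm.add_constant(df[["fire", "aggressive", "moderate", "area_ha"]])
y = np.log(df["shannon"])                          # log transform, as in the study

vif = pd.Series([variance_inflation_factor(X.values, i) for i in range(X.shape[1])],
                index=X.columns)                   # multicollinearity check
ols = sm.OLS(y, X).fit(cov_type="HC0")             # White-robust standard errors
print(vif.round(2))
print(ols.params.round(3))
```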
Overall, the aboveground biomass density showed <10% change with 10% change with the two tested parameters. Details of the sensitivity analysis of this study are provided in Appendix S1: Section 4. Heteroskedasticity and spatial autocorrelation were not found in the data, so the results of the regression analyses could be interpreted without any further transformation in the data. The results of the various tests conducted showed that there was no high multicollinearity in the regressors and no heteroskedasticity and spatial autocorrelation of the error term. The test results showed that the methods used in the statistical analyses were valid. Details of the tests performed and the results are provided in Appendix S1: Section 10. Effects of management on the long-term ecosystem services Biomass carbon sequestration and roundwood harvested.-The simulation of the entire GRD showed that fire control was the most important factor influencing carbon sequestration in the aboveground biomass (Table 3). When there was no fire, there was a net annual carbon gain in the standing biomass. Harvest and planting activities resulted in less carbon sequestered than those for the reference scenarios with the same fire suppression status. However, when harvest took place, some of the carbon in the aboveground biomass was transferred to the harvested sawlog, about 33% of which serves as a longterm carbon pool for storing carbon for up to 100 yr in the end-use products such as houses (Smith et al. 2006). The difference between the scenarios with the Aggressive prescription and the reference scenarios in annual live biomass Table 1. Orange line represents the 1:1 relationship and the red line represents the linear regression of the model and the FIA data. Slope of the regression line is 0.90, and the adjusted r-squared was 0.65, with P-value < 0.01. The outer pair of the red dash lines is the 95% prediction interval, while the inner pair is the 95% confidence interval. carbon sequestration was about 13% less, while that for the Moderate prescription was about 5% less. Although harvested sawlog provides more benefits in terms of serving as a carbon storage pool and generating profit, both harvested sawlog and pulpwood provide income. The percentage of harvest area was specified in the model; however, the actual volume of harvest depended on the number of species that satisfied the specified harvesting criteria, such as the requirements on species and age. For the scenarios with Aggressive prescription, there was no significant trend in the harvested density of pulpwood, and the decreasing trend in the harvested density of sawlog was not significant when there was fire. However, when there was no fire, the decreasing trend in the harvested density of sawlog was very significant with a rate of 0.02 m 3 Áha À1 Áyr À1 (P < 0.0001; adjusted r 2 = 0.91). For both of the scenarios with Moderate prescription, there was no significant trend in the harvested density for sawlog, but that for pulpwood increased significantly at a rate of 0.02-0.03 m 3 Áha À1 Áyr À1 (P < 0.001) (Fig. 3). Species composition and diversity. -The Shannon diversity index in the initial condition was 2.45. The Shannon diversity indices reflect the evenness of the tree species biomass distribution and the number of species. Fire resulted in more diverse forest but with less ecologically preferable species. 
Dominant species were killed by fire, which is equivalent to resetting the forest to an earlier successional stage, and hence resulted in a more even distribution of the biomass among species (Fig. 4). The values of the diversity index after 100 yr decreased for the No Disturbance and the Moderate-with-no-fire scenarios, but increased for all the scenarios with fire or with the Aggressive prescription (Table 3). (Table 3 notes: the reported values are the average ± standard deviation of the five model runs; negative numbers indicate a loss; "Fire" indicates the presence of fire; "Avg. age" is the average tree age in years; "SD age" is the standard deviation of age in years.) Fire significantly increased diversity by about 3.2% compared with scenarios with no fire, as shown by the regression analysis (Appendix S1: Table S10). The species distribution of the landscape in the initial condition was dominated by chestnut oak and other species of lower ecological ranks (Appendix S1: Fig. S1). After 100 yr, in the scenario with no fire and no planting or harvest, the landscape was dominated by American beech and eastern hemlock. However, in the scenarios with fire, especially in Xeric forest, no one species dominated, with the forest consisting mostly of early-successional species in Ecological Class C. This change was at the expense of the ecologically preferred Class A and the ecologically least preferred Class D. In other words, fire only affected the proportion of early-successional species (Ecological Class C), as shown in Appendix S1: Table S8. Thus, it can be concluded that the ratio of Ecological Classes A and D appeared independent of fire status and that Ecological Classes A and D were replaced by Ecological Class C in approximately equal proportion in case of fire. It was even more evident when compared to scenarios with no fire. The Aggressive scenario had significant but different impacts depending on whether fire was present. It significantly decreased the Shannon diversity index by 53% with fire and significantly increased it by 2.6% with no fire (see Appendix S1: Section 9 for regression analysis details). The biomass of the dominant species decreased with fire (Fig. 4), increasing the evenness of the species biomass distribution, as indicated by the higher value of the Shannon diversity index for the scenarios with fire. In those scenarios, any planting or harvesting decreased the evenness of the species biomass distribution, because they favored the newly planted species over the others. However, for the scenarios where fire was absent, later-successional species became dominant. Any management activities that did not favor the dominant species, such as planting, increased the biomass of other species and the evenness of the species biomass distribution (Fig. 4). When the input parameters related to eastern hemlock were altered, the species composition of the landscape changed drastically. The most significant change occurred when the allowed maximum biomass of eastern hemlock was reduced by 70% with no planting: eastern hemlock was no longer the dominant species in the landscape, being replaced by deciduous species. One such species was the ecologically more desirable white oak (QUAL), which was replaced by the less ecologically desirable, short-lived scarlet oak (Fig. 5). In other cases, the proportion of the biomass of eastern white pine (PIST), the least ecologically desirable, increased significantly.
The regression analysis showed that when the allowed maximum biomass of eastern hemlock decreased, the Shannon diversity index increased significantly (by about 1.7%), but the proportion of species belonging to Ecological Class A decreased significantly (by about 62%) and that of the early-successional Ecological Class C increased significantly (by about 42%) (see Appendix S1: Table S11).

Average and the variation of age.-Ecozones with even-age tree distributions were especially sensitive to fire and the Aggressive prescription, especially the Xeric forest. Fire significantly reduced the average age of trees by about 38% and its variation by about 18%. With fire, the Aggressive prescription further reduced the standard deviation of tree age significantly, by about 44% (see Appendix S1: Section 9 for regression analysis details). Results showed that the Xeric forest ecological group, ecozones 5 ("Pine-Oak Heath"), 6 ("White Pine-Oak Heath"), and 11 ("Shortleaf Pine"), all consisting of large blocks of even-aged forest (USDA Forest Service at North Carolina 2011), were especially sensitive to fire: the average and standard deviation of age were much lower with fire. In the case of no fire, the ages of the forest were particularly sensitive to the Aggressive prescription (Fig. 6).

Implications for conservation policy

The results showed that the aboveground live biomass carbon transferred to the harvested sawlog was a significant carbon sink. Harvested sawlog production serves as an indicator of the long-term carbon sequestration potential of the forest under different management scenarios. Carbon loss in the live biomass was transferred to the harvested sawlog pool, of which a small portion can remain for between 60 and 100 yr (Dymond et al. 2012, Hoover et al. 2014). In the case when there was no fire, the forest served as a net carbon sink. In this study, when there was no fire, the amount of carbon that was transferred to the long-term carbon pool was not that large (about 6%), and it would eventually be released to the atmosphere at the end of its useful life. However, in the scenarios where fire occurred, the whole forest landscape became a carbon source. In this study, the amount of carbon that was transferred to the long-term carbon pool ranged from about 13% in the Moderate prescription to about 52% in the Aggressive prescription. Compared to the scenario where no harvest had taken place and the total carbon loss in the forest aboveground live biomass was released to the atmosphere, the harvested sawlog could at least serve as a long-term carbon pool that defers the release of carbon to the atmosphere for some time. This is an important component of more accurate carbon accounting. Although intensive roundwood harvest would bring more income to forest landowners and provide a larger potential long-term storage of carbon, the significant decreasing trend in the volume of harvested sawlog per hectare under a prescription similar to the Aggressive prescription showed that such harvest intensity may not be sustainable over the long term. The annual average harvest volume per hectare of the whole area ranged between 2.60 and 3.45 m³·ha⁻¹·yr⁻¹ for pulpwood and between 3.62 and 4.95 m³·ha⁻¹·yr⁻¹ for hardwood (Table 3). The results are similar to the finding in Ling et al. (2016), where the harvested roundwood volume per hectare of forestland in the same region between 1986 and 2009 was about 0.08 to 7.35 m³·ha⁻¹·yr⁻¹.
Although the harvest volume per hectare is similar for both prescriptions, the harvested areas for the Aggressive prescriptions were greater than those in the Moderate prescriptions. Also, the results in this study, as well as other field studies conducted in the area, showed that the density and the basal area of trees decreased under repeated intensive harvest (Leopold et al. 1985, Elliott and Swank 1994). Conserving eastern hemlock is very important in preserving a species composition similar to the current one in the GRD. The simulation of the effect of the hemlock woolly adelgid (HWA) infestation found that with HWA infestation, eastern hemlock would be replaced by less desirable, deciduous species (Small et al. 2005, Brantley et al. 2013, Birt et al. 2014). Such drastic changes in species composition could exert a strong negative impact on the hydrological cycle (Brantley et al. 2013), on non-tree species diversity (Small et al. 2005), and on the carbon and nitrogen cycles (Nuckolls et al. 2009). Furthermore, the replacement tree species that became dominant, American beech and scarlet oak, are both deciduous. (Fig. 5 caption: species biomass distribution after 100 yr of simulation for the three cases of altering the eastern hemlock parameters in the management scenario with no fire and the Aggressive prescription; each management scenario was simulated five times to account for the between-run variability of the model; error bars represent three standard deviations from the mean biomass of each species calculated from the five simulations; green bars indicate the three most ecologically preferred species and red the least ecologically preferred species.) Not only would that affect the growth of other understory vegetation, but it could also affect the temperature of runoff to the streams, affecting the insect population and the aquatic ecosystem (Snyder et al. 2002, Ross et al. 2003). In addition, the results showed that moderate harvest and letting fire occur, especially in ecozones that belong to the "Northern hardwood" ecological group, would provide a forest habitat with increased diversity in tree species and ages. As a result, that can be expected to provide more niches for a greater variety of species. However, the amount of roundwood harvested and biomass carbon would be significantly lower. The lack of harvest and planting lowered the diversity of the forest, but the Aggressive prescription that included clear-cutting also decreased the diversity, because both management methods resulted in uniform stand ages. The impacts of those changes on the diversity of herbaceous species would vary depending on the ecozone (Gilliam et al. 1995, Gilliam 2002). These trade-offs have been reported in a species-rich area in Oregon (Creutzburg et al. 2017). Xeric forest in the area was sensitive to the fire regime, as recognized by the Nantahala and Pisgah National Forests Land Resource Management Plan (National Forests in North Carolina and USFS 2016). The results of this study agree with their finding: the three Xeric forest ecozones Pine-Oak Heath (ecozone 5), Shortleaf Pine-Oak (ecozone 6), and Shortleaf Pine (ecozone 11) were especially sensitive to fire (Fig. 6). Fire has also been reported to promote high levels of Table Mountain pine and pitch pine (Lafon et al. 2007), but it also decreased both the average and the standard deviation of tree age in those ecozones by about 50% (see Appendix S1: Table S8).
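Many of the management effects quoted above (for example, the roughly 3.2%, 53%, 38%, 44%, and 50% changes) come from regressions in which the dependent variables were log-transformed and fire or prescription entered as indicator variables. A minimal sketch of how such a percentage effect is typically recovered from an indicator coefficient is shown below; the data are hypothetical, and the back-transformation 100 * (exp(b) - 1) is the conventional reading of a log-linear model rather than a detail reported by the authors.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical scenario-level data: log-transformed response, 0/1 indicators
    # for fire and the Aggressive prescription, plus ecozone area as a control.
    df = pd.DataFrame({
        "log_shannon":  np.log([2.40, 2.52, 2.28, 2.61, 2.45, 2.35, 2.58, 2.49]),
        "fire":         [0, 1, 0, 1, 0, 1, 0, 1],
        "aggressive":   [0, 0, 1, 1, 0, 0, 1, 1],
        "ecozone_area": [120, 80, 150, 60, 95, 110, 70, 130],
    })

    fit = smf.ols("log_shannon ~ fire + aggressive + ecozone_area", data=df).fit()
    b_fire = fit.params["fire"]
    # With a log response, an indicator coefficient b corresponds to roughly a
    # 100 * (exp(b) - 1) percent change on the original scale.
    print(f"Estimated fire effect: {100 * (np.exp(b_fire) - 1):.1f}% change in the Shannon index")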
Limitations

A simplified fire simulation approach was used for the management scenario analysis in order to avoid introducing unnecessary model complexity that would be beyond the scope of this paper. The parameters used in the fire simulation were estimated from existing literature and may not be accurate, and the Landis-II Base Fire extension is based on simple assumptions, such as the relationship between fire intensity and time since last fire, and does not account for fuel types and their spatial pattern (Sturtevant et al. 2009). (Fig. 6 caption: age distribution for each ecozone for the initial condition and for scenarios with different fire suppression status: (a) fire present; (b) no fire. Each number represents an ecozone: "Northern Hardwood" (1), "Spruce-fir" (7), and "High-Elevation Red Oak" (8) belong to the "Northern hardwood forest" group; "Mesic Oak-Hickory" (2), "Acidic Cove" (4), "Rich Cove" (10), and "Dry Oak-Hickory" (12) belong to the "Mesic forest" group; and "Oak Heath" (3), "Pine-Oak Heath" (5), "White Pine-Oak Heath" (6), and "Shortleaf Pine-Oak" (11) belong to the "Xeric forest" group. The ecozones belonging to the "Northern hardwood forest" group are located in high-elevation environments, while the ecozones belonging to the other two ecological groups are located in low-elevation environments (Simon et al. 2005).) This study simulated scenarios with fire and without fire. However, some fire does currently occur in the GRD. Although only one percent of the prescribed burns escaped, the Forest Service does undertake some low- to medium-density burning to thin out leaves and woody debris (USDA Forest Service 2018). Future studies using the Dynamic Fire extension of the Landis-II model, which accounts for fuel types, topography, and weather, may give a better quantification of the impact of climate change, but additional parameters will be required. The impact of climate change was not modeled. In this study, temperatures and atmospheric CO₂ concentration were assumed to fluctuate within the range of the past 50 yr. Any increase in these values may increase tree growth, and possibly fire size and frequency (McKenzie et al. 2004, Westerling et al. 2006, Littell et al. 2010). This will affect not only the species composition (Remy et al. 2019) and the feedback between vegetation and fire, but also the economic values of roundwood and hence the choice of species to harvest and plant. Climate change affects the supply of the types of roundwood, and hence its pricing. A detailed analysis is needed to accurately model the impact of climate change on the harvested species pattern. Our study presented the possible impact of the scenarios without climate change, which can serve as a foundation for future research investigating the effect of climate change on fire, species availability, and harvested roundwood. Thinning was not simulated in this study, although it is a very common practice that allows the trunk-wood of desirable trees to increase faster (Keyser and Brown 2014). Accounting for the carbon flux to the harvested wood product pool from thinning, and the impact of climate change coupled with different management regimes in a species-rich area, could be included in a future study. In the model projection of 100 yr, there may be some uncertainties in the model that were not accounted for during the process of data collection, calibration, and projection (Sklar and Hunsaker 2001).
Although sensitivity analyses, calibration, and validation were done to account for uncertainties, the statistical analyses in the study compared the results in each management scenario only with those in its corresponding reference scenario. One of the limitations of this study is that we used a number of different indicators with different units. It can be more direct to evaluate ecosystem services after converting each into the same unit, but this often involves additional complex analyses or is infeasible. In such cases, reporting the outcomes in biophysical terms is often preferable (Guerry et al. 2015). Statistical analyses that account for the spatial heterogeneity allowed the evaluation of the impact of each management scenario with useful insight for comparison.

CONCLUSIONS

For our study of a region in the southern Appalachians, the results showed that the presence of both eastern hemlock and fire has an important influence on the ecosystem. Despite all the limitations, the simplified fire model could provide a quantifiable impact assessment of fire restoration, achieving the goal of letting forest managers understand the impact of allowing some fire to return to the landscape. The results of this study distinguished the different types of harvested roundwood, which, though important for both carbon accounting and consideration of the potential income of forest owners, often are not included in forest succession dynamics or ecosystem service trade-off studies. The results also suggested that some level of harvesting and burning would be beneficial to overall long-term forest health while satisfying the demand for roundwood. The methodologies used in this study could be applied to different places to assess long-term ecosystem service trade-offs and to assist management decisions. The restoration project of the GRD planned to restore natural fire, especially in certain ecozones where forest health had been negatively impacted by years of fire suppression (USDA Forest Service at North Carolina 2011). The results showed that for ecozones that suffered from a lack of quality early-successional habitat and from even-aged trees, such as the Pine-Oak Heath (ecozone 5), restoring fire would have a large impact on the average and standard deviation of forest age (Fig. 6). The results of this study also showed that some level of sawlog harvest helped defer the release of some carbon that would otherwise have been released to the atmosphere immediately in the case of fire.
2020-06-25T09:06:27.999Z
2020-06-01T00:00:00.000
{ "year": 2020, "sha1": "daa398f000bcdf90290fd6dec8f25fbb03ff3eb0", "oa_license": "CCBY", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/ecs2.3150", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "23cddf229fdba4fea0c129a8ca3d0cc7beb0db5a", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Geography" ] }
8153986
pes2o/s2orc
v3-fos-license
Evaluation of antispasmodic activity of different Shodhit guggul using different shodhan process According to ayurvedic texts shodhan vidhi is an important process which enhances the biological activity of a compound and reduces the toxicity at the same time. Before incorporating into formulations, guggul is processed using Shodhan vidhi involving different shodhan dravyas like gulvel, gomutra, triphala, dashmul . We have evaluated the antispasmodic activity of guggul on ileum of guinea pig and Wistar rats. The animals were sacrifi ced and ileum tissue of guinea pig and rat was isolated and tested for antispasmodic activity using different spasmogens like acetylcholine, histamine and barium chloride. It was observed that the different shodhit guggul ( shudha guggul ) i.e. processed using different shodhan vidhi , showed good antispasmodic activity as compared to Ashudha guggul . When acetylcholine was used as spasmogen, gulvel and triphala shodhit guggul showed good antispasmodic activity than other shodhit guggul. Thus shodhan vidhi enhances the therapeutic properties of guggul . simple, reproducible and effi cient reverse phase high performance liquid chromatographic method developed for simultaneous determination of valsartan and hydrochlorothiazide in tablets. A column having 200 × 4.6 mm i.d. in isocratic mode with mobile phase containing methanol:acetonitrile:water:isopropylalcohol (22:18:68:2; adjusted to pH 8.0 using triethylamine; v/v) was used. The fl ow rate was 1.0 ml/min and effl uent was monitored at 270 nm. The retention time (min) and linearity range (µg/ml) for valsartan and hydrochlorothiazide were (3.42, 8.43) and (5-150, 78-234), respectively. The developed method was found to be accurate, precise and selective for simultaneous determination of valsartan and hydrochlorothiazide in tablets. changing its position. The percentage RSD was found to be within the limits. The recovery study was carried out at two levels, 50% and 100%. The complete validation parameters are shown in Table 2. Hence the developed HPTLC technique is simple, precise, specific and accurate, statistical analysis proved that the method is repeatable and selective for simultaneous analysis of rabeprazole and itopride hydrochloride as bulk drugs and in pharmaceutical dosage forms without any interference from the excipients. ACKNOWLEDGMENTS The authors are grateful to M/s SNR and Sons Charitable Trust, Coimbatore, India, for providing the facilities to carry the experiment and Grandix using Shodhan vidhi involving different shodhan dravyas like gulvel, gomutra, triphala, dashmul. We have evaluated the antispasmodic activity of guggul on ileum of guinea pig and Wistar rats. The animals were sacrifi ced and ileum tissue of guinea pig and rat was isolated and tested for antispasmodic activity using different spasmogens like acetylcholine, histamine and barium chloride. It was observed that the different shodhit guggul (shudha guggul) i.e. processed using different shodhan vidhi, showed good antispasmodic activity as compared to Ashudha guggul. When acetylcholine was used as spasmogen, gulvel and triphala shodhit guggul showed good antispasmodic activity than other shodhit guggul. Thus shodhan vidhi enhances the therapeutic properties of guggul. Healthy Wistar Rats weighing 150-200 g or guinea pigs (weighing 300-500 g) were procured for the study (Haffkines Institute, Mumbai). Animals of either sex were fasted 24 h before the study. Then the animals were sacrifi ced to isolate the ileum pieces. 
In case of rats, ether was used as anaesthetic agent, until death and guinea pigs were sacrificed by stunning or exsanguination as per CPCSEA recommended guidelines. Two methods were used to evaluate the Antispasmodic activity. The in vitro method was performed using rat or guinea pig ileum using matching and interpolation method. Guinea pig ileum was used for acetylcholineand histamine-induced contractions as it is very sensitive to both whereas rat ileum was used for barium chloride-induced contractions. The abdominal cavity was quickly opened and a piece of ileum was isolated. It was placed in a beaker containing Tyrode solution maintained at 37°C. Tissue was cut into pieces of 2-3 cm in length. The distal piece was most preferred and used, being the most sensitive to different spasmogens. The remaining study was carried out on an assembly for isolated tissue i.e. Sherrington revolving drum and an organ bath. The spasms were induced using acetylcholine, histamine and barium chloride. Submaximal doses of acetylcholine, histamine and barium chloride were selected and different doses of different shodhit guggul (like gulvel, dashmul, triphala, gomutra shodhit, ashodhit) were administered. The responses were recorded on the Sherrington recording drum. The same protocol was followed for different processed gugguls to evaluate their respective antispasmodic activity. A dose response curve was obtained and then percent inhibition of the contractions produced by submaximal Guggul, Commiphora mukul (Indian bdellium Tree) is a small tree or shrub with spinescent branches. It is a gum resin obtained from incision of the bark. Ayurvedic literature is full of praises for guggul and its divine action 1 . The explanation of word guggul is Gunjo vyadhegurdti rakshati which means to give relief against different diseases. Indians know guggul from ancient age as it is used in the treatment of many diseases 2 . Guggul is a resin and hence before incorporating it into different formulations it is required to be purified and detoxified. This process is called as Shodhan vidhi. Shodhan vidhi is important to remove unwanted and harmful effects of the resin and to increase useful therapeutic effects. Guggul is purifi ed by two processes, Samanya shudhi (general detoxifi cation) and Vishesh Shudhi (special detoxification). Vishesh shodhit guggul is purified using special substances (vishesh dravyas) like gulvel, dashmul, triphala and gomutra. The present study has been carried out using guggul processed using different Shodhan vidhi, to evaluate its antispasmodic activity on guinea pig and rat ileum. Guggul was manufactured and supplied by Shri Dhootpapeshwar Ltd., Panvel, India. In order to compare the activity of various processed guggul with a marketed preparation, Entostal was used as a standard drug, which is a polyherbal, ayurvedic preparation used as antispasmodic marketed by Om Pharmaceuticals Ltd, Bangalore. Shodhan vidhi was carried out according to ayurvedic texts. First Samanya shodhan was done using distilled water and then Vishesh shodhan was done using different Shodhan dravyas 3 . Study was performed using healthy Wistar rats and guinea pigs of average weight of either sex. Approval for the use of animals was obtained from the of vagal stimulation in the body. So mostly all endogenous colic pain like biliary, gastrointestinal, ureter arise due to such acetyl choline-induced spasms. 
dose of the spasmogen was reported. Results are expressed as mean±standard error of the mean (SEM), n=4, Student t-test. Histamine-induced spasms are mediated by H1 receptor activation, which is characteristic of allergy-producing substances leading to abdominal pain, e.g. lead poisoning, uremia, excessive gastric acid or even bile secretion. Barium chloride-induced spasms are not mediated by any receptor but by increased Ca++ channel entry due to the spasmogen or by increased phosphodiesterase activity leading to calcium channel activation. Percent inhibition of the spasms induced by the different spasmogens was taken as the parameter to evaluate the antispasmodic activity. Various processed gugguls, namely gulvel, dashmul and triphala shodhit guggul, showed considerable antispasmodic activity against spasms induced by all three different spasmogens used. This indicates that processed guggul possessed antispasmodic activity against spasms of different origin. This explains the possible utility of processed gugguls for a variety of spasms. Ashodhit guggul failed to inhibit the spasms. Results indicated that Ashudha guggul was not effective in inhibiting the spasms induced by acetylcholine, histamine or barium chloride (Table 1, fig. 1). On the contrary, processed gugguls inhibited the spasms induced by these spasmogens. Gulvel shodhit guggul showed maximum inhibition of acetylcholine-induced spasms whereas dashmul shodhit guggul showed maximum inhibition of histamine-induced spasms. Triphala shodhit guggul showed maximum inhibition of barium chloride-induced spasms. Gulvel shodhit, triphala shodhit and dashmul shodhit guggul have considerable inhibitory activity against spasms induced by different spasmogens compared to the other processed gugguls. Entostal, the marketed polyherbal formulation, also exhibited a highly significant inhibitory activity, but only against acetylcholine-induced spasms. It produced negligible inhibitory activity against barium chloride-induced spasms. When the different processed gugguls were subjected to the charcoal transit test along with Entostal, most of the vishesh shodhit gugguls showed significantly lower charcoal transit, indicating that processed guggul definitely exhibited better antispasmodic activity than ashodhit or even samanya shodhit guggul (Table 2, fig. 2). It also showed higher activity than Entostal, which is a marketed antispasmodic preparation. The effect of the different processed gugguls was noted on the length of the intestine travelled by orally administered charcoal in an in vivo test. This was expressed as the percentage of the small intestine length reached by the lower limit of the charcoal. The animals were kept fasting for 24 h before the experiment. Ashodhit guggul and the different processed gugguls were fed orally 30 min prior to the oral administration of the charcoal meal. A 10% w/v suspension of animal charcoal in 5% gum acacia was used. 0.5 ml of this suspension was administered orally per mouse irrespective of weight. After 20 min mice were sacrificed by severing the carotids. The stomach and intestine were excised from the gastro-esophageal junction to the ileocecal junction. The distance the charcoal meal travelled from the pylorus was measured. As the intestines of the mice used were all of similar length, it was considered justifiable to use the distance travelled by the charcoal meal as an index of intestinal transit. In this way, the intestinal transit was measured for different groups of mice. Properties of guggul have been described as hridya, medoghna and mehaghna 4 .
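As a rough illustration of the two readouts described above, the sketch below shows how percent inhibition of a spasmogen-induced contraction and percent charcoal transit are conventionally calculated; the numbers are hypothetical, and the exact formulas used by the authors are an assumption.

    # Percent inhibition of a spasmogen-induced contraction (isolated ileum):
    # the contraction with the test material present is compared against the
    # control contraction produced by the submaximal dose of spasmogen alone.
    control_contraction = 42.0   # hypothetical contraction height, acetylcholine alone
    with_test_material = 13.0    # hypothetical contraction height with processed guggul
    percent_inhibition = 100.0 * (control_contraction - with_test_material) / control_contraction
    print(f"Percent inhibition: {percent_inhibition:.1f}%")

    # Charcoal meal transit (in vivo): distance travelled by the charcoal front
    # from the pylorus, expressed as a percentage of the small intestine length.
    distance_travelled_cm = 28.0       # hypothetical
    small_intestine_length_cm = 52.0   # hypothetical
    percent_transit = 100.0 * distance_travelled_cm / small_intestine_length_cm
    print(f"Charcoal transit: {percent_transit:.1f}% of small intestine length")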
The two important pharmacological properties of guggul are antiinflammatory action 5 and antihypolipidemic action 6 . Ayurvedic physicians most widely prescribe guggul formulations for the treatment of arthritis. Guggul is also used for healing bone fractures and infl ammations, in cardiovascular disease, obesity and lipid disorders. Spasms are very common in human beings. Spasms are continuous smooth muscle contractions, may be induced due to endogenous acetyl choline and histamine. They can lead to discomfort, uneasiness and could result into irritation and infl ammation of the gastrointestinal tract posing a major health problem to the human being. It could even lead to threatening conditions such as gastritis and infl ammatory bowel disorders. Antispasmodics are used to treat such conditions successfully, though they show various side effects such as dry mouth, narrow angle glaucoma, tachycardia, obstructive disease of GI tract. Acetyl choline-induced spasms are due to muscarinic M 3 receptor activation, which is a characteristic histamine H 1 receptors or it may be simply inhibiting phosphodiaesterase enzyme and consequently inactivating Ca ++ channels responsible for spasms. The shodhit guggul can be thus explored for its activity as antispasmodic agents. Also, Ashodhit guggul exhibited very weak antispasmodic activity, indicating that the process of Shodhan vidhi is very important to increase the therapeutic activity of a drug. So while manufacturing of ayurvedic medicines, this concept of Shodhan vidhi must be considered for safer and better utilization of therapeutic activity of a drug. When Entostal was evaluated for its antispasmodic activity, it exhibited inhibition of only Ach induced spasms indicating anticholinergic activity against different spasmogens indicating its narrow range antispasmodic activity. As guggul processed by different ways exhibited antispasmodic activity both by in vitro and in vivo methods and against different spasmogens used; further exploration of antispasmodic activity would help guggul establish itself as a valuable antispasmodic in therapeutics. 2'-(1H-tetrazol-5yl)[1,1'-biphenyl]-4-yl]methyl]-L-valine, is an orally active specifi c angiotensin II receptor blocker effective in lowering blood pressure in hypertensive patients 1 . A number of high performance liquid chromatographic (HPLC) methods are available for separation and quantifi cation of valsartan from pharmaceutical dosage forms 2 . Hydrochlorothiazide is a diuretic of the class of benzothiadiazines widely used in antihypertensive pharmaceutical formulations, alone or in combination with other drugs, which decreases active sodium reabsorption and reduces peripheral vascular resistance 3 . It is chemically 6-chloro-3,4-dihydro-2H-1,2,4-benzothiadiazine-7-sulfonamide-1,1-dioxide, and was successfully used as one content in association with other drugs 4-9 in the treatment of hypertension. Simultaneous determination of both drugs is highly desirable as this would allow more effi cient generation of clinical data and could be more cost-effective than separate assays. There are very few methods appearing in the literature for the simultaneous determination of valsartan and hydrochlorothiazide in tablets. Since these methods were based on HPLC and UV-derivative spectrophotometry 10-11 , the procedure was inconvenient for determination and the run times were rather long. 
The aim of this study was to develop a simple, precise and accurate reverse-base high performance liquid chromatographic method to estimate valsartan and hydrochlorothiazide in tablets. This method was simple and rapid and provides accurate and precise results, as compared with other
2018-04-03T01:52:17.550Z
2008-05-01T00:00:00.000
{ "year": 2008, "sha1": "1b8dfee898106217a520eea82bfa8ac8c1365287", "oa_license": "CCBY", "oa_url": "https://europepmc.org/articles/pmc2792506", "oa_status": "GREEN", "pdf_src": "ScienceParsePlus", "pdf_hash": "4c1a6ea360d6bcbd9c3d4af620802d2d139cecf0", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine", "Chemistry" ] }
246065510
pes2o/s2orc
v3-fos-license
A novel machine learning scheme for face mask detection using pretrained convolutional neural network Corona virus 2019 (COVID-19) erupted toward the end of 2019, and it has continued to be a source of concern for a large number of people and organizations well into 2020. Wearing a face cover has been shown in studies to reduce the risk of viral transmission while also providing a sense of security. Be that as it may, it isn't attainable to physically follow the execution of this strategy. This proposed system is built by pretrained deep learning model, Vgg16. The proposed scheme is easy to implement and use all the layers in vgg16 model and train only the last layer called fully connected layer, which reduce the training time and effort. The proposed scheme is trained and evaluated using two Face mask datasets, one having 1484 pictures and the other with 7200. For a smaller dataset, augmented pictures were utilized to enhance accuracy. The suggested model is tested on unknown pictures, and it correctly predicts whether the image is wearing a mask or not. The proposed scheme gives accuracy 96.50% during testing in small dataset. The model gives accuracy in medium dataset is 91% during testing. By using vgg16 pretrained model and image augmentation in the dataset improves performance and gives a high accuracy. Introduction The COVID-19 corona virus pandemic is wreaking havoc on the world's health. Using a mask in public places and in congested regions is the most effective COVID-19 prevention strategy. In these places, personally monitoring individuals is quite tough. For face mask identification, a hybrid model combining deep and conventional machine learning will be presented [1]. Since the outbreak of the Covid-19 epidemic, substantial progress in the fields of image processing and computer vision has been made in the identification of face masks. Several methods and strategies have been used to construct several face detection models. Due to the unexpected appearance of the COVID-19 epidemic, different facial recognition technologies is currently being used on persons wearing masks. Confront detectors face a difficult job in detecting face masks [2] (see Figs. 1,2,3,4,5,6,7,8,9,10,11 and 12). Corona virus (COVID-19) is a corona virus family member. This epidemic brings the entire globe to a halt and triggers a worldwide economic downturn. Face recognition software is used to manage complicated images. To extract the face region from the full image, the face detection method is used first. Face detection is carried out using a color-based algorithm. After face localization, a face mask detection technique is used to determine whether or not a face is wearing a mask [3]. The challenge of identifying whether a person is wearing a face mask from speech is useful in forensic investigations, surgeon communication, and persons protecting themselves against contagious illnesses like COVID-19. For mask identification from speech, we offer a unique data augmentation approach [4] (see Tables 1 and 2). Face Mask Detection Platform recognizes whether or not a user is utilizing a machine learning or deep learning scheme to wear a mask. Face masks have replaced conventional methods of protection during the pandemic, as they are efficient in stopping viral transmission. Several industrialized and developing countries across the globe have made it necessary for people to wear masks while they are at home or in public areas. 
Because of the fast spread of the corona virus, the globe is facing a major health disaster, COVID-19. Using a mask in public places and in congested regions is the most effective COVID-19 prevention strategy. Because personally monitoring individuals in these places is extremely difficult, Face Mask Detection serves a critical function. A transfer learning scheme is provided to computerize the process of detecting people who are not wearing masks.

Literature review

Nanette et al. (2020): the Covid-19, commonly known as serious acute respiratory disorder, is a disease that causes serious respiratory problems. This disease is an infectious illness spread by respiratory droplets from an infected unwell person who talks, sneezes and coughs. It spreads fast through close contact with infected people or by touching contaminated goods or surfaces. There is presently no vaccination available to defend against Covid-19; therefore avoiding infection appears to be the only way to guard ourselves. In public, wearing a facemask conceals the nose and mouth. As technology has evolved, deep learning has proven its usefulness in image processing, detection and classification. Deep learning approaches are used for facial recognition and for determining whether or not a person is wearing a facemask. The trained model achieved a 96% accuracy rate on the gathered dataset. A Raspberry Pi-based real-time facemask identification system alerts and records the facial image if the individual detected is not wearing a facemask [5]. Anushka et al. (2020): the corona virus outbreak has had a devastating impact on the entire planet. According to the World Health Organization, wearing a mask is now required to prevent the transmission of the infection, among other things. To prevent acquiring the fatal illness, everyone in the country prefers to live a healthy lifestyle by wearing a mask in public meetings. Recognizing faces wearing masks is a difficult task since there are few datasets available that include both masked and unmasked pictures. A layered Conv2D model for detecting facial masks is quite effective. Gradient descent was used for training and binary cross entropy as the loss function to build this scheme, which is a stack of 2-D convolutional layers with ReLU activations and max pooling. The model was trained using a combination of two datasets. A 95% validation/testing accuracy was achieved [6]. Jiang and Fan (2020): corona virus illness had a major impact on the world in 2019. Wearing masks in public places is one of the most common ways for individuals to protect themselves. Many public service providers only allow clients to utilize their services provided they appropriately wear masks. Nevertheless, there are just a few research works on image analysis-based face mask identification. Retina Face Mask is a face mask detector with great accuracy and efficiency. Retina Face Mask is a one-stage detector that uses a feature pyramid network to blend high-level semantic data with numerous feature maps, as well as a novel context attention module to identify face masks. A cross-class object removal method is used to reject predictions with low confidence and a high intersection over union [7]. Chen et al.
(2020), since its huge outbreak during December 2019, the Covid-19 have increased all over the globe, causing a tremendous loss to the entire world. Wearing masks is a simple and efficient way to stop it from spreading at the source. Face masks are typically used infrequently but for a short period of time. We propose a detection technique based on the mobile phone to address the problem of not knowing which service stage of the mask belongs to. Four characteristics may be extracted from the GLCMs of the face mask micro-photos. The KNN algorithm is then used to create a three-result detection system. Validation studies reveal that the system can achieve an accuracy of 82.87 percent on the testing dataset [8]. Ejaz et al. (2019), principal component analysis was applied to mask and non-mask face identification. In today's world, the phrase ''security" is crucial. Face recognition is extensively used in biometric technology to protect any system since it is superior to other conventional approaches such as PIN, password, fingerprint, and so on, and it is the most dependable way to identify or verify a person effectively. These kinds of masks have an impact on the accuracy of facial recognition. Many non-masked facial recognition algorithms have recently been created that are extensively utilized and provide higher performance. PCA is a more successful and effective statistical approach that is extensively used [9]. Meenpal et al. (2019), in image processing and computer vision, face detection has become a highly popular problem. Convolutional architectures are being used in many new algorithms to make them as accurate as feasible. These convolutional designs have allowed even pixel information to be extracted. The method for creating accurate face segmentation masks from any image of any size. Fully Convolutional Networks are used in the training to semantically separate out the faces in the image. The FCN's output picture is cleaned up to eliminate unnecessary noise and prevent erroneous predictions [10]. Zhang et al. (2019), Face identification and alignment in an unrestricted environment are difficult problems to solve, yet they are necessary for preserving traffic tidy and public security. The enhanced multi-task cascaded convolutional networks to achieve finest face area recognition and attribute alignment of the driver face on the road, and to forecast face and attribute placement using a coarse to fine pattern. In addition, ITS-MTCNN approach proposes an enhanced regularization method as well as an effective online hard sample mining methodology. In the self built driver face database, the training scheme and divergence experiment are run. Finally, comparison studies display the efficacy of the ITS-MTCNN scheme [11]. Wu et al. (2019), for accurate face identification, significant representations for describing face appearance were required. Current detectors, especially based on convolutional neural networks applies functions like convolution to any native areas on every face for attribute aggregation and consider all native options to be uniformly effectual for the recognition task. Specified a proposal where part-specific attentions sculptural by learnable gaussian kernel that will be used to look for correct native area placements and scales in order to dig out dependable and revealing facial element alternatives. Later, using LSTM, face specific notice is used to model relationships amid native components and modify their contributions to detection tasks. 
To illustrate the efficiency of our hierarchic attention and construct comparisons with progressive methods, we conducted in-depth tests on three tough face detection datasets. Vallimeena et al. (2019), CNN algorithms have been increasingly popular in recent years for a variety of computer visionbased applications, such as disaster management systems that use crowd-sourced pictures. Flooding is a common natural catastrophe that poses a hazard to human lives and property. Flood photos with individuals recorded by smart phone cameras are being used to calculate the depth of the water to determine the amount of damage in flood-affected areas. For these purposes, a variety of CNN algorithms are available. Each one has a different architecture, which has an impact on the accuracy of the results [13]. Set et al. (2019), Deep Metric Learning is used to detect and identify human faces. The approach employed in this work is called deep metric learning, and it combines face identification with human object detection. Real-time pictures or stream movies collected by CCTV or any other video capturing equipment may be used to analyse captured footage. Our approach uses widely known classification algorithms to recognise and identify faces even in blurry pictures or strewn-together movies [14]. Kumar et al. (2019), face recognition for diverse applications rely heavily on multiple face detection and extraction. Support vec-tor machine was utilized for multiple face recognition in the suggested approach, while Discrete Wavelet Transform, Edge Histogram, and Auto-correlogram were used for extracting features. For MFD, suggested technique was tested on two distinct databases: Carnegie Mellon University and BAO. The anticipated scheme performs superior than the existing strategy in this study article. Finally, our accuracy increased to around 90% [15]. Related work The improved viola-jones face detection scheme that is based on Holo Lens [16], which improves the classic viola-jones face detection scheme by relying on Haar like rectangle attribute expansion to improve detection competence and accelerating recognition building using 2D convolution separation and image Resampling techniques. Detecting face masks has become a critical duty in aiding worldwide civilization. The method properly recognizes a face from a picture and then determines if it has a mask on it or not, and it can also detect a face and a mask in motion. To accurately identify the existence of masks without generating over-fitting, a Sequential Convolutional Neural Network model was used [17]. Using the Haar Cascade approach, data extraction and human face detection may be done. The Haar Cascade technique may be used to filter selfie face pictures with a high level of accuracy [19]. Previous models were primarily trained and evaluated on high-quality pictures, which isn't necessarily the case in realworld applications like surveillance systems. While testing on low-quality pictures of various levels, performance degraded [20]. For the opinion-examination of restaurant evaluations and sentence-based opinion summary analysis, they introduced the traditional and VADER opinion mining schemes [20]. Proposed scheme Many of the articles in the aforementioned literature discovered that deep learning and machine learning techniques were utilized to classify whether or not a person was wearing a mask. They train a model from scratch that is computationally costly and prolonged. On real world pictures, the performance fails. 
In this article, data augmentation and a pre trained convolutional neural network scheme named vgg16 are used to extract features and categorize the picture as with or without mask to obtain greater accuracy. This improves the dataset's accuracy while also being simple to apply. The proposed method overcomes the disadvantage of the existing method. Research methodology There are several phases involved in detecting face masks, and below is a graphical depiction of the scheme. Data collection The process of gathering data from kaggle is known as data collection. For Face mask identification, we collected two datasets from the Kaggle website, each containing training, validation, and testing pictures. Image resizing The collection comprises pictures of various sizes. The first step is to resize the image to 224 * 224 * 3, where 3 denote RGB color (red, green, blue). Before sending a picture to the model, every image in the dataset is scaled to 224 * 224 pixels. Image augmentation It is an approach for artificially increasing the range of a training dataset by manipulating pictures within it. The training pictures of a smaller dataset are enhanced in this article, including rescaling, width, height, horizontal flipping, rotating and zooming. Pretrained convolutional neural network In image categorization, a convolutional neural network i.e. Deep Neural Network is utilized. CNN image classification takes an input image and processes it before categorizing it (Wearing Mask or Not Wearing Mask). To categorize pictures with mask and without mask, this study employs one of the pre-trained models -VGG 16 with Deep Convolutional Neural Network. To train quicker and more costeffectively, the vgg16 pre-trained model, also known as transfer learning, was developed. Convolution layer is used in the Vgg16 model to extract the feature map without changing the size of the input and the output of the convolution level is sent to the max pooling level. By extracting just feature from the input, the yield of the convolution layer is provided as input to the Max pooling level, which reduces the picture by half. The vgg16 model extracts features using 13 convolution layers and five max pooling layers. These two layers take features from the input and provide a feature vector as an output. Pass the output to the fully linked layer first that is converted from a feature vector to a 1D array. An output layer that is fully linked is utilized to categorize the pictures. Soft Max is an activation function that is used to determine which class has the greatest likelihood. The Soft Max function is used to classify binary and multiclass data. In this suggested technique, all layers are frozen, and we don't need to train the entire network for our dataset; instead, we may just utilize the data from the prior model. We're not going to use the last layer since we're weary with the pre-created network; instead, we'll use our own custom FCL to train our dataset. We'll add one output layer on top of FCL and then integrate our fully connected stratum with the pre trained network. The Phrase transfer learning refers to describe a scheme that has been previously trained. Transfer learning is a technique for applying previously learned model information to a new task. To categorize pictures as with mask or without mask, this project employs one of the pre-trained models VGG 16 with Deep Convolutional Neural Network. Transfer learning saves a significant amount of computational power. 
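A minimal sketch of the described approach in Keras: load VGG16 pre-trained on ImageNet, freeze its convolutional layers, attach a new fully connected head with a softmax output for the two classes, and feed it augmented 224 x 224 images. The directory layout, layer sizes and training settings below are assumptions for illustration, not details taken from the paper.

    from tensorflow.keras import layers, models
    from tensorflow.keras.applications import VGG16
    from tensorflow.keras.preprocessing.image import ImageDataGenerator

    # Augmentation as described: rescaling, width/height shifts, rotation, zoom,
    # horizontal flip. Assumed layout: data/train/<with_mask|without_mask>/*.jpg
    train_gen = ImageDataGenerator(rescale=1.0 / 255, rotation_range=20,
                                   width_shift_range=0.1, height_shift_range=0.1,
                                   zoom_range=0.2, horizontal_flip=True)
    train_data = train_gen.flow_from_directory("data/train", target_size=(224, 224),
                                               batch_size=32, class_mode="categorical")

    # Pre-trained VGG16 convolutional base with all layers frozen; only the new
    # fully connected head below is trained (transfer learning).
    base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
    base.trainable = False

    model = models.Sequential([
        base,
        layers.Flatten(),
        layers.Dense(128, activation="relu"),     # custom fully connected layer (size assumed)
        layers.Dense(2, activation="softmax"),    # with_mask / without_mask
    ])
    model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
    model.fit(train_data, epochs=10)              # epoch count is illustrative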
All vgg16 levels, excluding completely linked layers, are frozen in the proposed technique. Only the last layer should be retrained. Result and discussion Image augmentation and resizing are performed on two distinct datasets i.e. Face mask dataset and dataset train on the pre trained vgg16 model, also known as transfer learning, which yielded excellent accuracy and required minimal computing time. Face masks dataset that is larger in size which consists of 1484 images. It gives accuracy of 96.50%. Testing model with different images i. Testing the model by giving unknown input images to model and checking whether it is predicting the image accurately as wearing or not wearing mask. ii. In order to spot a face in a image, the Haar cascade is employed. iii. It uses a Haar cascade classifier to read the input image, recognize the face in the image, then crop the only face region in the image. iv. Resizing the input image size as 224 * 224 and using model. predict() method to input image to predict the output. v. The output is shown as wearing a mask or not wearing mask with the given input image. Conclusion In this project, an effective transfer learning VGG16 is used. The main importance of using vgg16 is that it works well with medium and smaller dataset, less computation and no need of training the model from scratch. After analyzing many review papers, we concluded that pretrained vgg16 gives higher accuracy in smaller dataset which is having 1484 images of 96.50% and less computation than any other Trained Deep Learning algorithms. The same work can be improved further by applying this model to realtime face mask detection and fine tuning the model for mediumsized datasets, which entails adding a custom layer to the model to improve accuracy and implementing a facial recognition system that can be deployed at different workplaces to hold up person detection while wearing the mask.
2022-01-21T14:12:49.920Z
2022-01-01T00:00:00.000
{ "year": 2022, "sha1": "63c0ae57d38f98b3b08ad6947ca4458bc1013772", "oa_license": null, "oa_url": "https://doi.org/10.1016/j.matpr.2022.01.165", "oa_status": "BRONZE", "pdf_src": "PubMedCentral", "pdf_hash": "170131a784a086757c93cf70a6049feacf372845", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Medicine" ] }
214680805
pes2o/s2orc
v3-fos-license
The natural history of progressive fibrosing interstitial lung diseases We used data from the INBUILD and INPULSIS trials to investigate the natural history of progressive fibrosing interstitial lung diseases (ILDs). Subjects in the two INPULSIS trials had a clinical diagnosis of idiopathic pulmonary fibrosis (IPF) while subjects in the INBUILD trial had a progressive fibrosing ILD other than IPF and met protocol-defined criteria for ILD progression despite management. Using data from the placebo groups, we compared the rate of decline in forced vital capacity (FVC) (mL·year−1) and mortality over 52 weeks in the INBUILD trial with pooled data from the INPULSIS trials. The adjusted mean annual rate of decline in FVC in the INBUILD trial (n=331) was similar to that observed in the INPULSIS trials (n=423) (−192.9 mL·year−1 and −221.0 mL·year−1, respectively; nominal p-value=0.19). The proportion of subjects who had a relative decline in FVC >10% predicted at Week 52 was 48.9% in the INBUILD trial and 48.7% in the INPULSIS trials, and the proportion who died over 52 weeks was 5.1% in the INBUILD trial and 7.8% in the INPULSIS trials. A relative decline in FVC >10% predicted was associated with an increased risk of death in the INBUILD trial (hazard ratio 3.64) and the INPULSIS trials (hazard ratio 3.95). These findings indicate that patients with fibrosing ILDs other than IPF, who are progressing despite management, have a subsequent clinical course similar to patients with untreated IPF, with a high risk of further ILD progression and early mortality. Introduction Idiopathic pulmonary fibrosis (IPF) is, by definition, a progressive fibrosing interstitial lung disease (ILD) [1]. In addition to IPF, there are a number of other ILDs that may develop a progressive fibrosing phenotype characterised by declining lung function, an increasing extent of fibrosis on high-resolution computed tomography (HRCT), worsening symptoms and quality of life, and early mortality [2][3][4][5]. Along with these clinical similarities, progressive fibrosing ILDs appear to share pathobiological mechanisms that may represent a common fibrotic response to tissue injury [6][7][8][9][10]. Nintedanib is an intracellular inhibitor of tyrosine kinases [10,25]. In the two INPULSIS trials in patients with IPF, nintedanib reduced the rate of decline in FVC (mL·year −1 ) over 52 weeks by about 50% compared to placebo [26]. Recently, the efficacy and safety of nintedanib in subjects with a variety of ILD diagnoses other than IPF, grouped based on the progressive clinical behaviour of their fibrosing ILD despite management deemed appropriate in clinical practice, were investigated in the INBUILD trial. The results showed that, as in the INPULSIS trials, nintedanib reduced the rate of decline in FVC (mL·year −1 ) over 52 weeks by about 50% compared to placebo [27]. We used data from subjects who received placebo in the INBUILD and INPULSIS trials to investigate the natural history of progressive fibrosing ILDs. Specifically, we wanted to compare the clinical course of IPF and other progressive fibrosing ILDs, explore whether specific ILD diagnoses were associated with different rates of progression and investigate whether a relative decline in FVC was associated with mortality in patients with IPF and other progressive fibrosing ILDs. Trial design The two INPULSIS trials and the INBUILD trial were randomised, double-blind, placebo-controlled trials with a 52-week treatment period. 
The trial designs have been described and the trial protocols are publicly available [26,27]. The trials were carried out in compliance with the principles of the Declaration of Helsinki and the Harmonised Tripartite Guideline for Good Clinical Practice of the International Conference on Harmonisation, and were approved by the local authorities. Subjects provided written informed consent before trial entry. In all these trials, the primary endpoint was the annual rate of decline in FVC (mL·year −1 ), assessed over 52 weeks. INPULSIS trials Briefly, subjects in the INPULSIS trials were aged ⩾40 years and had a clinical diagnosis of IPF. To be eligible for inclusion based on an HRCT scan (taken within the previous ⩽12 months), patients had to have a usual interstitial pneumonia (UIP)-like fibrotic pattern defined as meeting criteria A, B and C, A and C, or B and C defined as follows: A: definite honeycomb lung destruction with basal and peripheral predominance; B: presence of reticular abnormality and traction bronchiectasis consistent with fibrosis with basal and peripheral predominance; C: atypical features are absent, specifically nodules and consolidation. Ground glass opacity, if present, was to be less extensive than reticular opacity pattern. Subjects had an FVC ⩾50% predicted and a diffusing capacity of the lung for carbon monoxide (D LCO ) ⩾30% predicted and <80% predicted. There were no inclusion criteria regarding longitudinal disease behaviour. Subjects were randomised 3:2 to receive nintedanib or placebo. INBUILD trial Subjects in the INBUILD trial were aged ⩾18 years and had a fibrosing ILD other than IPF diagnosed by the investigator according to their usual clinical practice. Patients with IPF were actively excluded. For every subject, the investigator documented an ILD diagnosis on the case report form based on the following nine options: iNSIP, unclassifiable IIP, HP, RA-ILD, mixed connective tissue disease-associated ILD (MCTD-ILD), SSc-ILD, exposure-related ILD, sarcoidosis and other fibrosing ILD. Subjects had features of fibrosing lung disease (reticular abnormality with traction bronchiectasis with or without honeycombing) with an extent of >10% on an HRCT scan (taken within the previous ⩽12 months), confirmed by central review, FVC ⩾45% predicted and D LCO ⩾30% predicted and <80% predicted. Subjects had to meet one of the following criteria for disease progression in the 24 months before screening, as determined by the investigator, despite management as deemed appropriate in clinical practice for the individual ILD: 1) a relative decline in FVC ⩾10% predicted; 2) a relative decline in FVC ⩾5% predicted but <10% predicted and worsened respiratory symptoms; 3) a relative decline in FVC ⩾5% predicted but <10% predicted and increased extent of fibrosis on HRCT; 4) worsened respiratory symptoms and increased extent of fibrosis on HRCT. Subjects were randomised 1:1 to receive nintedanib or placebo. Randomisation was stratified according to the fibrotic pattern on HRCT (UIP-like fibrotic pattern or other fibrotic patterns). The criteria used to identify a UIP-like fibrotic pattern on HRCT in the INBUILD trial were the same as the criteria used in the INPULSIS trials. For each subject, the trial consisted of two parts: Part A, which comprised 52 weeks of treatment; and Part B, a variable treatment period beyond Week 52 during which subjects continued to receive blinded treatment until all subjects had completed Part A. 
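As a small illustration of the INBUILD progression criteria listed above, the sketch below classifies a patient as meeting or not meeting protocol-defined progression; it assumes that "relative decline" means the change expressed as a percentage of the baseline FVC % predicted value, which is an interpretation rather than a detail stated here.

    def relative_decline_pct(baseline_pct_pred, current_pct_pred):
        """Relative decline in FVC, as a percentage of the baseline % predicted value."""
        return 100.0 * (baseline_pct_pred - current_pct_pred) / baseline_pct_pred

    def meets_inbuild_progression(baseline_pct_pred, current_pct_pred,
                                  worse_symptoms, more_fibrosis_on_hrct):
        """Apply the four protocol-defined progression criteria (preceding 24 months)."""
        decline = relative_decline_pct(baseline_pct_pred, current_pct_pred)
        return (decline >= 10.0
                or (5.0 <= decline < 10.0 and worse_symptoms)
                or (5.0 <= decline < 10.0 and more_fibrosis_on_hrct)
                or (worse_symptoms and more_fibrosis_on_hrct))

    # Example: FVC falling from 72% to 66% predicted (a relative decline of about 8.3%)
    # together with worsened respiratory symptoms satisfies the second criterion.
    print(meets_inbuild_progression(72.0, 66.0, worse_symptoms=True, more_fibrosis_on_hrct=False))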
Subjects who discontinued treatment were asked to attend all visits as originally planned, including an end-of-treatment visit and a follow-up visit 4 weeks later. The second database lock took place after all patients had completed the follow-up visit or had entered the open-label extension study. The protocol did not allow for use of azathioprine, cyclosporine, mycophenolate mofetil, tacrolimus, rituximab, cyclophosphamide, or oral corticosteroids >20 mg·day −1 at randomisation, but initiation of these medications was allowed after 6 months of study treatment in cases of clinically significant deterioration of ILD or connective tissue disease, at the discretion of the investigator. Analyses The course of ILD was assessed in subjects who received placebo in the INBUILD and INPULSIS trials. The following were used as measures of longitudinal disease behaviour: annual rate of decline in FVC (mL·year −1 ), observed absolute change from baseline in FVC (mL) over time, the proportions of subjects with relative declines in FVC of >5% predicted and >10% predicted at Week 52, and all-cause mortality. To address the question of similarity between IPF and other fibrosing ILDs with a progressive phenotype, data from the placebo group in the overall population in the INBUILD trial were compared with pooled data from the placebo groups of the INPULSIS trials. In addition, the subgroups of subjects with a UIP-like fibrotic pattern on HRCT and with other fibrotic patterns on HRCT in the INBUILD trial were compared with patients with IPF in the INPULSIS trials. To assess whether specific ILD diagnoses were associated with different rates of progression, the course of ILD in the placebo group of the INBUILD trial was assessed in the following five diagnostic groups: iNSIP, unclassifiable IIP, HP, autoimmune ILDs (RA-ILD, SSc-ILD, MCTD-ILD, plus subjects with an autoimmune disease noted in the "Other fibrosing ILDs" category of the case report form) and other ILDs (sarcoidosis, exposure-related ILDs and selected diagnoses from "Other fibrosing ILDs"). Nominal p-values for subgroup-by-time interaction were obtained from tests of heterogeneity across all the diagnostic groups, with no adjustment for multiplicity. The annual rate of decline in FVC (mL·year −1 ) was analysed using a similar random coefficient regression model (with random slopes and intercepts) as was used in the primary analysis of the INBUILD trial, including baseline FVC (mL) and patient population (IPF versus non-IPF) as covariates. The analysis was based on all measurements obtained over the first 52 weeks, including those from subjects who had prematurely discontinued placebo. The model allowed for missing data assuming that they were missing at random. In this paper, we present results that are representative of an "average subject" within the depicted comparison. To evaluate time to death, a log-rank test was utilised and included patient population (IPF versus non-IPF) as a covariate. A Cox proportional hazards model was used to derive the hazard ratio and 95% confidence interval (CI) between the patient populations (IPF versus non-IPF). Categorical relative declines in % predicted FVC were evaluated using a logistic regression model adjusting for the continuous covariate baseline % predicted FVC and for the patient population (IPF versus non-IPF). Adjusted odds ratios (ORs) with 95% CIs were used to quantify the effects within each patient population. 
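As a rough illustration of the primary analysis model described above, the sketch below fits a random coefficient regression (random intercepts and slopes per subject) to long-format FVC data using statsmodels. This is not the trials' actual analysis code, and the dataset and column names (subject_id, weeks, fvc_ml, baseline_fvc_ml, population) are hypothetical placeholders.

```python
# Minimal sketch, assuming a hypothetical long-format dataset with one row per
# subject per visit; not the trials' actual statistical programs.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("fvc_long_format.csv")      # hypothetical file
df["time_years"] = df["weeks"] / 52.0        # express visit time in years

# Fixed effects: time, baseline FVC and patient population (IPF vs non-IPF),
# with a population-by-time interaction so the FVC slope can differ between
# populations. Random effects: per-subject intercept and slope over time.
model = smf.mixedlm(
    "fvc_ml ~ time_years * population + baseline_fvc_ml",
    data=df,
    groups=df["subject_id"],
    re_formula="~time_years",
)
fit = model.fit()
print(fit.summary())  # the time_years terms give the adjusted annual FVC slopes
```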
Subjects with missing data at Week 52 were counted as having relative declines in FVC of >5% predicted or >10% predicted, representing a "worst case" analysis. To explore the question of whether a relative decline in FVC >10% predicted was associated with mortality, we analysed the relationship between a relative decline in FVC of >10% predicted and time to death over 52 weeks in the INBUILD trial and in the INPULSIS trials, as well as using data up to the second database lock in the INBUILD trial. Evaluations regarding the association of FVC decline with mortality were based on a Cox proportional hazards model, where time to FVC decline >10% predicted was included as a time-dependent variable using the programming statements method [28]. The assessment in the overall population in the INBUILD trial also included the stratification variable (UIP-like fibrotic pattern versus other fibrotic patterns on HRCT). No other variables were included in these evaluations. Subjects The baseline characteristics of subjects in the placebo groups of the INBUILD trial (n=331) and INPULSIS trials (n=423) are summarised in table 1. At baseline, mean % predicted FVC was lower in the INBUILD trial than in the INPULSIS trials (69% versus 79%). In the INBUILD trial, the annual rate of decline in FVC was similar across the five pre-specified groups by ILD diagnosis; in all the subgroups, subjects with a UIP-like fibrotic pattern on HRCT had a numerically greater annual rate of decline in FVC than those with other fibrotic patterns on HRCT (figure 3). Proportions of subjects who had categorical relative declines in % predicted forced vital capacity The proportions of subjects who had relative declines from baseline in FVC >10% predicted or >5% predicted at Week 52 were similar between the INBUILD and INPULSIS trials (figures 4a and 4b). A relative decline in FVC >10% predicted at Week 52 was observed in 48.9% of the overall population in the INBUILD trial and 48.7% of subjects in the INPULSIS trials. In the INBUILD trial, a relative decline in FVC >10% predicted at Week 52 was observed in a greater proportion of subjects with a UIP-like fibrotic pattern on HRCT than in those with other fibrotic patterns (52.4% versus 43.2%) (figure 4a). A relative decline in FVC >5% predicted at Week 52 was observed in 68.6% of the overall population in the INBUILD trial and 64.5% of subjects in the INPULSIS trials (figure 4b). In the INBUILD trial, the proportion of subjects with a relative decline in FVC >5% predicted at Week 52 was similar in subjects with a UIP-like fibrotic pattern on HRCT and in those with other fibrotic patterns (70.4% and 65.6%, respectively) (figure 4b). Mortality and its association with relative decline of >10% predicted in forced vital capacity Over 52 weeks, deaths occurred in 17 subjects in the INBUILD trial (5.1%) and 33 subjects in the INPULSIS trials (7.8%). Compared with the INPULSIS trials, the proportion of subjects who died over 52 weeks in the INBUILD trial was similar in the overall population and in subjects with a UIP-like fibrotic pattern, and lower in those with other fibrotic patterns on HRCT (hazard ratios versus INPULSIS 0.63, 0.97 and 0.10, respectively). In the INBUILD trial, a relative decline in FVC of >10% predicted was associated with an increased risk of death over 52 weeks in the overall population and in subjects with a UIP-like pattern on HRCT (hazard ratios 3.64 and 3.35, respectively). A similar association was observed in the INPULSIS trials (hazard ratio 3.95) (table 3).
As only one death occurred over 52 weeks in subjects with other fibrotic patterns on HRCT in the INBUILD trial, the hazard ratio could not be calculated. Using data up to the second database lock (median follow-up of approximately 19 months), a relative decline in FVC of >10% predicted was associated with an increased risk of death in the overall population (hazard ratio 3.48) and in subjects with a UIP-like fibrotic pattern on HRCT (hazard ratio 3.64). A similar trend was observed in subjects with other fibrotic patterns on HRCT (hazard ratio 2.88) (table 4). Previous studies have suggested that in patients with fibrosing ILDs the presence of a UIP-like fibrotic pattern on HRCT is associated with more rapid disease progression [29][30][31][32][33]. Consistent with these findings, the rates of FVC decline and mortality over 52 weeks in the INBUILD trial were greater in subjects with a UIP-like fibrotic pattern on HRCT than in subjects with other fibrotic patterns. That said, the rate of FVC decline in subjects with other fibrotic patterns on HRCT was dramatic (160 mL·year −1 ), with almost 50% of these subjects showing a relative decline in FVC >10% predicted over 52 weeks, similar to patients with IPF. Although the INBUILD trial was not designed or powered to study the effects of nintedanib in patients with specific ILD diagnoses, our results suggest that in subjects with a >10% extent of fibrosis on HRCT and clinical signs of progression despite management, the rate of decline in FVC was similar across subgroups with different diagnoses. This suggests that although it is critical that patients receive an accurate diagnosis of ILD at the time of presentation in order to inform optimal management, observation of disease behaviour can identify a population of patients who develop a progressive fibrosing phenotype despite treatment and are therefore at high risk of further progression. Declines in FVC of >10% predicted and >5% predicted have been associated with mortality in patients with IPF [34,35] and other chronic fibrosing ILDs [19][20][21][22]. The subjects enrolled in the INBUILD trial met protocol-defined criteria for progression of ILD in the 2 years before screening. The subjects with IPF enrolled in the INPULSIS trials were not required to meet criteria for disease progression in order to enter the trial, as IPF is by definition a progressive disease [1]. In both the INBUILD and INPULSIS trials, approximately half the subjects in the placebo group had a relative decline in FVC of >10% predicted and two-thirds had a relative decline of >5% predicted over 52 weeks. In the INBUILD trial, a relative decline in FVC of >10% predicted was associated with a more than three-fold increase in the risk of death over 52 weeks, both in the overall population and in subjects with a UIP-like fibrotic pattern on HRCT, which was comparable to what was observed in the INPULSIS trials. Over a longer observation period, this association also became apparent in subjects with other fibrotic patterns on HRCT. These data suggest that, in a similar fashion to IPF, a decline in FVC is associated with an increased risk of early death in patients with non-IPF fibrosing ILDs that have progressed despite management. Previous analyses of the INBUILD trial have shown that the effect of nintedanib versus placebo on the rate of FVC decline was consistent across subpopulations by HRCT pattern [27] and by ILD diagnosis [36]. 
Combined with our current results, these findings provide further support for a progressive fibrosing phenotype in ILD, characterised by progressive decline in lung function and early mortality, irrespective of the underlying ILD diagnosis and fibrotic pattern on HRCT. These findings also support the "lumping" together of patients with progressive fibrosing ILDs for the purposes of investigating certain therapies for these rare diseases, as has previously been proposed [3]. In conclusion, our findings show that the subjects with non-IPF progressive fibrosing ILDs who received placebo in the INBUILD trial had a clinical course similar to patients with untreated IPF, irrespective of the underlying ILD diagnosis or the fibrotic pattern on HRCT. Patients with fibrosing ILDs other than IPF who have shown progression of ILD in the past 24 months, despite management deemed appropriate in clinical practice, are at high risk of further ILD progression and early mortality.
Fatal cardiac air embolism after CT-guided percutaneous needle lung biopsy: medical complication or medical malpractice? Computed tomography (CT)-guided percutaneous needle biopsy of the lung is a well-recognized and relatively safe diagnostic procedure for suspicious lung masses. Systemic air embolism (SAE) is a rare complication of transthoracic percutaneous lung biopsies. Herein, we present a case of an 81-year-old man who underwent CT-guided percutaneous needle biopsy of a suspicious nodule in the lower lobe of the right lung. Shortly after the procedure, the patient coughed up blood, which prompted repeat CT imaging. He was found to have a massive cardiac air embolism. The patient became unresponsive and, despite resuscitation efforts, was pronounced dead. The pathophysiology, risk factors, clinical features, radiological evidence, and autopsy findings associated with SAE are discussed, which may, in light of the current literature, assist with the dilemma of distinguishing between procedural complications and medical liability. Given the instances of SAE in the setting of long operative procedures despite careful technical execution, providing accurate and in-depth information, including procedure-related risks, even the rarest but potentially fatal ones, is recommended for informed consent to reduce medicolegal litigation issues. Introduction Computed tomography (CT)-guided percutaneous needle biopsy of the lung is a diagnostic procedure for suspicious lung lesions. This diagnostic approach is performed in all cases in which assessment of the lesion by bronchoscopy is not possible. CT-guided percutaneous needle biopsy has a sensitivity of 93-98% and a specificity of 98-100% for the diagnosis of malignancy and provides critical information for further clinical and surgical management, particularly with current therapies that can be tailored to pathological features and the patient [1]. The procedure is widely accepted as a safe diagnostic tool since most complications, such as pneumothorax, pulmonary bleeding, and hemoptysis, are uncommon and can usually be treated conservatively with favorable outcomes [2,3]. On the other hand, systemic air embolism (SAE), which occurs when air is introduced into the coronary and/or cerebral arterial vasculature, is a rare but potentially fatal complication. The reported incidence of SAE is 0.08% [4], although the incidence may be higher (reaching almost 5%) due to missed diagnoses in asymptomatic patients [5]. Herein, we report a case of a fatal cardiac air embolism after a percutaneous CT-guided transthoracic needle biopsy performed for a suspicious lung lesion. The clinical, radiological, and pathological findings are presented and discussed. A review of the current literature is also provided, which may aid in solving the dilemma between procedural complications and medical liability.
Case report An 81-year-old man was found to have a 1.6 × 1.1 cm nodule in the posterior-basal segment of the lower lobe of the right lung with associated mediastinal lymphadenopathy on CT imaging of the chest. His medical history was pertinent for a diagnosis of squamous cell carcinoma of the lung as well as an intraductal papillary mucinous neoplasm of the pancreas, both of which had been previously treated surgically. Two months after the lesion was identified on CT, the patient underwent 18F-fluorodeoxyglucose positron emission tomography/computed tomography (18F-FDG PET/CT), which showed hypercaptation (increased uptake) within the nodule and the left bronchial lymph nodes. After discussion among the multidisciplinary team, the patient underwent bronchoscopy with associated endobronchial ultrasound-guided transbronchial needle aspiration (EBUS-TBNA) of the enlarged lymph nodes. However, the cytological analysis was not diagnostic, with no cellular atypia identified within the submitted material. Four months later, a thoracic CT showed that the lung lesion had grown larger (now 2.7 × 1.7 cm). The clinical team decided that a CT-guided percutaneous needle lung biopsy was indicated to diagnose the lesion further. The patient consented to the procedure after a thorough discussion of the associated risks and benefits, along with alternative diagnostic methods, such as surgical lung biopsy and continued imaging surveillance. The procedure was performed in the typical fashion with the patient in the prone decubitus position. Once the lesion was localized under CT, a 17-gauge BioPince-type needle (typically used as a core biopsy instrument) was placed into the lesion on the first pass during a single inspiratory breath-hold (Fig. 1A-C). This needle remained in this position until the end of the procedure, as confirmed with multiple consecutive scans. A coaxial 18-gauge needle was then inserted through the 17-gauge needle to obtain multiple "cutting-type" biopsy samples of the lesion. The insertion of the needle and the biopsies were performed only during inspiratory breath-holding. At the conclusion of the procedure, the 17-gauge needle was removed. At the end of the procedure, the patient coughed, and bright red blood was expectorated. Of note, the patient did not cough or breathe improperly during the procedure while the needles were inserted. Repeat CT imaging of the chest showed slight hemorrhage around the lesion and a massive air embolism within the left ventricular chamber with extension into the aortic root and right coronary artery (Fig. 1D-F). Unfortunately, the patient suddenly became unresponsive shortly after the CT imaging was obtained and experienced cardiac arrest. After 40 min of adequate resuscitation efforts, the patient was pronounced dead in the radiology suite. Autopsy findings Given the circumstances surrounding the death and the necessity to exclude the possibility of medical malpractice, a medicolegal autopsy was performed the following day. Evidence of previous surgical procedures was noted during the examination, including surgical scars and the absence of the upper lobe of the right lung as well as the spleen and distal pancreas. Examination of the abdominal organs did not yield evidence of further natural disease processes. There were also no noteworthy intracranial findings.
Examination of the right lung confirmed evidence of the needle-guided procedure along the pleural surface of the posterior aspect of the lower lobe. Further sectioning of the lung lobe revealed a white-tan ill-defined mass measuring approximately 3 cm within an area of hemorrhage (Fig. 2A). Sections were obtained for microscopic examination, which showed areas of hemorrhage and pneumatosis foci within the lung parenchyma (consistent with the insertion of a needle) adjacent to the tumoral tissue. The tumor was consistent with adenocarcinoma of the lung (Fig. 2B, C). A specific procedure was utilized to confirm the presence of an air embolism within the heart and coronary arteries as indicated on the post-procedure CT scan. The procedure is well documented by Richter [6] and Zolotarov and Fraser [7] and includes filling the pericardial sac with water and inserting a large-bore needle with an attached water-filled syringe into each chamber of the heart and the great vessels while watching for bubbles to enter the syringe. In the present case, bubbles appeared in the syringe barrel when the needle tip was inserted into the left ventricle and the ascending aorta. Further examination of the heart did not reveal any significant structural pathological findings. Based on the available clinical data and autopsy findings, the death was attributed to a massive left cardiac and coronary air embolism (found within the left ventricle and extending into the right coronary artery) following the CT-guided percutaneous needle biopsy of the lung. Discussion In the case reported herein, the medical record review showed that CT-guided percutaneous needle biopsy was an indicated diagnostic procedure considering the medical history of the patient, the radiological findings, and the impossibility of formulating a diagnosis of the lung lesion through EBUS-TBNA. Unfortunately, the patient experienced a systemic air embolism (SAE) that directly led to his death shortly after the procedure. Mortality due to SAE following CT-guided percutaneous needle biopsy of the lung is reported to be approximately 0.0002% in the largest series [8]. To the best of our knowledge and after a thorough review, only seven fatal cases of SAE have been thoroughly described in the literature [9][10][11][12][13][14][15], as summarized in Table 1. While the vast majority of SAEs are undiagnosed in asymptomatic patients, in cases of life-threatening SAE, diagnosis is generally clinically suspected based on the abrupt decline of the neurologic and/or cardiovascular condition of the patient, like that seen in the patient presented here. The prognosis of SAE depends primarily on the quantity of air entering the vascular system. Brain and chest CT scans can provide radiologic confirmation by detecting bubbles (typically seen as a void in the contrast material within the vessel) in the cerebral vascular system, ascending aorta, the left side of the heart, and pulmonary veins [16]. Early treatment of an SAE consists of prompt administration of 100% oxygen and placing the patient in the left lateral decubitus position with lowering of the head in an effort to increase the intracavitary pressure of the left atrium and avoid cerebral embolization of air [17].
From a pathophysiological point of view, arterial air embolism stems from air entering the pulmonary veins during percutaneous needle biopsy of the lung. According to the literature, several different mechanisms may be responsible for air entry into the pulmonary venous system. First, air may enter through a hole in the pulmonary vein caused by the needle (catheter) after removal of the inner stylet, when the pressure gradient between atmospheric pressure and pulmonary venous pressure rises (as is likely during inspiration); in this case, air may enter directly through the catheter. Second, air may be directly injected during the procedure into the pulmonary arterial circulation and then enter the pulmonary veins by crossing the pulmonary capillaries. Lastly, the needle may simultaneously penetrate the pulmonary vein and an adjacent air-containing space (i.e., alveolar space, bronchus, air cyst, cavity), creating a communicating fistula. In the latter case, Valsalva maneuvers can increase the pressure in the air space, resulting in vascular air embolism [5,12]. It is worth mentioning that an air volume of 0.5-1.0 mL is enough to cause cardiac arrest through coronary artery air embolism, and 2.0 mL is enough to cause a fatal stroke through cerebral air embolism [18]. The prevention of SAE includes operator- and patient-related measures. The operator must ensure occlusion of the introducer needle with the inner stylet or a finger, while the patient should be instructed to avoid breathing deeply and coughing during the biopsy [19]. Some authors suggest an increased probability of SAE when using a coaxial approach and larger needles. In fact, the coaxial method increases the risk of communication of the lung parenchyma with the atmosphere after extraction of the inner stylet, whereas larger needles show an increased risk of pulmonary vein puncture along the needle path [20]. However, fatal SAE may also occur using techniques other than the coaxial method [9,10,12,13] and with smaller needles [15]. The patient's position during the procedure has also been discussed as a potential risk factor for the development of fatal SAE [4]. However, analysis of the literature about SAE cases has shown that a fatal event can occur regardless of the patient's position during the procedure. Additional potential risk factors mentioned in the literature are post-inflammatory changes in the portion of the lung traversed by the needle, including increased vascularity, vasculitis, or friable lung tissue. All of these could affect the physiologic hemostatic mechanisms, resulting in protracted exposure of the blood flow to the airway [5]. To the best of our knowledge, this is the first case of fatal SAE to be discussed within a medicolegal context and with available radiological and autopsy evidence to assist with evaluating operator behavior. In particular, the macroscopic and microscopic analyses of the lung revealed the presence of a distinct and straight pathway through the parenchyma from the skin to the lung mass. This, along with the CT scans obtained during the procedure, supports the adherence of the operator to the international procedural guidelines [17], which recommend choosing the shortest needle path to the lesion to reduce the thickness of the involved parenchyma. In the present case, this was achieved by keeping the patient in the prone position and by using a coaxial method that allowed for multiple needle biopsy sampling with a single intraparenchymal path created by the core biopsy needle.
The above considerations ruled out medical errors regarding indication and/or operative technique and further identified the occurrence of cardiac air emboli as a complication of the intervention. In our opinion, given the clinical and autopsy findings, a significant entry of air through a fistula between the air-containing space (alveolar space) and an adjacent pulmonary venous vessel created during the needle penetration likely occurred. It is likely that the Valsalva maneuver related to the cough of the patient at the end of the procedure facilitated air penetration into the vascular system. Air reached the left heart and coronary arteries through the pulmonary vein, causing cardiac and coronary embolism with resulting myocardial ischemia, decreased myocardial function, and death. Despite the unfavorable outcome, the multidisciplinary review of the procedure indicated that, in the absence of any known risk factors, alternatives such as conservative surveillance through imaging and invasive surgical options both carried significant risks when compared to CT-guided percutaneous needle biopsy. Furthermore, the consent procedure was adequate and correctly carried out, giving the patient all the information needed to make an informed decision regarding his care plan. The medicolegal point of view proposed herein for analyzing fatal SAE cases is important because air embolism following CT-guided percutaneous needle lung biopsy is a complication that is difficult to prevent and could serve as a possible source of litigation. Although several recommendations and precautions have been suggested to reduce the risk of SAE following CT-guided percutaneous needle biopsy of the lung, this complication can occur particularly in cases with long operative exposure and despite careful technical execution. It is also worth acknowledging that, in the context of personalized treatment, this diagnostic procedure is expected to become increasingly common in the future [21]. On this basis, thorough disclosure of the procedure, given preferably by the operator during the consent process and including all procedure-associated risks, even the rarest but potentially fatal ones, is recommended. In fact, to ensure adequate informed consent, providing accurate and in-depth information, including alternative invasive and conservative approaches, is essential to reduce medicolegal litigation issues.
Key points
1. Computed tomography (CT)-guided percutaneous needle biopsy of the lung is a safe diagnostic procedure for suspicious lung lesions.
2. Systemic air embolism (SAE) is a rare and potentially fatal procedure-related complication.
3. An 81-year-old man died after SAE development during a CT-guided needle biopsy of the lung.
4. The thorough medicolegal investigation identified SAE as a procedure-related complication, excluding medical malpractice.
5. Thorough disclosure of the procedure, given preferably by the operator during the consent process, is recommended in order to avoid medicolegal litigation issues.
Funding Open access funding provided by Università degli Studi di Verona within the CRUI-CARE Agreement.
Fig. 1 CT scans in sequence obtained during the procedure (A, B, and C) and sequential slices of the scan performed after the procedure (D, E, and F). A: First image in the sequence showing the lesion (light gray mass) within the right lung lobe. B: Image in the sequence showing the needle being inserted through the skin of the back. C: Image showing the 17-gauge BioPince-type needle within the lesion (indicated by the white arrow). D: Image showing hemorrhage along the posterior aspect of the right lung (black arrows). E: Image showing air within the aorta (white arrow) and right coronary artery (black arrow). F: Image showing air within the left ventricle (black arrows).
Table 1 Fatal cases of air embolism complicating CT-guided percutaneous needle biopsy of the lung reported in the literature (N.D.: not declared; *: similar presentation to the current case but without medicolegal context).
Evaluation of the periodontal regenerative properties of patterned human periodontal ligament stem cell sheets Purpose The aim of this study was to determine the effects of patterned human periodontal ligament stem cell (hPDLSC) sheets fabricated using a thermoresponsive substratum. Methods In this study, we fabricated patterned hPDLSC sheets using nanotopographical cues to modulate the alignment of the cell sheet. Results The hPDLSCs showed rapid monolayer formation on various surface pattern widths. Compared to cell sheets grown on flat surfaces, there were no significant differences in cell attachment and growth on the nanopatterned substratum. However, the patterned hPDLSC sheets showed higher periodontal ligamentogenesis-related gene expression in early stages than the unpatterned cell sheets. Conclusions This experiment confirmed that patterned cell sheets provide flexibility in designing hPDLSC sheets, and that these stem cell sheets may be candidates for application in periodontal regenerative therapy. INTRODUCTION Periodontitis, the sixth most prevalent health condition worldwide, is a common chronic inflammatory disease that results in degradation of the supporting tissues around the teeth, potentially leading to tooth loss if left untreated [1]. Periodontal regeneration, which comprises the regeneration of alveolar bone, periodontal ligament (PDL), and cementum (American Academy of Periodontology, Consensus report, 1996), is considered to be the ultimate goal of periodontal treatment. Despite the success of periodontal surgery with debridement, regeneration induced by surgical treatment is still not fully predictable [2]. Several regenerative treatment modalities, such as root biomodification, bone grafts, guided tissue regeneration, bioactive factors, and combinations thereof, have been proposed, but the majority of these therapies have shown limited success, especially in challenging clinical situations [3,4]. Thus, alternative materials or techniques that can be combined with periodontal therapy to achieve predictable periodontal regeneration are highly desirable. Tissue engineering has emerged as a potential alternative to traditional regenerative approaches, and stem cell therapies have played an important role in promoting periodontal regeneration [5]. After stem cells exhibiting self-renewal and multipotent characteristics were isolated from the PDL (periodontal ligament stem cells; PDLSCs), PDLSCs have been proposed as a viable option for periodontal regeneration [6]. Studies have found PDLSCs to exhibit a superior ability compared to bone marrow-derived mesenchymal stem cells (MSCs) to differentiate into not only PDL, but also into alveolar bone and cementum under regenerative conditions [7]. However, existing tissue engineering approaches usually require scaffolds to deliver cell matrix to the target organ. Scaffolds can potentially induce an inflammatory reaction originating from the biodegraded scaffolds [8]. The possibility of inflammatory reactions resulting from biodegradation can be avoided by using cell transplantation without a scaffold [9]. Cell sheet engineering technology using temperatureresponsive dishes has been proposed as a means of overcoming this problem of traditional tissue engineering therapy, and has played an increasingly significant role in periodontal regeneration [9,10]. Okano et al. 
[11] developed a cell sheet manufacturing method to control cell surface adhesion using poly(N-isopropyl-acrylamide) (PNIPAM), a surface-grafted temperature-responsive polymer. Compared to scaffold-based tissue engineering, a cell sheet can deliver robust cells that are connected through intact cell-cell interactions and extracellular matrix (ECM) proteins [12]. Since PNIPAM transforms in response to temperature changes, harvesting cells using this method does not require matrix-digesting enzymes, which can induce cell damage. Cell sheet engineering allows the harvesting of cultured cells with an intact ECM. Preservation of the ECM proteins is a key requirement for the successful utilization of a cell sheet to construct an appropriate extracellular microenvironment for regeneration [13]. Effectively preserved cellular microenvironments, which have the various biological and mechanical properties of ECM, increase the cell survival rate and reduce cell loss during cell sheet implantation [14]. Additionally, cell sheet engineering provides the opportunity for PDLSCs to be delivered directly to the root surface. Maintenance of the integrity of the fibronectin of the cell sheet results in better adherence of the cell sheets to the denuded root surface [5]. Transplantation of cell sheets using PDLSCs has been successfully used to promote periodontal regeneration both in preclinical animal models and in clinical practice [8]. Studies of periodontal regeneration using cell sheet technology have employed sheets of randomly distributed cells. However, the native PDL has a well-organized fiber structure. To arrange the cells, we focused on giving nanotopographical cues to human periodontal ligament stem cells (hPDLSCs) by culturing the cells upon a nanopatterned substrate. Matrix nanotopographical cues have been explored as a way of controlling various behaviors of stem cells, including their adhesion, proliferation, migration, and differentiation [15,16]. Previous studies showed that the physical interaction between nanoscale materials and cells could be used to manipulate the osteogenic differentiation of MSCs, and the function of PDLSCs has been shown to be modulated by the local nanotopography [17,18]. Therefore, by fabricating an aligned hPDLSC sheet, cell morphology and function can be modulated. Jiao et al. [19] developed a thermoresponsive and nanopatterned substratum for the fabrication of patterned cell sheets using PNIPAM. The self-organization of cells in response to the directional cues of the nanotopography happened quickly (within 2 hours of seeding), enabling the rapid formation of a patterned monolayer. In light of the benefits discussed above, the generation of patterned cell sheets using hPDLSCs may be a next-generation regenerative therapy for periodontitis. The aim of this study was to develop a favorable method for regenerating periodontal tissue using patterned PDLSC sheets. We hypothesized that the fabrication of patterned hPDLSC-based stem cell sheets using a thermoresponsive substratum would enhance the ability of these stem cells to facilitate periodontal regeneration compared to cell sheets based on a flat surface, thereby providing a new strategy for periodontal therapy. Preparation of the nanopatterned substratum The nanopatterned substratum was prepared using capillary force lithography (CFL)-based techniques, as described elsewhere (Figure 1A) [19].
Scaffolds with various nanotopographical cues (parallel ridges and grooves 200-, 400-, 800-, and 1,600-nm wide and 600-nm deep) were fabricated using a photopolymerizable poly(urethane acrylate)poly(glycidyl methacrylate) composite. In epoxy amine chemistry, amine-terminated PNIPAM covalently binds to surface epoxy groups. Scanning electron microscopy (SEM) analysis was performed to confirm the fidelity of the nanopatterns retained on the surfaces. Primary cell culture of hPDLSCs hPDLSCs were used to form cell sheets on the nanopatterned surfaces. To isolate hPDLSCs, premolars were extracted from healthy patients who visited Chonbuk National University Dental Hospital, Jeonju, Korea. The isolation of the hPDLSCs was approved by the Ethics Committee of Chonbuk National University Hospital (Approval No. CUH 2015-06-038-001). The subjects enrolled in this study provided written informed consent. Briefly, periodontal tissue samples were obtained from the root surface and were digested using 2 mg/mL of collagenase (Wako Pure Chemical Industries, Tokyo, Japan) and 1 mg/mL of dispase (Gibco, Grand Island, NY, USA). The cells were seeded in a T-75 cell culture flask (BD Falcon Labware, Franklin Lakes, NJ, USA). The normal growth medium (NGM) consisted of alpha minimum essential medium (Gibco), 15% fetal bovine serum (Gibco), 2 mmol/L of L-glutamine (Gibco), 100 μmol/L of ascorbic-acid-2-phosphate (Sigma-Aldrich, St. Louis, MO, USA), and a mixture of antibiotics consisting of 100 U/mL of penicillin and 100 μg/mL of streptomycin (Gibco). The atmosphere was kept humidified at 37°C with 5% CO 2 , and cells at passages 5 to 8 were used for the following experiments. Flow-cytometric analysis of the hPDLSC immunophenotyped hPDLSC surface markers were stained and analyzed by flow cytometry, as described previously [20]. Single-cell suspensions of hPDLSCs (P5, 5×10 5 cells) were incubated with primary antibodies for human CD19, CD44, CD105, CD146, and stromal cell surface marker-1 (STRO-1) for 1 hour at 4°C. After 2 washes, the cells were incubated with fluorescein isothiocyanate-conjugated antibodies for CD105, CD146, STRO-1, and CD19, and phycoerythrin-conjugated antibody for CD44 for 30 minutes at 4°C in the dark. The cells were subjected to flow-cytometric analysis using a Beckman Coulter CytoFLEX (Beckman Coulter, Fullerton, CA, USA). The hematopoietic marker CD19 was used as a negative control. hPDLSC patterning To form a confluent monolayer cell sheet, hPDLSCs were seeded at a high density (6.5×10 4 cells/cm 2 ) on flat and nanopatterned surfaces with various-width nanopatterns (200, 400, 800, and 1,600 nm), and allowed to attach for 48 hours at 37°C. After cell sheet formation, images were obtained using a microscope (CKX41, Olympus, Tokyo, Japan). To compare the attached cell morphology on the flat surface and an 800-nm-width nanopatterned surface, hPDLSCs (3.2×10 3 cells/cm 2 ) were seeded onto each sample type. After 12 hours of incubation, the cells were imaged, and the orientation of the hPDLSCs was analyzed. The orientation angle of a cell with respect to the direction of the surface groove was recorded using image analysis software (Scion Image, National Institutes of Health, Washington, D.C., USA). A 0° angle was parallel to the groove and a 90° angle was perpendicular to the groove. After the orientation angles were determined, the angles were binned into 9 groups representing 10° increments: 0°-10°, 11°-20°, ..., 71°-80°, and 81°-90°. 
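The 10° binning of orientation angles described above can be reproduced with a few lines of code; the sketch below uses hypothetical angle measurements and approximates the bin boundaries with half-open 10° intervals.

```python
# Illustrative sketch with hypothetical data: count cell orientation angles
# (0 deg = parallel to the groove, 90 deg = perpendicular) in nine 10-degree bins.
import numpy as np

angles = np.array([3.2, 12.5, 8.1, 77.0, 45.3, 2.9, 88.4, 6.5])  # degrees, hypothetical
edges = np.arange(0, 100, 10)                  # 0, 10, ..., 90 -> nine bins
counts, _ = np.histogram(angles, bins=edges)

for lo, hi, n in zip(edges[:-1], edges[1:], counts):
    print(f"{lo:2d}-{hi:2d} deg: {n} cells ({100 * n / len(angles):.1f}%)")
```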
hPDLSC proliferation on the nanopatterned surfaces To compare cell proliferation on the flat and nanopatterned surfaces, hPDLSCs (3.2×10 3 cells/cm 2 ) were seeded on the flat and 800-nm-width nanopatterned surface. The cell proliferation level was quantified using cell counting kit-8 (CCK-8) (Dojindo Molecular Technologies, Inc., Kumamoto, Japan) at 1, 4, 7, and 10 days. The cultured cells were incubated with medium including 10% CCK-8 reagent for 2 hours, and an aliquot from each well (100 μL) was transferred to a 96-well plate. The absorbance was measured at 450 nm using a multimode microplate reader (SpectraMax PLUS384, Molecular Devices, Sunnyvale, CA, USA), and the reading intensity was interpreted. Quantitative real-time polymerase chain reaction (RT-PCR) After the patterned hPDLSC sheets were formed, they were cultured for 4 or 7 additional days to assess the gene expression associated with periodontal differentiation, which included osteogenic, cementogenic, and ligamentogenic differentiation. Cells cultured on a thermoresponsive flat surface served as a control. Total RNA was extracted from samples using Trizol reagent (Invitrogen, Carlsbad, CA, USA), and cDNA was generated from 2 μg of each RNA sample using GoScriptTM Reverse Transcription System (Promega Corporation, Madison, WI, USA). RT-PCR was performed using Power SYBR ® Green PCR Master Mix (Applied Biosystems, Foster City, CA, USA) with selected primers. Primer sequences with gene bank accession numbers are shown in Table 1. Experiments were carried out in triplicate for each sample, and RT-PCR experiment was repeated 3 times. Statistical analysis Results were expressed as mean±standard deviation. Statistical analyses were performed using SPSS/PC+ version 12.0 for Windows (SPSS Inc., Chicago, IL, USA). The Mann-Whitney test, a non-parametric test, was performed since the Shapiro-Wilk normality test found that the data did not follow a normal distribution (P<0.05). Differences with P values <0.05 were considered to be statistically significant. Fabrication of the thermoresponsive nanopatterned substratum A thermoresponsive nanopatterned substratum, functionalized with CFL-based nanofabrication techniques while preserving the pattern, was successfully fabricated on a culture dish surface ( Figure 1A). A representative macroscopic image of a large-area, scalable thermoresponsive substratum ( Figure 1B) and an SEM image confirmed the nanopattern fidelity of the substratum ( Figure 1C). After incubation on the thermoresponsive substratum at 37°C for 48 hours, hPDLSCs were aligned with the direction of the nanopatterns, and formed well-connected confluent monolayer tissue ( Figure 1D). To visualize the thermoresponsive spontaneous detachment, the aligned hPDLSC sheet was incubated in Dulbecco's phosphate-buffered saline at room temperature. Within 20 minutes, the cell sheet began to detach and retract inward from the edges, contracting along the longitudinal axis, and the whole sheet was separated from the substratum ( Figure 1E). Isolation and characterization of hPDLSCs The effect of nanotopography on hPDLSC patterning was assessed. The isolated cells formed single colonies (Figure 2A) and exhibited the ability of engage in multi-differentiation when cultured in osteogenic or adipogenic induction media. The formation of mineralized nodules and lipid droplets in the PDLSCs was shown by alizarin red S ( Figure 2B) and oil red O ( Figure 2C) staining, respectively. 
The results of a flow-cytometry assay revealed a marker pattern typical for MSCs (positive for CD44, CD105, CD146, and STRO-1 and negative for CD19), indicating that the hPDLSCs contained a stem cell-like population ( Figure 2D). Formation of hPDLSC sheets Cells were cultured on surfaces containing patterns of various widths (200, 400, 800, and 1,600 nm), and on flat surface culture dishes, which were used as a control. The hPDLSCs demonstrated varying degrees of confluent monolayer formation ( Figure 3A). The cells attached randomly in a polygonal shape and showed cytoskeleton extensions in multiple directions on the flat surface. In contrast, on the nanopatterned surface, the cells extended and aligned in the direction of the groove, with a spindle-like shape. All 4 experimental groups showed better monolayer formation with increasing pattern width until a width of 800 nm was reached. The 800-nm pattern showed very well-organized cell monolayer formation. The 1,600-nm pattern also showed aligned cells, but had less confluent monolayers with few cell-cell contacts. The effect of the nanotopography on the initial cellular adhesion angle was quantified ( Figure 3B). On the flat surface, the angles were randomly distributed. However, on the nanopatterned surface, the cells aligned almost parallel to the direction of the grooves. Proliferation of hPDLSCs on the nanopatterned substratum The cell proliferation level was evaluated for up to 10 days ( Figure 4A). There were no significant differences between the groups. During the quantification of cell proliferation, the effects of the nanotopography on the morphological behavior of hPDLSCs were monitored ( Figure 4B). The hPDLSCs showed a typical fibroblast-like appearance with a polygonal shape and grew cytoskeleton extensions in multiple directions on the flat surface. However, on the nanopatterned surface, the cells extended and aligned in the direction of groove orientation with a spindle-like shape, and grew parallel to the direction of the grooves during the culture period. To determine the effect of nanotopography on the periodontal tissue regenerative capacity of hPDLSCs, differences in periodontal differentiation-related gene expression between flat and nanopatterned surfaces were analyzed using quantitative RT-PCR at early time points (days 4 and 7) ( Figure 5). hPDLSCs seeded on nanopatterned surfaces exhibited significant upregulation of the mRNA of the periodontal tissue-specific markers periostin (POSTN), α-smooth muscle actin (α-SMA), and alkaline phosphatase (ALP) compared to cells grown on flat surfaces after 7 days of culture in growth medium. Elevated expression of runtrelated transcription factor 2 (RUNX2) mRNA was noted on both days 4 and 7. There were no significant expression changes in mRNA of cementum protein 1 (CEMP1), osteocalcin (OCN), or scleraxis (SCLX) (P<0.05). DISCUSSION In the present study, we engineered a reproducible hPDLSC sheet using patterned cell sheet engineering technology with a thermoresponsive substratum to improve periodontal tissue regeneration. The ultimate goal of periodontal therapy is to achieve regeneration of the alveolar bone, cementum, and PDL [21]. However, traditional periodontal therapies remain insufficient to achieve complete periodontal regeneration. They arrest the disease process and form a weak attachment, resulting in a condition termed long weak junctional epithelium [22]. Based on recent progress in tissue engineering, ex vivo expanded PDLSCs have been used [23]. 
In this study, we isolated hPDLSCs expressing stem cell surface markers from the root surface for use as cell sources for cell sheet engineering. The expression of CD44 (HCAM), CD105 (endoglin), CD146 (a perivascular cell marker), and STRO-1 (a stem cell marker) indicated that the hPDLSCs used in this study retained their stem cell properties [24]. In particular, cells expressing CD146 are known to be endogenous progenitors that can maintain the homeostasis of the connective attachment underlying the new junctional epithelium [25]. Furthermore, the finding of strong colony formation ability with adipogenic and osteogenic potential demonstrated that hPDLSCs are likely to be a promising candidate for stem cell-mediated periodontal regeneration [26]. In stem cell transplantation for therapeutic purposes, it is important to develop a delivery method that promotes the efficiency of the cells such as a low survival rate, poor engraftment, and unpredictable differentiation under in vivo conditions [28]. To avoid these shortcomings, cell sheet technology was designed to provide proper ECM conditions while maintaining differentiated stem cells in the target lineage-specific tissues [29,30]. This technology has been successfully applied to promote periodontal regeneration by delivering cultured cells with an intact ECM [8]. Randomly oriented cell sheets have previously been used. However, in this study, we evaluated the possibility of using a patterned hPDLSC sheet for periodontal regeneration. Nanotopographical cues, consisting of parallel ridges and grooves 200-, 400-, 800-, and 1,600-nm wide and 600-nm deep, were fabricated to modulate the alignment of the hPDLSC sheet. PDL tissue has a highly organized ECM with collagenous fibers. The effect of the nanotopography on patterning PDLSCs enhances the subsequent matrix synthesis and promotes differentiation, which can be a powerful approach as a periodontal therapeutic [31]. A previous study by Jiang et al. [32] demonstrated that by mimicking the natural state of the PDL structure, the aligned PDLSCs improved the expression of collagen type I, which contributes to maturation and mechanical stability, and collagen type III, which presents during the early phase of wound healing, as well as enhancing the expression of POSTN, a matricellular protein that plays a role in collagen cross-linking. Consequently, patterning PDLSCs can upregulate the stability and maturation of the PDL during the wound healing process. To fabricate a cell sheet with biomimetic geometry, nanopatterns of various widths were compared to each other. Each nanopattern displayed a different cell morphology and alignment ( Figure 3A). This result indicates that hPDLSCs are sensitive to nanoscale topographic cues, and the nanotopography affected cell adhesion and alignment [19]. Additionally, these results demonstrate that the patterning of a cell sheet can be controlled by modifying the nanoscale topography of the substratum ( Figure 3B). To determine the proliferative effect of the nanotopographical changes of the substratum, hPDLSCs were cultured on a nanopatterned surface with grooves that were 800-nm wide and 600-nm deep, and the proliferation level was monitored on days 1, 4, 7, and 10 ( Figure 4A). hPDLSCs are known to expand and retract their cytoplasm depending on what they sense in the surrounding environment [33]. 
When grown on a flat surface, the cells had multiple widely spread cytoskeletal extensions, but when grown on the nanopatterned surface, the hPDLSCs were aligned along the nanopatterns ( Figure 4B). It has previously been shown that the nanopattern on the substrate did not interfere with or limit the growth of hPDLSCs, and consequently no variability in cell proliferation by degree of alignment was observed [19]. The substrate topology, which affects the orientation of PDL cells, has been reported to play an important role in maintaining a balanced metabolism between hard and soft tissue turnover during PDL cell differentiation [34]. These findings indicate that our results, with respect to the periodontal regenerative effects of the patterned cell sheets, were not affected by variation in overall cell growth. In the present study, we also investigated the expression of genes associated with periodontal differentiation to determine the potential effects of hPDLSC patterning while forming a cell sheet ( Figure 5). The expression of genes coding for ligamentogenic (POSTN, α-SMA, and SCLX) and osteogenic and cementogenic proteins (RUNX2, ALP, OCN, and CEMP1) was evaluated to quantify the characteristics of the resultant cell sheets. Patterned and unpatterned hPDLSC sheets exhibited different mRNA expression profiles after the formation of cell sheets. The expression of ligamentogenic genes tended to increase after patterned cell sheet formation. Nanotopography influenced the differentiation behaviors of the hPDLSCs, and resulted in advantageous periodontal gene expression relative to the control group. Our present findings revealed upregulation of periodontal ligamentogenesis genes (POSTN and α-SMA at day 7) in hPDLSCs when cultured on the nanopatterned surface. POSTN, which is an essential factor for tissue integrity and maturation, modulates PDL homeostasis by promoting the formation of highly stiff collagen [14]. POSTN is intensively localized on the Sharpey's fibers, which are essential for dispersing the mechanical forces loaded on the PDL [18]. It has been demonstrated that α-SMA-expressing cells from the perivascular regions of the PDL can differentiate into osteoblasts, cementoblasts, and fibroblasts [35]. However, α-SMA is less highly expressed in mineralized areas, and is not expressed in fully differentiated cells of the osteoblast lineage [36]. These results revealed that nanotopography influenced the early gene expression of the hPDLSC sheet, especially with respect to PDL formation. The expression of genes for osteogenic proteins (ALP and RUNX2) was upregulated in the patterned hPDLSC sheet. Both genes are known to be expressed during osteogenic and cementogenic differentiation [37]. ALP, an early marker for osteogenic differentiation, plays a role in bone matrix mineralization through regulation of the Wnt signaling pathway [38]. Upregulation of ALP means that the fabricated patterned cell sheet would promote the osteogenic and cementogenic induction potential of hPDLSCs. RUNX2 is essential for initiating osteogenesis and cementogenesis [39]. Upregulation of RUNX2 indicates that cementogenic and osteogenic induction was promoted in the patterned hPDLSC sheets. The expression of other genes related to hard tissue maturation (OCN and CEMP1) and tendon attachment (SCLX) were not upregulated. mRNA of OCN, a late indicator of osteoblast differentiation, and CEMP1, an indicator of cementoblast differentiation, were not upregulated until day 7. 
As a tendon-specific transcription factor, SCLX is required for the formation and maturation of tendons. SCLX is modulated by mechanical forces in tendons in vivo and in vitro [40]. Consequently, the patterned hPDLSC sheet did not upregulate genes associated with periodontal tissue maturation in the absence of an osteogenic supplement. Most periodontal regeneration studies have focused on osteogenic induction, whereas this study focused on the advantages of patterned hPDLSC sheets for both periodontal osteogenic and ligamentogenic induction, which contribute to the regeneration of periodontal tissue [38]. The results of this study may indicate that nanotopography-induced patterning of the hPDLSC sheet has significant implications for improving the early periodontal tissue formation potential. Additionally, the microenvironment provided by the patterned hPDLSC sheet could provide optimal conditions for periodontal tissue regeneration. More complicated conditions may be needed to study periodontal tissue maturation. For instance, the retention of the nanotopography-induced maturation of hPDLSCs after sheet transfer will still need to be investigated. However, a previous study showed that patterned morphological phenotypes were retained even after cell sheet transfer, which implies that the phenotypical changes induced by nanotopography might be retained after becoming scaffold-free [19]. In conclusion, a nanopatterned substratum functionalized with thermoresponsive polymers facilitated the reproducible and robust fabrication of patterned cell sheets using hPDLSCs. The hPDLSCs showed rapid and spontaneous monolayer formation, as well as improved gene expression patterns related to PDL tissue regeneration compared to the control group. Based on these results, methods of forming cell sheets using a thermoresponsive and nanopatterned substratum could be utilized for the facile and reproducible fabrication of controllable structures. Patterning PDLSCs while fabricating cell sheets would be advantageous for improving periodontal regenerative function.
Determination of Forsythin in Yinqiao Jiedu Pills by Capillary Electrophoresis This paper describes the determination of forsythin content in Yinqiao Jiedu Pills by a high-performance capillary electrophoresis (HPCE) method. A borax solution was chosen as the buffer, at a concentration of 10 mmol/L, with a constant voltage of 20 kV and an injection time of 10 s. The content of forsythin in Yinqiao Jiedu Pills was 4.615 mg/g (RSD = 10.1%) (n = 6). This method is suitable for determining the content of forsythin in Yinqiao Jiedu Pills. Introduction Yinqiao Jiedu Pills consist of nine Chinese medicinal ingredients: honeysuckle flower, weeping forsythia capsule, peppermint, fine-leaf schizonepeta herb, fermented soybean, great burdock achene, platycodon root, common lophatherum herb, and liquorice root. It has the effects of relieving headache, refreshing the mind, clearing heat, and detoxifying [1]. Zhu [2] established an HPLC method for determining the content of forsythia glycosides in Yinqiao Jiedu Tablets. The HPLC separation was performed on a C18 column (4.6 mm×250 mm, 5 μm) with acetonitrile-water (25:75) as the mobile phase. The UV detection wavelength was 277 nm. Zhong et al. [3] established an HPLC method for the determination of chlorogenic acid to develop the quality standard of Yinqiao Jiedu Oral Liquid. The HPLC separation was performed on an E. Merck Lichrospher 100RP-10 column (4 mm×250 mm, 5 μm). The mobile phase was composed of acetonitrile-0.4% phosphoric acid (10:100). The flow rate was 1.0 ml/min. The UV detection wavelength was 327 nm. Song et al. [4] established an HPLC method for the determination of arctiin and forsythin in Yinqiao Jiedu Granules. A C18 analytical column (4.6 mm×250 mm, 5 μm) was used. The mobile phase was composed of acetonitrile and water (27:73) at a flow rate of 1.0 ml/min. The detection wavelength was 280 nm and the column temperature was 30°C. Quan et al. [5] established an HPLC method for the determination of chlorogenic acid and arctiin in Yinqiao Jiedu Pills. The separation was performed on a Thermo ODS-2 Hypersil column (4.6 mm×250 mm, 5 μm). The mobile phase was composed of acetonitrile-0.1% phosphoric acid with gradient elution. The flow rate was 1.0 ml/min. The UV detection wavelength was 280 nm and the column temperature was 30°C. Zhang et al. [6] established a high-performance liquid chromatographic method for the determination of forsythoside A in Yinqiao Jiedu Tablets from different drug manufacturers. The HPLC method was performed on a Kromasil 100A C18 column (4.6 mm×150 mm, 5 μm), with acetonitrile-0.4% acetic acid (13:87) as the mobile phase at a flow rate of 1.0 mL/min. The detection wavelength was 330 nm and the column temperature was 30°C. The injection volume was 10 μL. Chen et al. [7] established an RP-HPLC method for the determination of arctiin content in Yinqiao Jiedu Pills so as to provide a rapid and effective analytical approach for quality control. The chromatographic column was a Shimadzu C18 (4.6 mm×250 mm). The mobile phase was methanol-water (47:53). The flow rate was 1 mL/min. The detection wavelength was 280 nm. The external standard method was used to investigate the chromatographic peak of the drug under these chromatographic conditions. The specificity, linearity, accuracy, and other validation parameters were also assessed. Eight-wavelength HPLC fingerprints of Yinqiao Jiedu Pills were established by Sun et al. [8] to comprehensively evaluate the quality of Yinqiao Jiedu Pills.
The information entropy-weight method was applied to integrate the qualitative and quantitative information of the eight-wavelength fingerprints, and the quality of 15 batches of Yinqiao Jiedu Pills was assessed by the 6-level systematically quantified fingerprint method. In the present paper, the forsythin content of Yinqiao Jiedu Pills was determined by high-performance capillary electrophoresis. Forsythin (Chinese Drugs and Biological Products) and Yinqiao Jiedu Pills (Shanxi Huakang Pharmaceutical Co., Ltd.) were used; all other reagents were of analytical grade, and double-distilled water was used throughout. Experimental Methods Before the experiments, the capillary was washed successively with 1 mol·L-1 hydrochloric acid, double-distilled water, 1 mol·L-1 sodium hydroxide, double-distilled water, and buffer solution, each for 8 min. After every three runs, the capillary was cleaned again by the same procedure. Measurements were carried out at a voltage of 20 kV and an experimental temperature of 20°C; the UV detection wavelength was 277 nm and the injection time was 10 s (7.5 cm height difference). Sample Preparation Yinqiao Jiedu Pills sample solution: 1.8914 g of Yinqiao Jiedu Pills powder was accurately weighed, 40 mL of 30% ethanol in water was added, and the mixture was extracted for 3 h at 60°C, filtered, and washed, and the volume was made up to 50 mL to give the sample solution. Selection of electrophoresis conditions Based on previous experimental experience, a 10 mmol/L borax solution was chosen as the running buffer. According to the literature, the maximum absorption wavelength of forsythin is 277 nm, so 277 nm was chosen as the detection wavelength. Quantitative analysis 3.2.1. Standard curve. Forsythin standard solutions with concentrations of 0.45, 0.225, 0.112, 0.056, 0.028, and 0.014 mg/mL were prepared. Each standard solution was run three times under the above electrophoresis conditions and the results were averaged. The electropherogram of the forsythin standard solution is shown in Figure 1. Taking concentration as the abscissa and peak area as the ordinate, the standard curve was drawn. The linear regression equation (peak area y in μV·s, concentration x in mg/mL) and the linear range were as follows: y = -927.3 + 36773.4x (r = 0.987), 0.014-0.45 mg/mL. Precision test. A forsythin standard solution was precisely drawn and injected six consecutive times under the electrophoretic separation conditions; the RSD of the forsythin peak area was 1.08%, indicating good precision.
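To make the quantitation step concrete, the sketch below reproduces the calibration and back-calculation arithmetic described above in Python. The standard concentrations, the 50 mL extract volume, and the 1.8914 g sample mass are taken from the text; the peak areas assigned to the standards and the measured sample peak area are hypothetical placeholders, so the printed content value is illustrative only.

```python
import numpy as np

# Calibration data: forsythin standard concentrations (mg/mL) from the paper,
# with placeholder peak areas generated from the reported regression line
# y = -927.3 + 36773.4x (replace with measured averages in practice).
conc = np.array([0.014, 0.028, 0.056, 0.112, 0.225, 0.45])
peak_area = -927.3 + 36773.4 * conc

# Least-squares fit of peak area against concentration (standard curve).
slope, intercept = np.polyfit(conc, peak_area, 1)
r = np.corrcoef(conc, peak_area)[0, 1]
print(f"y = {intercept:.1f} + {slope:.1f} x, r = {r:.3f}")

# Back-calculate the forsythin content of the sample.
sample_area = 5500.0                              # hypothetical measured peak area
sample_conc = (sample_area - intercept) / slope   # mg/mL in the 50 mL extract
content_mg_per_g = sample_conc * 50.0 / 1.8914    # 50 mL volume, 1.8914 g of powder
print(f"forsythin content ~ {content_mg_per_g:.3f} mg/g")
```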
2019-04-10T13:12:42.592Z
2018-12-12T00:00:00.000
{ "year": 2018, "sha1": "cc10df353c01f7e49538a52756c6ac4ff7f5627a", "oa_license": null, "oa_url": "https://doi.org/10.1088/1757-899x/452/2/022122", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "eb793764e0e2a4df30b4f0518fca5ef76d29f1cd", "s2fieldsofstudy": [ "Medicine", "Chemistry" ], "extfieldsofstudy": [ "Physics", "Chemistry" ] }
174820772
pes2o/s2orc
v3-fos-license
Collecting and Analyzing Multidimensional Data with Local Differential Privacy Local differential privacy (LDP) is a recently proposed privacy standard for collecting and analyzing data, which has been used, e.g., in the Chrome browser, iOS and macOS. In LDP, each user perturbs her information locally, and only sends the randomized version to an aggregator who performs analyses, which protects both the users and the aggregator against private information leaks. Although LDP has attracted much research attention in recent years, the majority of existing work focuses on applying LDP to complex data and/or analysis tasks. In this paper, we point out that the fundamental problem of collecting multidimensional data under LDP has not been addressed sufficiently, and there remains much room for improvement even for basic tasks such as computing the mean value over a single numeric attribute under LDP. Motivated by this, we first propose novel LDP mechanisms for collecting a numeric attribute, whose accuracy is at least no worse (and usually better) than existing solutions in terms of worst-case noise variance. Then, we extend these mechanisms to multidimensional data that can contain both numeric and categorical attributes, where our mechanisms always outperform existing solutions regarding worst-case noise variance. As a case study, we apply our solutions to build an LDP-compliant stochastic gradient descent algorithm (SGD), which powers many important machine learning tasks. Experiments using real datasets confirm the effectiveness of our methods, and their advantages over existing solutions. Abstract-Local differential privacy (LDP) is a recently proposed privacy standard for collecting and analyzing data, which has been used, e.g., in the Chrome browser, iOS and macOS. In LDP, each user perturbs her information locally, and only sends the randomized version to an aggregator who performs analyses, which protects both the users and the aggregator against private information leaks. Although LDP has attracted much research attention in recent years, the majority of existing work focuses on applying LDP to complex data and/or analysis tasks. In this paper, we point out that the fundamental problem of collecting multidimensional data under LDP has not been addressed sufficiently, and there remains much room for improvement even for basic tasks such as computing the mean value over a single numeric attribute under LDP. Motivated by this, we first propose novel LDP mechanisms for collecting a numeric attribute, whose accuracy is at least no worse (and usually better) than existing solutions in terms of worst-case noise variance. Then, we extend these mechanisms to multidimensional data that can contain both numeric and categorical attributes, where our mechanisms always outperform existing solutions regarding worst-case noise variance. As a case study, we apply our solutions to build an LDP-compliant stochastic gradient descent algorithm (SGD), which powers many important machine learning tasks. Experiments using real datasets confirm the effectiveness of our methods, and their advantages over existing solutions. Index Terms-Local differential privacy, multidimensional data, stochastic gradient descent. I. INTRODUCTION Local differential privacy (LDP), which has been used in well-known systems such as Google Chrome [18], Apple iOS and macOS [36], and Microsoft Windows Insiders [12], is a rigorous privacy protection scheme for collecting and analyzing sensitive data from individual users. 
Specifically, in LDP, each user perturbs her data record locally to satisfy differential privacy [16], and sends only the randomized, differentially private version of the record to an aggregator. The latter then performs computations on the collected noisy data to estimate statistical analysis results on the original data. For instance, in [18], Google as an aggregator collects perturbed usage information from users of the Chrome browser, and estimates, e.g., the proportion of users running a particular operating system. Compared with traditional privacy standards such as differential privacy in the centralized setting [16], which typically assume a trusted data curator who possesses a set of sensitive records, LDP provides a stronger privacy assurance to users, as the true values of private records never leave their local devices. Meanwhile, LDP also protects the aggregator against potential leakage of users' private information (which happened to AOL 1 and Netflix 2 with serious consequences), since the aggregator never collects exact private information in the first place. In addition, LDP satisfies the strong and rigorous privacy guarantees of differential privacy; i.e., the adversary (which includes the aggregator in LDP) cannot infer sensitive information of an individual with high confidence, regardless of the adversary's background knowledge. Although LDP has attracted much attention in recent years, the majority of existing solutions focus on applying LDP to complex data types and/or data analysis tasks, as reviewed in Section VII. Notably, the fundamental problem of collecting numeric data has not been addressed sufficiently. As we explain in Section III-A, in order to release a numeric value in the range [−1, 1] under LDP, currently the user has only two options: (i) the classic Laplace mechanism [16], which injects unbounded noise to the exact data value, and (ii) a recent proposal by Duchi et al. [14], which releases a perturbed value that always falls outside the original data domain, i.e., [−1, 1]. Further, it is non-trivial to extend these methods to handle multidimensional data. As elaborated in Section IV, a straightforward extension of a single-attribute mechanism, using the composition property of differential privacy, leads to suboptimal result accuracy. Meanwhile, the multidimensional version of [14], though asymptotically optimal in terms of worst-case error, is complicated and involves a large constant. Finally, to our knowledge, there is no existing solution that [14]. The terms MaxVar PM , MaxVar HM , and MaxVar Du denote the worst-case noise variance of these three methods, respectively, for perturbing a d-dimensional numeric tuple under -local differential privacy (elaborated in Section II). In addition, # = ln 7+4 can perturb multidimensional data containing both numeric and categorical data with optimal worst-case error. This paper addresses the above challenges and makes several major contributions. First, we propose two novel mechanisms, namely Piecewise Mechanism (PM) and Hybrid Mechanism (HM), for collecting a single numeric attribute under LDP, which obtain higher result accuracy compared to existing methods. In particular, HM is built upon PM, and has a worse-case noise variance that is at least no worse (and usually better) than existing solutions. 
Then, we extend both PM and HM to multidimensional data with both numeric and categorical attributes with an elegant technique that achieves asymptotic optimal error, while remaining conceptually simple and easy to implement. Further, our fine-grained analysis reveals that although both [14] and the proposed methods obtain asymptotically optimal error bound on multidimensional numeric data, the former involves a larger constant than our solutions. Table I summarizes the main theoretical results in this paper, which are confirmed in our experiments. As a case study, using the proposed mechanisms as building blocks, we present an LDP-compliant algorithm for stochastic gradient descent (SGD), which can be applied to train a broad class of machine learning models based on empirical risk minimization, e.g., linear regression, logistic regression and SVM classification. Specifically, SGD iteratively updates the model based on gradients of the objective function, which are collected from individuals under LDP. Experiments using several real datasets confirm the high utility of the proposed methods for various types of data analysis tasks. In the following, Section II provides the necessary background on LDP. Sections III presents the proposed fundamental mechanisms for collecting a single numeric attribute under LDP, while Section IV describes our solution for collecting and analyzing multidimensional data with both numeric and categorical attributes. Section V applies our solution to common data analytics tasks based on SGD, including linear regression, logistic regression, and support vector machines (SVM) classification. Section VI contains an extensive set of experiments. Section VII reviews related work. Finally, Section VIII concludes the paper. II. PRELIMINARIES In the problem setting, an aggregator collects data from a set of users, and computes statistical models based on the collected data. The goal is to maximize the accuracy of these statistical models, while preserving the privacy of the users. Following the local differential privacy model [5], [14], [18], we assume that the aggregator already knows the identities of the users, but not their private data. Formally, let n be the total number of users, and u i (1 ≤ i ≤ n) denote the i-th user. Each user u i 's private data is represented by a tuple t i , which contains d attributes A 1 , A 2 , . . . , A d . These attributes can be either numeric or categorical. Without loss of generality, we assume that each numeric attribute has a domain [−1, 1], and each categorical attribute with k distinct values has a discrete domain {1, 2, . . . , k}. To protect privacy, each user u i first perturbs her tuple t i using a randomized perturbation function f . Then, she sends the perturbed data f (t i ) to the aggregator instead of her true data record t i . Given a privacy parameter > 0 that controls the privacy-utility tradeoff, we require that f satisfies -local differential privacy ( -LDP) [18], defined as follows: Definition 1 ( -local differential privacy). A randomized function f satisfies -local differential privacy if and only if for any two input tuples t and t in the domain of f , and for any output t * of f , we have: The notation Pr[·] means probability. If f 's output is continuous, Pr[·] in (1) is replaced by the probability density function. Basically, local differential privacy is a special case of differential privacy [17] where the random perturbation is performed by the users, not by the aggregator. 
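The inequality (1) that Definition 1 refers to appears to have been lost in extraction; its standard form, consistent with the surrounding discussion, is

\Pr\big[f(t) = t^{*}\big] \;\le\; e^{\epsilon} \cdot \Pr\big[f(t') = t^{*}\big]

for any two inputs t, t' in the domain of f and any output t^{*}.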
According to the above definition, the aggregator, who receives the perturbed tuple t * , cannot distinguish whether the true tuple is t or another tuple t with high confidence (controlled by parameter ), regardless of the background information of the aggregator. This provides plausible deniability to the user [9]. We aim to support two types of analytics tasks under -LDP: (i) mean value and frequency estimation and (ii) machine learning models based on empirical risk minimization. In the former, for each numerical attribute A j , we aim to estimate the mean value of A j over all n users, 1 For each categorical attribute A j , we aim to estimate the frequency of each possible value of A j . Note that value frequencies in a categorical attribute A j can be transformed to mean values once we expand A j into k binary attributes using one-hot encoding. Regarding empirical risk minimization, we focus on three common analysis tasks: linear regression, logistic regression, and support vector machines (SVM) [11]. Unless otherwise specified, all expectations in this paper are taken over the random choices made by the algorithms considered. We use E[·] and Var[·] to denote a random variable's expected value and variance, respectively. This full paper appears in the Proceedings of the IEEE 35th Annual International Conference on Data Engineering (ICDE), held in April 2019. III. COLLECTING A SINGLE NUMERIC ATTRIBUTE This section focuses on the problem of estimating the mean value of a numeric attribute by collecting data from individuals under -LDP. Section III-A reviews two existing methods, Laplace Mechanism [16] and Duchi et al.'s solution [14], and discusses their deficiencies. Then, Section III-B describes a novel solution, called Piecewise Mechanism (PM), that addresses these deficiencies and usually leads to higher (or at least comparable) accuracy than existing solutions. Section III-C presents our main proposal, called Hybrid Mechanism (HM), whose worst-case result accuracy is no worse than PM and existing methods, and is often better than all of them. A. Existing Solutions Laplace mechanism and its variants. A classic mechanism for enforcing differential privacy is the Laplace Mechanism [16], which can be applied to the LDP setting as follows. For simplicity, assume that each user u i 's data record t i contains a single numeric attribute whose value lies in range [−1, 1]. In the following, we abuse the notation by using t i to denote this attribute value. Then, we define a randomized function that outputs a perturbed record t * i = t i + Lap 2 , where Lap(λ) denotes a random variable that follows a Laplace distribution of scale λ, with the following probability density function: Clearly, this estimate t * i is unbiased, since the injected Laplace noise Lap 2 in each t * i has zero mean. In addition, the variance in t * i is 8 2 . Once the aggregator receives all perturbed tuples, it simply computes their average 1 n n i=1 t * i as an estimate of the mean with error scale O 1 √ n . Soria-Comas and Domingo-Ferrer [35] propose a more sophisticated variant of Laplace mechanism, hereafter referred to as SCDF, that obtains improved result accuracy for multi-dimensional data. Later, Geng et al. [21] propose Staircase mechanism, which achieves optimal performance for unbounded input values (e.g., from a domain of (−∞, +∞)). 
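Before SCDF and Staircase are discussed in more detail below, a minimal sketch of the classic Laplace mechanism reviewed above may help. The noise scale 2/ε for inputs in [−1, 1] and the averaging step follow the text; the simulated data and sample size are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)

def laplace_mechanism(t, epsilon):
    """Perturb a single value t in [-1, 1] under epsilon-LDP.

    The identity query on [-1, 1] has sensitivity 2, so Laplace noise
    of scale 2/epsilon suffices, as reviewed above."""
    return t + rng.laplace(loc=0.0, scale=2.0 / epsilon)

# Each user reports a perturbed value; the aggregator averages the reports.
epsilon = 1.0
true_values = rng.uniform(-1, 1, size=10_000)
reports = np.array([laplace_mechanism(t, epsilon) for t in true_values])
print("true mean      :", true_values.mean())
print("estimated mean :", reports.mean())   # unbiased, error scale O(1/sqrt(n))
```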
Specifically, for a single attribute value t i , both methods inject random noise n i drawn from the following piece-wise constant probability distribution: (2) and a(m) = 4 ; in Staircase mechanism, m = 2 1+e /2 and a(m) = 1−e − 2m+4e − −2me − . Note that the optimality result in [21] does not apply to the case with bounded inputs. We experimentally compare the proposed solutions with both SCDF and Staircase in Section VI. Duchi et al.'s solution. Duchi et al. [14] propose a method to perturb multidimensional numeric tuples under LDP. Algorithm 1 illustrates Duchi et al.'s solution [14] for the onedimensional case. (We discuss the multidimensional case in or − e +1 e −1 , with the following probabilities: Duchi et al. prove that t * i is an unbiased estimator of the input value t i . In addition, the variance of t * i is: Therefore, the worst-case variance of t * i equals e +1 e −1 2 , and it occurs when t i = 0. Upon receiving the perturbed tuples output by Algorithm 1, the aggregator simply computes the average value of the attribute over all users to obtain an estimated mean. Deficiencies of existing solutions. e −1 , even when the input tuple t i = 0. As such, the noisy value t * i output by Duchi et al.'s solution always has an absolute value e +1 e −1 > 1, due to which t * i 's variance is always larger than 1 when t i = 0, regardless of how large the privacy budget is. In contrast, the Laplace mechanism incurs a noise variance of 8/ 2 , which decreases quadratically with the increase of , due to which it is preferable when is large. However, when is small, the relatively "fat" tail of the Laplace distribution leads to a large noise variance, whereas Duchi et al.'s solution does not suffer from this issue since it confines t * i within a relatively small range − e +1 e −1 , e +1 e −1 . This full paper appears in the Proceedings of the IEEE 35th Annual International Conference on Data Engineering (ICDE), held in April 2019. , and should allow t * i to be close to t i with reasonably large probability (as the Laplace mechanism does). In what follows, we will present a new perturbation method based on this intuition. B. Piecewise Mechanism Our first proposal, referred to as the Piecewise Mechanism (PM), takes as input a value t i ∈ [−1, 1], and outputs a The probability density function (pdf) of t * i is a piecewise constant function as follows: Fig. 2 illustrates pdf(t * i ) for the cases of t i = 0, t i = 0.5, and t i = 1. Observe that when t i = 0, pdf(t * i ) is symmetric and consists of three "pieces", among which the center piece (i.e., t * i ∈ [ (t i ), r(t i )]) has a higher probability than the other two. When t i increases from 0 to 1, the length of the center piece remains unchanged (since r(t i )− (t i ) = C −1), but the length Algorithm 2: Piecewise Mechanism for One-Dimensional Numeric Data. input : tuple t i ∈ [−1, 1] and privacy parameter . ) decreases, and is reduced to 0 when t i = 1. The case when t i < 0 can be illustrated in a similar manner. Algorithm 2 shows the pseudo-code of PM, assuming the input domain is i to the server, where t * i denotes the noisy value output by Algorithm 2. It can be verified that r · t * i is an unbiased estimator of t i . The above method requires that the user knows r, which is a common assumption in the literature, e.g., in Duchi et al.'s work [14]. The following lemmas establish the theoretical guarantees of Algorithm 2. Lemma 1. Algorithm 2 satisfies -local differential privacy. 
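For concreteness, the three-piece sampling step of PM described above can be sketched as follows. This is a minimal sketch: the constant C and the piece boundaries l(t), r(t) are reproduced from the published Piecewise Mechanism as we recall them and should be verified against the paper before use. The accuracy part of Lemma 1 continues after the sketch.

```python
import math
import random

def piecewise_mechanism(t, epsilon):
    """One-dimensional Piecewise Mechanism (sketch).

    Input t in [-1, 1]; the output lies in [-C, C] and is an unbiased
    estimate of t.  Constants follow our reading of the published PM;
    treat them as assumptions to verify."""
    e_half = math.exp(epsilon / 2)
    C = (e_half + 1) / (e_half - 1)
    l = (C + 1) / 2 * t - (C - 1) / 2      # left end of the centre piece
    r = l + C - 1                          # right end of the centre piece
    if random.random() < e_half / (e_half + 1):
        # centre piece: higher density, close to the true value
        return random.uniform(l, r)
    # tail pieces [-C, l) and (r, C]: pick one proportionally to its length
    left_len, right_len = l + C, C - r
    if random.random() < left_len / (left_len + right_len):
        return random.uniform(-C, l)
    return random.uniform(r, C)
```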
In addition, given an input value t i , it returns a noisy value The proof appears in the full version [2]. By Lemma 1, PM returns a noisy value t * i whose variance is at most The purple line in Fig. 1 illustrates this worst-case variance of PM as a function of . Observe that PM's worst-case variance is considerably smaller than that of Duchi et al.'s solution when ≥ 1.29, and is only slightly larger than the latter when < 1.29, where 1.29 is x-coordinate of the point that the Duchi et al.' solution curve intersects that of PM in Fig. 1. Furthermore, it can be proved that PM's worst-case variance is strictly smaller than Laplace mechanism's, regardless of the value of . This makes PM be a more preferable choice than both the Laplace mechanism and Duchi et al.'s solution. Furthermore, Lemma 1 also shows that the variance of t * i in PM monotonically decreases with the decrease of |t i |, which makes PM particularly effective when the distribution of the input data is skewed towards small-magnitude values. (In Section VI, we show that |t i | tends to be small in a large This full paper appears in the Proceedings of the IEEE 35th Annual International Conference on Data Engineering (ICDE), held in April 2019. We omit the proof of Lemma 2 as it is a special case of Lemma 5 to be presented in Section IV-B. Remark. PM bears some similarities to SCDF [35] and Staircase mechanism [21] described in Section III-A, in the sense that the added noise in PM also follows a piecewise constant distribution, as in SCDF and Staircase. On the other hand, there are two crucial differences between PM and SCDF/Staircase. First, SCDF and Staircase mechanism assume an unbounded input, and produce an unbounded output (i.e., with range (−∞, +∞)) accordingly. In contrast, PM has both bounded input (with domain [−1, 1]) and output (with range [−C, C]). Second, the noise distribution of SCDF/Staircase consists of an infinite number of "pieces" that are data independent, whereas the output distribution of the piecewise mechanism consists of up to three "pieces" whose lengths and positions depend on the input data. C. Hybrid Mechanism As discussed in Section III-B, the worst-case result accuracy of PM dominates that of the Laplace mechanism, and yet it can still be (slightly) worse than Duchi et al's solution, since the noise variance incurred by the former (resp. latter) decreases (resp. increases) with the decrease of |t i |. Can we construct a method that that preserves the advantages of PM and is at the same time always no worse than Duchi et al's solution? The answer turns out to be positive: that we can combine PM and Duchi et al's solution into a new Hybrid Mechanism (HM). Further, the combination used in HM is non-trivial; as a result, the noise variance of HM is often smaller than both PM and Duchi et al's solution, as shown in Fig. 1 on Page 4. In particular, given an input value t i , HM flips a coin whose head probability equals a constant α; if the coin shows a head (resp. tail), then we invoke PM (resp. Duchi et al.'s solution) to perturb t i . Given t i and , the noise variance incurred by HM is denote the noise variance incurred by PM and Duchi et al.'s solution, respectively, when given t i and as input. We have the following lemma. Lemma 3. Let * be defined as: The proof appears in the full version [2]. By Lemma 3, when α satisfies Equation 7, the worst-case noise variance of HM is: The red line in Fig. 
1 on Page 4 shows the worst-case noise variance incurred by HM, which is consistently no higher than This full paper appears in the Proceedings of the IEEE 35th Annual International Conference on Data Engineering (ICDE), held in April 2019. IV. COLLECTING MULTIPLE ATTRIBUTES We now consider the case where each user's data record contains d > 1 attributes. In this case, a straightforward solution is to collect each attribute separately using a singleattribute perturbation algorithm, such that every attribute is given a privacy budget /d. Then, by the composition theorem [17], the collection of all attributes satisfies -LDP. This solution, however, offers inferior data utility. For example, suppose that all d attributes are numeric, and we process each attribute using PM, setting the privacy budget to /d. Then, by Lemma 2, the amount of noise in the estimated mean of each , which is super-linear to d, and hence, can be excessive when d is large. To address the problem, the first and only existing solution that we are aware of is by Duchi et al. [14] for the case of multiple numeric attributes, presented in Section IV-A. A. Existing Solution for Multiple Numeric Attributes Algorithm 3 shows the pseudo-code of Duchi et al.'s solution for multidimensional numeric data. It takes as input a tuple t i ∈ [−1, 1] d of user u i and a privacy parameter , and outputs a perturbed vector t * i ∈ {−B, B} d , where B is a constant decided by d and . Upon receiving the perturbed tuples, the aggregator simply computes the average value for each attribute over all users, and outputs these averages as the estimates of the mean values for their corresponding attributes. Next, we focus on the calculation of B, which is rather complicated. Essentially, B is a scaling factor to ensure that the expected value of a perturbed attribute is the same as that of the exact attribute value. First, we compute: Feed t i [A j ] and k as input to PM or HM, and obtain a noisy value x i,j ; , otherwise. Then, B is calculated by: is an unbiased estimator of the mean of A j , and which is asymptotically optimal [14]. Although Duchi et al's method can provide strong privacy assurance and asymptotic error bound, it is rather sophisticated, and it cannot handle the case that a tuple contains numeric attributes as well as categorical attributes. To address this issue, we present extensions of PM and HM that (i) are much simpler than Duchi et al.'s solution but achieve the same privacy assurance and asymptotic error bound, and (ii) can handle any combination of numeric and categorical attributes. For ease of exposition, we first extend PM and HM for the case when each t i contains only numeric attributes in Section IV-B, and then discuss the case of arbitrary attributes in Section IV-C. B. Extending PM and HM for Multiple Numeric Attributes Algorithm 4 shows the pseudo-code of our extension of PM and HM for multidimensional numeric data. Given a tuple t i ∈ [−1, 1] d , the algorithm returns a perturbed tuple t * i that has non-zero value on k attributes, where k = max 1, min d, 2.5 . In particular, each A j of those k attributes is selected uniformly at random (without replacement) from all d attributes of t i , and t * i [A j ] is set to d k · x, where x is generated by PM or HM given t i [A j ] and k as input. This full paper appears in the Proceedings of the IEEE 35th Annual International Conference on Data Engineering (ICDE), held in April 2019. The intuition of Algorithm 4 is as follows. 
By requiring each user to submit k (instead of d) attributes, it increases the privacy budget for each attribute from /d to /k, which in turn reduces the noise variance incurred. As a trade-off, sampling k out of d attributes entails additional estimation error, but this trade-off can be balanced by setting k to an appropriate value, which is shown in Equation 12. We derive the setting of k by minimizing the worst-case noise variance of Algorithm 4 when it utilizes PM (resp. HM) 3 . Lemma 4. Algorithm 4 satisfies -local differential privacy. In addition, given an input tuple t i , it outputs a noisy tuple t * i , such that for any j ∈ . The proof appears in the full version [2]. By Lemma 4, the aggregator can use 1 n n i=1 t * [A j ] as an unbiased estimator of the mean of A j . The following lemma shows that the accuracy guarantee of this estimator matches that of Duchi et al.'s solution for multidimensional numeric data (see Equation 11), which has been proved to be asymptotically optimal [14]. This indicates that Algorithm 4's accuracy guarantee is also optimal in the asymptotic sense. Lemma 5. For any where C d is defined by Equation 9. Meanwhile, the variance of t * i [A j ] induced by PM is and the variance of t * i [A j ] induced by HM is where * is defined by Equation 6. From Equations 13, 14, and 15, we can prove the following: 3 We discuss how to obtain the value of k in the full version [2]. To illustrate Corollary 2, Fig. 3 shows the worst-case variance of PM (resp. HM) as a fraction of the worst-case variance of Duchi et al.'s solution, for various d and privacy budget . Observe that for d = 5, 10, 20, 40, the wort-case variance of HM is at most 77% of that of Duchi et al.'s solution, and PM's worst-case variance is also smaller than the latter. In our experiments, we demonstrate that both HM and PM outperform Duchi et al.'s solution in terms of the empirical accuracy for multidimensional numeric data. C. Handling Categorical Attributes So far our discussion is limited to numeric attributes. Next we extend Algorithm 4 to handle data with both numeric and categorical attributes. Recall from Section II that for each categorical attribute A, our objective is to estimate the frequency of each value v in A over all users. We note that most existing LDP algorithms (e.g., [5], [18], [38]) for categorical data are designed for this purpose, albeit limited to a single categorical attribute. Formally, we assume that we are given an algorithm f that takes an input a privacy budget and a one-dimensional tuple t i with a categorical attribute A, and outputs a perturbed tuple t * i while ensuring -LDP. In addition, we assume there is a function g(x, y) that g(x, y) = 1 if x = y; and 0, otherwise. Then for any value v ∈ A, encoding (OUE) protocol of Wang et al. [38] to perturb a single categorical attribute, which is the current state of the art to our knowledge. V. STOCHASTIC GRADIENT DESCENT UNDER LOCAL DIFFERENTIAL PRIVACY This section investigates building a class of machine learning models under -LDP that can be expressed as empirical risk minimization, and solved by stochastic gradient descent (SGD). In particular, we focus on three common types of learning tasks: linear regression, logistic regression, and support vector machines (SVM) classification. Suppose that each user u i has a pair x i , y i , where x i ∈ [−1, 1] d and y i ∈ [−1, 1] (for linear regression) or y i ∈ {−1, 1} (for logistic regression and SVM classification). 
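Before continuing with the loss functions of Section V, the attribute-sampling idea of Algorithm 4 (Section IV-B), which the LDP-SGD procedure below also uses to perturb gradients, can be sketched as follows. The rule for k is our reading of the garbled Equation 12 (the constant 2.5 appears in the text), and perturb_scalar stands for any single-attribute mechanism such as the PM sketch above; treat both as assumptions to verify.

```python
import random

def perturb_tuple(t, epsilon, perturb_scalar):
    """Multidimensional numeric perturbation in the spirit of Algorithm 4.

    't' is a list of d values in [-1, 1].  Each user reports only k of the
    d attributes, each perturbed with budget epsilon/k and rescaled by d/k,
    so the per-attribute mean estimate stays unbiased.  We read Equation 12
    as k = max(1, min(d, floor(epsilon / 2.5)))."""
    d = len(t)
    k = max(1, min(d, int(epsilon / 2.5)))
    chosen = random.sample(range(d), k)          # sample attributes without replacement
    out = [0.0] * d                              # unreported attributes contribute 0
    for j in chosen:
        out[j] = (d / k) * perturb_scalar(t[j], epsilon / k)
    return out

# Aggregation: the mean of coordinate j over all users' reports (zeros included)
# is an unbiased estimate of the true mean of attribute j.
```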
Let (·) be a loss function that maps a d-dimensional parameter vector β into a real number, and is parameterized by x i and y i . We aim to identify a parameter vector β * such that where λ > 0 is a regularization parameter. We consider three specific loss functions: 1) Linear regression: (β; x i , y i ) = (x T i β − y i ) 2 ; 2) Logistic regression: (β; x i , y i ) = log 1 + e −yix T i β ; 3) SVM (hinge loss): (β; x i , y i ) = max 0, 1 − y i x T i β . For convenience, we define The proposed approach solves β * using SGD, which starts from an initial parameter vector β 0 , and iteratively updates it into β 1 , β 2 , . . . based on the following equation: where x, y is the data record of a randomly selected user, ∇ (β t ; x, y) is the gradient of at β t , and γ t is called the learning rate at the t-th iteration. The learning rate γ t is commonly set by a function (called the learning schedule) of the iteration number t; a popular learning schedule is In the non-private setting, SGD terminates when the difference between β t+1 and β t is sufficiently small. Under -LDP, however, ∇ is not directly available to the aggregator, and needs to be collected in a private manner. Towards this end, existing studies [14], [22] have suggested that the aggregator asks the selected user in each iteration to submit a noisy version of ∇ , by using the Laplace mechanism or Duchi et al.'s solution (i.e., Algorithm 3). Our baseline approach is based on this idea, and improves these existing methods by perturbing ∇ using Algorithm 4. In particular, in each iteration, we involve a group G of users, and ask each of them to submit a noisy version of the gradient using Algorithm 4. Here, if any entry of ∇ i is greater than 1 (resp. smaller than −1), then the user should clip it to 1 (resp. −1) before perturbation, where ∇ i is the gradient generated by the i-th user in group G. That is a common technique referred to as "gradient clipping" in the deep learning literature. After that, we update the parameter vector β t with the mean of the noisy gradients, i.e., where ∇ * i is the noisy gradient submitted by the i-th user in group G. This helps because the amount of noise in the , which could be acceptable if |G| = Ω d(log d)/ 2 . Note that in the non-private case, the aggregator often allows each user to participate in multiple iterations (say m iterations) to improve the accuracy of the model. But it does not work in the local differential privacy setting. To explain this, suppose that the i-th (i ∈ [1, m]) gradient returned by the user satisfies i -differential privacy. By the composition property of differential privacy [29], if we enforce -differential privacy for the user's data, we should have m i=1 i ≤ . Consider that we set i = /m. Then, the amount of noise in each gradient becomes O m √ d log d ; accordingly, the group size becomes |G| = Ω m 2 d log d/ 2 , which is m 2 times larger compared to the case where each user only participates in at most one iteration. It then follows that the total number of iterations in the algorithm is inverse proportional to 1/m; i.e., setting m > 1 only degrades the performance of the algorithm. VI. EXPERIMENTS We have implemented the proposed methods and evaluated them using two public datasets extracted from the Integrated Public Use Microdata Series [1], BR and MX, which contains This full paper appears in the Proceedings of the IEEE 35th Annual International Conference on Data Engineering (ICDE), held in April 2019. census records from Brazil and Mexico, respectively. 
BR contains 4M tuples and 16 attributes, among which 6 are numerical (e.g., age) and 10 are categorical (e.g., gender); MX has 4M records and 19 attributes, among which 5 are numerical and 14 are categorical. Both datasets contain a numerical attribute "total income", which we use as the dependent attribute in linear regression, logistic regression, and SVM (explained further in Section VI-B). We normalize the domain of each numerical attribute into [−1, 1]. In all experiments, we report average results over 100 runs. A. Results on Mean Value / Frequency Estimation In the first set of experiments, we consider the task of collecting a noisy, multidimensional tuple from each user, in order to estimate the mean of each numerical attribute and the frequency of each categorical value. Since no existing solution can directly support this task, we take the following best-effort approach combining state-of-the-art solutions through the composition property of differential privacy [29]. Specifically, let t be a tuple with d n numeric attributes and d c categorical attributes. Given total privacy budget , we allocate d n /d budget to the numeric attributes, and d c /d to the categorical ones, respectively. Then, for the numeric attributes, we estimate the mean value for each of them using either (i) Duchi et al.'s solution (i.e., Algorithm 3), which directly handles multiple numeric attributes, (ii) the Laplace mechanism or (iii) SCDF [35], which is applied to each numeric attribute individually using /d budget. The Staircase mechanism leads to similar performance as SCDF, and we omit its results for brevity. Regarding categorical attributes, since no previous solution addresses the multidimensional case, we apply the optimized unary encoding (OUE) protocol of Wang et al. [38], the state of the art for frequency estimation on a single categorical attribute, to each attribute independently with /d budget. Clearly, by the composition property of differential privacy [29], the above approach satisfies -LDP. We evaluate both the above best-effort approach using existing methods, and the proposed solution in Section IV, on the two real datasets BR and MX. For each method, we measure the mean square error (MSE) in the estimated mean values (for numeric attributes) and value frequencies (for categorical attributes). Fig. 4 plots the MSE results as a function of the total privacy budget . Overall, the proposed solution consistently and significantly outperforms the besteffort approach combining existing methods. One major reason is that the estimation error of the proposed solution is asymptotically optimal, which scales sublinearly to the data dimensionality d; in contrast, the best-effort combination of existing approaches involves privacy budget splitting, which is sub-optimal. For instance, on the categorical attributes, applying OUE [38] on each attribute individually leads to O d √ n error (where n is the number of users), which grows linearly with data dimensionality d. This also explains the consistent performance gap between Duchi et al.'s solution [14] and the Laplace mechanism (SCDF mechanism) on numeric attributes. Note that the main advantage of HM over PM is that on a single numeric attribute, HM is never worse than Duchi et al., whereas PM does not have this guarantee. We repeat the experiments on two additional synthetic datasets with the same properties as the first synthetic one, except that their attribute values are drawn from different distributions. 
One follows the uniform distribution where each attribute value is sampled from [−1, 1] uniformly; the other one follows the power law distribution where each attribute value x is sampled from [−1, 1] with probability proportional to c · (x + 2) −10 . Fig. 6 presents the results, which lead to similar conclusions as the results on real and Gaussiandistributed data. Lastly, Figs. 7 and 8 show the result accuracy in terms of MSE with varying the number of users n and dimensionality d on the MX dataset. Observe that more users and lower dimensionality both lead to more accurate results, which agrees with the theoretical analysis in Lemma 5. Meanwhile, in all settings the proposed solutions consistently outperform their competitors by clear margins. In the next subsection, we omit the results for SCDF, which are comparable to that of the Laplace mechanism. B. Results on Empirical Risk Minimization In the second set of experiments, we evaluate the accuracy performance of the proposed methods for linear regression, logistic regression, and SVM classification on BR and MX. For both datasets, we use the numeric attribute "total income" as the dependent variable, and all other attributes as independent variables. Following common practice, we transform each categorical attribute A j with k values into k − 1 binary attributes with a domain {0, 1}, such that (i) the l-th (l < k) value in A j is represented by 1 on the l-th binary attribute and 0 on each of the remaining k − 2 attributes, and (ii) the k-th value in A j is represented by 0 on all binary attributes. After this transformation, the dimensionality of BR (resp. MX) becomes 90 (resp. 94). For logistic regression and SVM, we also covert "total income" into a binary attribute by mapping the values larger than the mean value to 1, and 0 otherwise. Since each user sends gradients to the aggregator, which are all numeric, the experiment involves the 4 competitors in Section VI-A for numeric data: PM, HM, Duchi et al. [14], and the Laplace mechanism applied to each attribute independently with equally split privacy budget (i.e., /d for each attribute). Additionally, we also include the result in the non-private setting. For all methods, we set the regularization factor λ = 10 −4 . On each dataset, we use 10-fold cross validation 5 times to assess the performance of each method. Fig. 9 and Fig. 10 show the misclassification rate of each method for logistic regression and SVM classification, respectively, with varying values of the privacy budget . Similar to the results in Section VI-A, the Laplace mechanism leads to significantly higher than the other three solutions, due to the fact that its error rate is sub-optimal. The proposed algorithms PM and HM consistently outperform Duchi et al.'s solution with clear margins, since (i) the former two have smaller constant as analyzed in Section IV, and (ii) the gradient of each user often consists of elements whose absolute values are small, for which PM and HM are particularly effective, as we mention in Section III-B. Further, in some settings such as SVM with ≥ 2 on BR, the accuracy of PM and HM approaches that of the non-private method. Comparing the results with those in Section VI-A, we observe that the misclassification rates for logistic regression and SVM classification do not drop as quickly with increasing privacy budget as in the case of MSE for mean values and frequency estimates. 
This is due to the inherent stochastic nature of SGD: that accuracy in gradients does not have a direct effect on the accuracy of the model. For the same reason, there is This full paper appears in the Proceedings of the IEEE 35th Annual International Conference on Data Engineering (ICDE), held in April 2019. Fig. 11 demonstrates the mean squared error (MSE) of the linear regression model generated by each method with varying . We omit the MSE results for the Laplace mechanism, since they are far higher than the other three methods. The proposed solutions PM and HM once again consistently outperform Duchi et al.'s solution. Overall, our experimental results demonstrate the effectiveness of PM and HM for empirical risk minimization under local differential privacy, and their consistent performance advantage over existing approaches. VII. RELATED WORK Differential privacy [16] is a strong privacy standard that provides semantic, information-theoretic guarantees on individuals' privacy, which has attracted much attention from various fields, including data management [8], [10], [27], machine learning [4], theory [5], [13], [25], and systems [6]. Earlier models of differential privacy [16], [17], [29] rely on a trusted data curator, who collects and manages the exact private information of individuals, and releases statistics derived from the data under differential privacy requirements. Recently, much attention has been shifted to the local differential privacy (LDP) model (e.g., [13], [25]), which eliminates the data curator and the collection of exact private information. LDP can be connected to the classical randomized response technique in surveys [41]. Erlingsson et al. [18] propose the RAPPOR framework, which is based on the randomized response mechanism for publishing a value for binary attributes under LDP. They use this mechanism with a Bloom filter, which intuitively adds another level of protection and increases the difficulty for the adversary to infer private information. A follow-up paper [19] extends RAPPOR to more complex statistics such as joint-distributions and association testing, as well as categorical attributes that contain a large number of potential values, such as a user's home page. Wang et al. [38] investigate the same problem, and propose a different method: they transform k possible values into a noisy vector with k elements, and send the latter to curator. Bassily and Smith [5] propose an asymptotically optimal solution for building succinct histograms over a large categorical domain under LDP. Note that all of the above methods focus on a single categorical attribute, and, thus, are orthogonal to our work on multidimensional data including numeric attributes. Ren et al. [34] investigate the problem of publishing multiple attributes, and employ the idea of ksized vector, similar to [38]. This approach, however, incurs rather high communication costs between the aggregator and the users, since it involves the transmission of multiple k-sized vectors. Duchi et al. [13] propose the minimax framework for LDP based on information theory, prove upper and lower error bounds of LDP-compliant methods, and analyze the trade-off between privacy and accuracy. Besides, Kairouz et al. [24] propose the extremal mechanisms, which are a family of LDP mechanisms for data with discrete inputs, i.e., each input domain X contains a finite number of possible values. 
These mechanisms have an output distribution pdf with a key property: for any input x ∈ X and any output y, P r[y | x] has only two possible values that differ by a factor of exp( ). Kairouz et al. show that for any given utility measure, there exists an extremal mechanism with optimal utility under this measure, using a linear program with 2 |X | variables. It is unclear how to apply extremal mechanisms to continuous input domains with an infinite number of possible values, which is the focus on this paper. Finally, a recent work [3] introduces a hybrid model that involves both centralized and local differential privacy. Bittau et al. [6] evaluate real-world implementations of LDP. Also, LDP has been considered in several applications including the collection of indoor positioning data [26], inference control on mobile sensing [28], and the publication of crowdsourced data [34]. held in April 2019. VIII. CONCLUSION This work systematically investigates the problem of collecting and analyzing users' personal data under -local differential privacy, in which the aggregator only collects randomized data from the users, and computes statistics based on such data. The proposed solution is able to collect data records that contain multiple numerical and categorical attributes, and compute accurate statistics from simple ones such as mean and frequency to complex machine learning models such as linear regression, logistic regression and SVM classification. Our solution achieves both optimal asymptotic error bound and high accuracy in practice. In the next step, we plan to apply the proposed solution to more complex data analysis tasks such as deep neural networks.
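As a closing illustration, one LDP-SGD update of the kind described in Section V can be sketched as follows. The clipping of gradient coordinates to [−1, 1], the per-group perturbation, and the averaged update follow the text; the function and parameter names (grad_fn, perturb, lr) are ours, and each user is assumed to participate in at most one iteration.

```python
import numpy as np

def ldp_sgd_step(beta, group_data, epsilon, grad_fn, perturb, lr):
    """One LDP-SGD update (sketch of the procedure in Section V).

    grad_fn(beta, x, y) -> per-example loss gradient (length-d array);
    perturb(g_list, epsilon) -> noisy gradient, e.g. a multidimensional
    mechanism such as the one sketched in Section IV."""
    noisy_grads = []
    for x, y in group_data:
        g = np.clip(grad_fn(beta, x, y), -1.0, 1.0)    # gradient clipping to [-1, 1]
        noisy_grads.append(perturb(list(g), epsilon))  # user submits only the noisy gradient
    g_hat = np.mean(noisy_grads, axis=0)               # aggregator averages the reports
    return beta - lr * g_hat

# Example per-example gradient for ridge-regularized linear regression:
# grad_fn = lambda beta, x, y: 2.0 * (x @ beta - y) * x + 2.0 * lam * beta
```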
2019-06-10T18:38:26.305Z
2019-04-01T00:00:00.000
{ "year": 2019, "sha1": "2fbb4ba85869a333637f2b5d762ad69fb58bc7af", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1907.00782", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "8a51d0a74e7087d4c6cc069b98c133ffcc6ef265", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science", "Geology" ] }
233272016
pes2o/s2orc
v3-fos-license
Involvement of Cloud Computing and IoT in the Field of Health Care - The rapid growth of the Internet of Things (IoT) and cloud computing is finding is a big part of our lifestyle, planning to upgrade personal satisfaction by partnering with various shrewd gadgets, innovations, and applications. Generally, the fields of IoT and cloud computing considers the mechanization of everything around the world. Presently scientists have discovered that there is a reasonably big utilization of IoT and cloud in the field of the health care industry. In between the panoply of uses empowered by the Internet of Things (IoT), brilliant and associated health care services play a significant role . The arrangement of network sensors, either exhausted on the body or incorporated into our living environment which makes the processing of information demonstrative of our physical and emotional well-being. Such data can achieve a positive disruptive shift in the human services, collected consistently, totaled, and adequately mined. This study exhibits the significance of the Internet of Things and Cloud in the field of the Healthcare Industry. The beneficiary of the Internet of Things can be huge in the field of medicinal services applications. During the utilization of IoT based restorative administrations, sensors work for screening and quantifying different wellbeing parameters in the human body. These gadgets can be mainly centered over the monitoring condition of a patient when they are distant from everyone else or when they are not hospitalized. Thusly, a consistent input to the authority, relatives, or the patient is provided by them. Various wearable used for detecting gadgets in the market and they are fitted out with medicinal sensors to monitor various requirements, for example, body temperature, pulse, circulatory strain, breath rate, heartbeat and so on. Healthcare applications create a free-living environment that is conceivable and increasingly appropriate for the elderly and patients with serious health conditions. The use of IoT sensors is efficiently used in the current scenario to monitor and screen well-being conditions and transmit cautions if any phenomenal signs are detected. In addition, if there is a case of a mild problem, the IoT application has a course of action to administer a drug to patients. IoT-cloud excellently observes the crisis response and idleness for data trading. The data is transferred among edge servers, cloud, and client gadgets which straightforwardly influences the demonstration. To accomplish this objective edge figuring appeared and distributed computing stays on the edge of the framework to help constrained transmission capacity, dormancy, and system clog. The edge registration was also developed to assist dispersed and virtual innovation to provide cloud and IoT gadgets such as sensors with device support [2]. The majority of the proposed cloud-based IoT medicinal services observing structures have three parts such as information obtaining for utilizing wearable sensors, transmission information which administers constant sending of the information to the server of the social support association in a safe manner, and the cloud preparing for information such as stockpiling, investigation, and perception [3]. For the most part, the e-Healthcare control system consists of: ii. 
Wireless Body Area Network (WBAN), which relies on IoT communication to enable Machine-to-Machine (M2M) interaction and communication between elements, and allows doctors to communicate remotely with the medical server; iii. a medical server in the Cloud used for data storage, processing, and analytical examination; iv. medical clinics, where doctors (e.g., specialists) can retrieve data remotely from the medical server in the Cloud [4]. These days, research in this field is moving toward the integration of Cloud Computing and the Internet of Things [5], [6]. This amalgamation is referred to as the Cloud of Things (CoT), a new paradigm that combines two remarkable and prominent technologies, the Internet of Things and the Cloud, and opens up future opportunities for Internet applications [7]. Although IoT and Cloud are two distinct and autonomous technological developments, they need to be organized so that they complement each other and support pervasive and ubiquitous computing [8], [9]. In this article, the main discussion concerns the integration of cloud computing and the Internet of Things, followed by the importance of cloud computing and, finally, the Internet of Things in the field of healthcare services. II. INTEGRATION OF CLOUD COMPUTING AND IOT Some specialists have developed a concept known as the Cloud of Things (CoT) to reconcile the cloud and the IoT [10]. The CoT paradigm is intended to use the IoT together with the Cloud. In CoT, the Cloud acts as middleware that creates a straightforward link between objects and customers/applications (i.e., it reduces complexity and thereby facilitates the development of applications that handle smart objects) [11]. The Cloud helps the IoT with its virtually unlimited storage and computing resources, while the IoT gives the Cloud the opportunity to extend its services to real-world objects [12]. Numerous efforts have been made to advance this combination. Sensor-Cloud is one of the most relevant of these initiatives; it integrates sensors into the cloud data center and provides networked access to sensor data and resources for management [13]. By exploiting the combination of Cloud and IoT, numerous benefits can be obtained. Integration with the Cloud improves IoT processing and computation by adding capabilities that are not available at the IoT end, and saves energy by enabling task offloading [15], [7], [12], [16]. Finally, the Cloud model fulfills the needs of the IoT through its practically unlimited processing power and on-demand usage, which enables simple analysis of IoT data. iii. The Cloud provides a straightforward and easy solution that enables the IoT to observe and manage objects anywhere without the need to deploy expensive dedicated equipment. Besides, it gives an effective arrangement for managing the data created by Things [7]. iv. In the field of IoT, there are restrictions in many areas, for example, scalability, interoperability, and efficiency, because of the high heterogeneity of its devices, technologies, and protocols. The Cloud can facilitate IoT data collection and processing, and ease the integration of new things, while reducing the cost of deployment and of complex data handling [7]. v.
CoT enables progress toward new advanced services and applications that drive the growth of the Cloud through objects, while raising new issues [7], [12] when addressing scalability. III. SYSTEM ARCHITECTURE Knowledge of the real world, and the data generated by wearable therapeutic gadgets linked to electronic health records, can be highly beneficial for caregivers and a strong resource for scientists, since these devices can collect unobtrusive information effectively outside the clinic walls [17]. This can be achieved by medical devices that continuously analyze and securely store health-related data in a cloud, which provides data storage over extended timeframes, enables genuinely predictive healthcare, and can even uncover possible unhealthy habits. In such health frameworks, it is important to understand how a Personal Health Device (PHD) transmits health information, how the information is stored in a Personal Health Record (PHR), and how the information is securely kept on the web by Personal Health Managers (PHMs). A. Personal Health Devices-Here, wearable or not, Personal Health Devices are resource-constrained devices that allow users to securely accumulate, store, preserve, and share their own and their family's health information with a gateway for the purpose of capturing, displaying, and transmitting it [18]. Low-power communication technology such as Bluetooth is a natural choice for deploying personal e-Health frameworks and gadgets. A gateway transfers the information, for example via a phone, to a healthcare service for further analysis and use by different parties, such as health and wellness services, disease control, or stand-alone assessment devices, as shown in Figure 1. The communication between a PHD and the gateway is called a virtual point-to-point relationship [19]. For the most part, a PHD communicates with a single gateway at a specific moment, while gateways can interface with several PHDs through different point-to-point associations at the same time [20]. Such incorporation in the healthcare setting can fundamentally contribute to building effective healthcare applications for managing and efficiently monitoring clinics and patients, for resource sharing, and for controlling cost expenditures. In the work of S. Luo and B. Ren, the authors describe how, with Cloud and IoT, health care monitoring devices and data services can effectively provide early recognition and treatment of chronic illnesses that significantly affect an individual's well-being. In this setting, IoT body sensors (implantable or wearable) collect the necessary data from a person, and the data can then be decomposed and processed in the Cloud [21]. The utilization of CoT in the health care space offers new opportunities for medical IT infrastructure and can improve healthcare services, as noted by L. Catarinucci et al. [22]. In addition, CoT strengthens healthcare services and the quality of real health benefits by simplifying access to vital patient information and transmitting it to a medical center in the Cloud for processing and planning purposes [7]. According to the work of G. Fortino et al.
this can be put forward that the utilization of CoT in Body Sensor Network (BSN) dependent medicinal services help during the time spent away from accumulated information, this is similar to preparing and dissecting them in an adaptable style [16]. With CoT, human services sensors can be overseen productively in a straightforward way just as make any managing hazardous medicinal services benefits that are highly effective mentioned by author M. To accomplish productive medicinal services administrations concerning delay-touchy and vitality utilization, haze registering can assume a crucial job by Cloud weight reduction as a localized stockpiling of IoT gadgets and its ability to manage information [24][25] [26] [26]. Since patients' information is hazardous, guidelines do not enable them to be prepared outside the association of the medicinal service [27]. Fog Computing is expected to fill this hole along these lines by carrying planning capabilities nearer to social insurance suppliers (e.g., clinic). After doing this, a lot of advantages can be obtained, such as decreased idleness and decreased use of vitality, as well as increased use of transfer speed and protection of knowledge. is a CoT-based venture subsidized by Spanish National R&D planned to provide creative social insurance administrations to reliant and individuals with serious illnesses issues. With the implementation of CoT, the VCC task means to increase social and innovative goals that improve the nature of the new administrations being offered to the traditional models. These destinations extend from making stage engineeringthat is in charge of social event physiological parameters from anyplace and convey them to the Cloud for capacity and preparing purposes. To support the elderly to do their physical planning procedures and to assist parental figures in a professional way to monitor and screen the elderly remotely. According to the work of author Danilo, et al. [19] regarding the expanding accessibility of associated Personal Health Devices (PHDs) empowers another sort of data to be accessible on the Internet: wellbeing data. The majority of these gadgets have explicit approaches to associate and share data to the Internet through doors or wellbeing directors, and making vertical arrangements where one gadget just converses with one wellbeing administration. In this unique situation, this article proposes a design that suggests about the utilization of various sorts of wellbeing supervisors however keeping interoperability by the utilization of generally embraced models. The primary commitment of this work is the diffusion of well-being managers in various fields, such as mobile phones and cloud applications, allowing the use of solitary well-being management for different types of PHDs. V. CLOUD AND IOT GADGETS FOR HEALTH CARE The IoT's empowerment influences are genius gadgets that track and remotely monitor constantly following imaginable and cloud-based administrations that power networks. Numerous applications discuss verbally with the patients in regards to their wellbeing routine [29]. The application conveys warnings to the patients. There is a particular gathering of patients who often practice these devices, one such gathering is those with standard age-related diseases such as blood weights and blood sugars, and patients with heftiness are another gathering [30]. A few instances of each combined with the IoT to help proactive social insurance. A. 
Insulin Injection Trackers - A smart insulin injection tracker helps diabetic patients to manage their health. The injection tracker is an automated cap that fits most insulin pens available. It remotely transmits a diabetic's insulin injection information to a companion application. B. Prescription Pills - A sensor-enabled pill lets the physician confirm that the patient is taking his drug on schedule. At the point when the pill arrives at the patient's stomach and is exposed to stomach fluid, it transmits a signal through the patient's own body tissue to a patch on the patient's skin. The patch then communicates its information to a PDA, where it can be shared with doctors and caregivers. C. Asthma Nitric Oxide (NO) monitor - It is a small, hand-held device that takes continuous measurements of NO in a patient's breath, which can help the management of asthma and other inflammatory airway conditions. At the point when a patient breathes into the mouthpiece of the device, the levels of NO in their breath are measured to decide whether there is airway inflammation and which medicine would best treat it. The information that is gathered can be synchronized with electronic medical record systems. D. Hypothermia - The hypothermia detector is a smart fluid-management device that automatically measures urine output and Core Body Temperature (CBT) for patients whose bladder is fitted with a urinary catheter. By checking these vital signs, care can be started early for heart failure, kidney damage, infectious disease, sepsis, prostate tumors, diabetes, and burn patients. Measuring CBT can likewise indicate infection or hypothermia. When used in medical clinics, this device delivers fill-level and CBT information directly to a nursing station or to a remote screen. VI. CONCLUSION AND FUTURE WORK With progress in enabling smart technologies such as IoT, big data, fog and cloud computing, smart city applications are an expanding area of research these days. Intelligent healthcare is a basic need, and the quality-of-service requirement of a healthcare application is low latency, or a short end-to-end delay of the health information from the point of origin to the processing of a feedback. Several healthcare systems based on cloud and IoT have been reported, since sensors generate information and the cloud can supply the resources required to store, process, and retrieve data on demand. It is likewise recognized that IoT integrated with the cloud is becoming deeply established, with a wide adoption of connected devices in the healthcare industry. Consequently, it is
2021-04-16T19:54:46.267Z
2021-01-15T00:00:00.000
{ "year": 2021, "sha1": "49475c1e78ebc8aef89a60556395bf4f3b9d55f3", "oa_license": null, "oa_url": "https://doi.org/10.36548/jtcsst.2021.1.001", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "49475c1e78ebc8aef89a60556395bf4f3b9d55f3", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
144427365
pes2o/s2orc
v3-fos-license
The role of continuing training motivation for work ability and the desire to work past retirement age Germany, relying on a pay-as-you-go pension system has increased regular retirement age to 67 due to its ageing population caused by decreasing birth rates and increasing life expectancy. Using data from the nationally representative ‘Survey on continuing in employment in pensionable age’, we investigate the relevance of training motivation for work ability and the desire to work past retirement age and whether differences between social groups reflect inequalities in training participation. Results show significant positive correlations between continuing training motivation and work ability and desire to work past retirement age. Differentiated for selected respondent groups the level of qualification has a significant influence. This effect was stronger than any differences with regard to gender or employment participation. Results imply external conditions only partly explain older workers’ work ability or desire to work past retirement age. Compared to inequalities in training participation, motivation for continuing training is high across analysed subgroups. Introduction Due to low fertility and increasing life expectancy in Germany the population is ageing as well as shrinking (Börsch-Supan & Wilke, 2009).The Statistisches Bundesamt (Federal Statistical Office in Germany) foresees that the percentage of older people (65+) will increase from around 20% in 2010 to 28% in 2030 (Statistisches Bundesamt, 2009).At the same time, the population of working age will be reduced by 6.5 million until the year 2025 (Bundesagentur für Arbeit, 2013).Since these changes will have consequences for the economy and social security system the German government developed strategies to compensate these effects.Especially the pension system, organised as a pay-as-you-go system, needed reforms.Hence, in 2008 it was decided that starting in 2012 retirement age will be raised stepwise from 65 to 67.Furthermore, the German government put into action the first demographic strategy called 'every age counts' with six fields of action.One of these fields is 'Keeping workers motivated, skilled and healthy' (Federal Ministry of the Interior, 2012).This accompanies the paradigm shift from widespread pre-retirement regulations to a prolonged working life.As a consequence, it is important to understand if individuals of the age of 55 and older would be willing and able to continue working even beyond retirement age.It can be shown that continuing training helps to keep people employable (e.g., Kenny, English & Kilmartin, 2007;Staudinger & Heidemeier, 2009) and that upon reaching retirement age, individuals are still in rather good health with years of active time to spend (Tesch-Römer, Heribert & Wurm, 2006). 
This observation is reflected in our data source, the nationally representative survey on older workers' attitudes towards working life conducted by the German Federal Institute of Population Research in 2008 (fully described in Büsch, Dorbritz, Heien & Micheel, 2010) that found 47.3% of respondents aged between 55 and 64 years prepared to work past traditional retirement age.As this figure leaves room for improvement we seek a deeper understanding of factors determining the wish to work past retirement.Earlier work (e.g., Blancke, Roth & Schmid, 2000;Bretschneider, 2007) points to lifelong training as an important determinant allowing for individual task and job mobility, and for leading an independent working life.Another closely related factor is work ability, enabling individuals to maintain and update knowledge and skills, thus staying employable.Therefore, this paper aims to give insight into the role of continuing training motivation for work ability and the desire to work past work retirement age.As both work ability and training participation have been shown to differ across social groups we also analyse possible group differences. This paper makes a new contribution to the literature because it highlights the role of training motivation for staying employed at a later age.As a consequence, organisations and policy-makers are challenged to establish motivation-enhancing work environments that follow a life span approach to instilling and promoting learning and training motivation.The paper is structured as follows: First a general description of continuing training and participation in Germany is given, with insight into motivational aspects, outcomes of continuing training (work ability and desire to work past retirement age) and a brief discussion of social heterogeneity in participation.Second, we conduct multivariate analyses to test our proposed relationships and discuss results with regard to previous findings on the subject.Third, the paper concludes with suggestions for organisations and policy-makers. Continuing training Individuals are increasingly expected to become active on their own behalf, displaying the ability to self-organize themselves as an indicator of their professional competences (Dienel & Willke, 2004).This 'lifelong learning', or 'self-directed learning' (Garrison, 1997) is one precondition to achieve and retain 'employability' on the labour market (Europäische Kommission, 1995).Thus, an individual's affinity towards continuing training has become a point of interest in e.g.job interviews and is perceived as an important factor in holding a job (Vollmer, 2012). 
Continuing training motivation is determined by contextual as well as personal factors (Mathieu & Martineau, 1997;Colquitt, LePine & Noe, 2000) such as achievement motivation (Mathieu, Martineau & Tannenbaum, 1993), self-efficacy (Van Erde & Thierry, 1996) or job-related personal factors such as job involvement, organizational and career commitment (Colquitt et al., 2000).Generally, interest in continuing training is high, but decreases with age (see Berg, Elders & Burdorf, 2010;Schröder & Gilberg, 2005;Hansen & Nielsen, 2006).For individuals 50 years or older, the most important reason for not participating in training is lack of obligation (Huber, 2009), implying, possibly, a lack of motivation.It has been surmised that older workers experience diminishing learning skills, negatively affecting their learning motivation and perceived self-efficacy (Dworschak, Buck & Schletz, 2006).But as there is hardly any decline in cognitive functioning in healthy adults under 65 years (see Baltes et al., 2006), declining learning abilities do not seem to explain this motivational drop.Expecting a poor pay-off for training may also contribute to lack of training motivation among workers with decreased work ability, as they are more at risk of premature departure from working life and thus feel less motivated to invest in their career (Berg et al., 2010). Training motivation strongly influences training outcomes (Schiefele & Schreyer, 1994).As participants' motivation to learn is 'influenced by beliefs concerning effortperformance and performance-outcome relationships, career/job attitudes, and reactions to skill needs assessment' (Noe, 1986, p. 743), training participants with similar abilities are likely to be more successful at acquiring knowledge, being able to change behaviour, and effectively using that knowledge in their work if they are motivated (Noe, 1986).This implies that training has a stronger effect on work ability if individuals are motivated. Outcomes of continuing training motivation In our analysis we focus on work ability and desire to work past retirement age as outcomes of continuing training motivation.As workers need both physical and mental abilities that match job demands to perform their tasks successfully, the term 'work ability' depicts a balance between job requirements and individual characteristics, such as health, knowledge, skills or motivation (Berg et al., 2010).Work ability seeks to measure 'How good is the worker at present, in the near future, and how able is he or she to do his or her work with respect to the work demands, health and mental resources' (Ilmarinen, Tuomi, & Seitsamo, 2005, p.3). Follow-up studies (von Bonsdorff, Huuhtanen, Tuomi & Seitsamo, 2010) found that lower work ability predicts earlier retirement between ages of 55 and 65 (see Sell, 2009;Hopsu, Leppänen, Ranta & Louhevaara, 2005), and the reverse (Salonen, Arola, Nygård, Huhtala & Koivisto, 2003). 
On average, work ability declines with age, although with decreasing stability (Ilmarinen et al., 2005).On an individual level, this effect is due to different personal biographies, health, training level or individual coping strategies employed to counter age effects.Additionally, there is the effect of the different organisations on workers throughout their occupational biographies (Dworschak et al., 2006).For older adults aged 55-64 health and functional capacities as well as work factors influence work ability most, while competences, values and attitudes play a lesser role that further decreases with age.Gender differences are minor (Ilmarinen et al., 2005), but there is a difference between individuals working physically as opposed to cognitive workers, with the latter enjoying higher work ability (see Tuomi, Huuhtanen, Nykyri, & Ilmarinen, 2001). Education science has brought forth various theories on self-directed learning and learning motivation, focussing on goal-and content-related conditions as well as interest-related aspects of learning.Within the latter, person-object-theory focuses on an individual's interest that is directed towards a certain subject, motivating the person to learn more about it and gain relevant skills and abilities (see Krapp, 2005).With this interest comes a positive emotional association, reinforcing the learning process.Similarly, self-determination theory hypothesizes that intrinsic or extrinsic motivation lead to different outcomes in terms of quality of emotional experience as well as differing quality of knowledge acquired.As training motivation activates people to seek out training, learn and apply training contents to their work environment (see Beier & Kanfer, 2009;Noe, 1986), we expect continuing training motivation to be positively related to work ability: H1: Individuals who are highly motivated to train also ascribe to themselves higher work ability.While work ability is a desired outcome of continuing training it is also a necessary precondition for working past retirement age, along with the desire to do so.The general desire to work past retirement age is high: almost half of those aged between 55 to 64 can well or rather well envision working past retirement age (Büsch et al., 2010).Studies indicate that upon reaching retirement age, individuals are still in rather good health with years of active time to spend (Tesch-Römer et al., 2006).Still, blue-collarworkers in physically challenging jobs go into pension on average 8 years before whitecollar workers.They also give health as the main factor for leaving work life, while the latter usually work until legal retirement age (Statistisches Bundesamt, 2014;Berg et al., 2010). 
Retirement is less an event but a process that starts long before the actual act takes place.It is rooted in environmental factors, such as job characteristics (see Brusch & Büsch, 2012) or marital life and personal factors such as physical well-being, financial and skills status (Beehr, 1986;Shacklock, Brunetto, & Nelson, 2009).Indeed, there is evidence that those motivated to work longer years can be broadly separated into two groups, those who need to work longer due to financial needs and those who enjoy their work so much that they do not wish to stop (at least not completely, see McNair, 2006).Here recognition and management and team support play major roles (Saba & Guerin, 2005;Van Dam, van der Vorst & van der Heijden, 2009).Those who see themselves working past retirement age wish to pass on knowledge and experience to younger workers, they also cite fun at work as a main reason and that it helps them to stay fit.They feel strongly connected to their workplace and tend to feel too young to retire.Those who do not want to continue working give physical hazard at work and hard or monotonous labour, stress and bad health as the main reasons.Organisational offers such as training opportunities and special age-friendly work equipment do not seem to have a profound effect on prolonging working life (Boockmann, Fries & Göbel, 2013).It would seem likely that a person with high continuing training motivation would also express the desire to work past retirement age as it may bring new experiences and knowledge and provide the opportunity to apply or transfer knowledge and gain recognition.We thus posit: H2: Individuals who are highly motivated to train also feel more inclined to work past retirement age.In line with our argumentation we also propose that the quantity of trainings taken is not relevant for the desire to work past retirement age (see also Boockmann et al., 2013): H3: The actual number of continuing trainings taken has no effect on the desire to work past retirement age. Nevertheless, the desire to continue working past retirement age may not always result in an opportunity to do so: many organisations rather lay off older workers or do not even hire them -negative age-stereotypes are still in place and hard to eradicate, even if proven wrong (see Baltes et al., 2006;Schulz & Stamov Roßnagel, 2010). Social heterogeneity in training participation As the workforce in Germany is ageing steadily and pension age has been raised so as to sustain the pension system, a substantial amount of research is focused on organisational and political barriers and drivers of older workers' inclusion in continuing training in order to maintain their employability.In Germany, sociodemographic and socio-cultural characteristics still influence educational participation and outcomes, so educational inequalities are present (see Hradil, 1999), as can be seen in numerous group-related incidents of inequality in continuing training participation, e.g.employment-related, gender-or age-related (e.g.Bilger et al., 2013).Early educational (dis)advantages often permeate individual life-spans, influencing further educational and career paths and life choices (see OECD, 2002). Analyses against the backdrop of social stratification seek to understand 'who gets what and why' (Alexander, 2001, p. 169).The concept of social stratification involves the 'classification of people into groups based on shared socio-economic conditions' (Barker, 2003, p. 
436) and the development of a vertical and horizontal differentiation between these groups with varying access to resources.Although the reality and the beliefs about this structure are passed on between generations, they are indeed changeable (Macionis & Gerber, 2010). According to Tippelt and von Hippel (2005), different social milieus and social strata show differences when it comes to continuing training.Social circumstances and behaviour lead to different lifestyles, which can be understood as a framework for individuals' behaviour and identity, characterised by relative stability (on lifestyle sociology see also Lüdtke, 1989).The upper/middle social strata, represented by postmaterial and modern performer milieus is rather well-educated with good incometheir training participation and learning motivation are the highest.The lower/middle social strata, represented by consumption-materialists and hedonist milieus with generally lower income and less education perceive learning as more of a strain, often based on previous negative experiences in their education, but also due to often unfavourable working conditions (e.g.shift-work) or financial limitations. While there is evidence that e.g.belonging to a lower social stratum is negatively related to participation (and sometimes success) in education or training, the effect on training motivation may be the opposite (see e.g.Walter & Stanat, 2008), as training or education might be e.g.perceived as a means of improving one's less advantageous position in society or the workplace. Job-related continuing training is usually offered and (at least partly) paid for by the employer, so the question of employers' selection criteria of training participants also needs to be examined.Human capital theory provides a framework explaining why an employer might hesitate to invest resources in e.g.older or female employees as the pay-off of that investment might seem risky -e.g., women might get pregnant and leave their job, temporary workers might soon move on to their next job, older employees are facing their retirement.Closely connected to that, negative discrimination and stereotyping with regard to age, gender or other socio-demographic variables are still prevalent in the workplace (on age stereotyping see e.g.Amrhein & Backes, 2007).For older workers this might be the belief that their learning abilities and motivation have diminished, part-time workers are suspected that they do not invest as much energy or commitment in their work as full-time workers and so on.These attitudes and beliefs may have been proven wrong, but as they have been formed over decades, they seem just as hard to change. Furthermore, participation may also depend on other factors such as informal obligation, social pressure, or legal, union or company regulations that come along with a particular status, level of qualification, making participation in continuing training more or less likely (see Wittpoth, 2009).Finally, it needs to be questioned if training participation is a positive end in itself, meaning that lower participation is generally perceived negatively and in need of improvement.Arguably, non-participation can be found in all socio-demographic groups, implying that people have different ways to handle their work and life environments, with classical classroom-based vocational training being only one possible way and, e.g.learning by doing another (Görlitz et al., 2012). 
Just how much of the differences in participation are rooted in involuntary exclusion may be approached by assessing the existence of corresponding group differences in continuing training motivation.According to lifestyle theory, learning motivation is more prevalent in milieus of the upper stratum, characterised by e.g. higher levels of qualification.Thus, this analysis also looks into the moderating role of socio-demographic variables, such as gender, employment participation (e.g.working hours and contract duration) or level of qualification: H4: Continuing training motivation varies among different socio-demographic groups.We also expect significant group differences when it comes to the relationship between continuing training motivation with work ability and with the desire to continue working past retirement age.Since groups with less employment participation -who are often women (Kümmerling, Jansen & Lehndorff, 2008) -will face larger barriers to training participation than others, lack of training opportunities will lead to lower work ability, even if they are highly motivated to train.With regard to the desire to continue work after retirement age, part-time workers may already face less recognition at work and their financial gain through work is comparably low.For temporary workers, lack of recognition but also lack of opportunity may be detrimental to continuing work.We thus hypothesise: H5: Social group influences moderate the relationship of continuing training motivation with work ability and the desire to work past retirement age. Empirical Investigation Our empirical analysis is based on interviews collected as part of a larger study ('Weiterbeschäftigungssurvey', or 'Weiterbeschäftigung im Rentenalter -Wünsche, Bedingungen, Möglichkeiten', [Survey on continuing in employment in pensionable age]), commissioned in 2008 by the German Ministry of the Interior and conducted by the German Federal Institute of Population Research, which is fully described elsewhere (see Büsch et al., 2010).By means of methods of multivariate analyses, we test the influence of continuing training motivation on work ability and the desire to continue work after reaching retirement age.Additionally, we test if there are significant group differences with regard to gender, level of qualification, working hours or contract duration. 
Data set and collection The 'Weiterbeschäftigungssurvey' aims to provide insights on factors that play a part in working past retirement age.For the survey 1,500 employed individuals (workers, employees, civil servants, the marginally employed and those in job-creating and structural adjustment measures) aged 55 to 64 were voluntarily and anonymously questioned on work, health and retirement via computer-assisted telephone interviews.The survey excluded pensioners, the unemployed, seasonal workers, short-term-workers and workers in part-time employment prior to retirement who are already released.The sample was selected from a population of 3.8 million people, representing 40.6% of this age group, 7.4% of all persons aged 18 to 64 and 4.7% of the total German population in the annual average of 2006.The realised sample is not representative for the older population in Germany, although intended.This is most apparent with regard to disposable income: Most male respondents (37.5%) belong to the highest income group (3,000€ and more), female respondents also find themselves in higher income groups (28.9% with a monthly disposable income of 2,000€ -less than 3,000€).Male median income is 2,620€ and female median income is 1,980€. The following analysis focuses on white-collar workers only, so a subsample of 953 employees will be used for our further analysis.From the survey we obtained three items to measure continuing training motivation: 'Continually learning new things is very important in my life', 'I shall always strive to continually train', and 'I like to attend continuing training classes' rated on a 5-point Likert-type scale (with 1='fully applicable' to 5='not at all applicable'), achieving an acceptable Cronbach's α of roundabout 0.7.Work ability is directly measured by asking respondents to self-assess their current work ability, their work ability five years ago, and predicted work ability five years from now.All three items used again a 5-point Likert-type scale (with 1='very high' to 5='very low') and achieved a Cronbach's α of 0.628.Desire to work past retirement age was measured by the single item: 'Would you like to be working after reaching official retirement age, e.g. in minor employment?' Respondents answered on a 5-point Likert-type scale (1)=yes, (2)=rather yes, (3)=don't know, (4) rather no, (5)=no.As control variables we used gender, working hours, contract duration and level of qualification. Working hours as per contract are captured by asking respondents to categorise themselves as either part-time (15-35 hours/week), full-time (35 hours/week or more), marginally employed (less than 15 hours) or unemployed.Contract duration was measured by asking, 'Is your work contract temporary?' with respondents answering either yes or no.Level of qualification was also self-rated, answering the question, 'What is your level of qualification?', selecting among the options 'no vocational graduation', 'Apprenticeship or similar', 'Master craftsmen/ technicians or similar', 'Graduates of universities or of universities of applied sciences', or 'Other graduation'.Respondents were also asked to give the number of continuing trainings taken during the past three years. 
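To make the scale construction concrete, the following minimal sketch (not the survey's actual analysis code) shows how three Likert items could be combined into a motivation score and how their internal consistency (Cronbach's α, reported as roughly 0.7 above) can be computed; the column names item1-item3 and the simulated responses are illustrative assumptions.

```python
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a set of Likert items (rows = respondents)."""
    items = items.dropna()
    k = items.shape[1]                         # number of items (here: 3)
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the sum score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Illustrative data: three 5-point Likert items (1 = fully applicable ... 5 = not at all applicable)
rng = np.random.default_rng(0)
base = rng.integers(1, 4, size=200)
df = pd.DataFrame({
    "item1": np.clip(base + rng.integers(-1, 2, 200), 1, 5),
    "item2": np.clip(base + rng.integers(-1, 2, 200), 1, 5),
    "item3": np.clip(base + rng.integers(-1, 2, 200), 1, 5),
})

print(f"Cronbach's alpha: {cronbach_alpha(df):.2f}")
# scale score used in subsequent analyses
df["training_motivation"] = df[["item1", "item2", "item3"]].mean(axis=1)
```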
Data analysis and results In Table 1, the main results regarding the desire to work past retirement age, work ability, and continuing training motivation are given as mean values (standard deviation; 'std. dev.'). Here, the analyses are separated for different respondent groups and are enhanced through a test of significance for the most important intrinsic training motivation (with a t-test in case of two group levels and an F-test in case of more than two group levels). In general, continuing training motivation is quite high (with an overall mean of about 1.8) - in contrast to a moderate work ability (about 2.2) and desire to work past retirement age (about 2.5). The high training motivation of our older sample seems in accordance with the theory of age-related motivational maintenance which posits that in the course of a life-time learning motivation does not necessarily decline but stays high or even increases (see Gegenfurtner & Vauras, 2012). In addition, heterogeneity of continuing training motivation is quite low (standard deviation much lower than for the desire to work past retirement age). Analysing different groups of respondents, male full-time employees with a fixed-term contract and the highest qualification level are most strongly motivated to train. However, only the difference for the differentiation with respect to qualification level is relevant from a statistical point of view (p<.001). For the total sample our analysis yields a weak significant positive correlation between continuing training motivation and work ability. A weaker significant correlation shows for continuing training motivation and the desire to work past retirement age. On group level, some small differences could be observed. Correlations for men are stronger than for women. Full-time employment is correlated to both work ability and desire to work past retirement age, whereas no correlations are found for part-time workers. Similarly, individuals with permanent contracts again show weak significant correlations while there is no such effect for individuals with fixed-term contracts. With regard to qualification level the picture is more complex. For individuals without vocational graduation no correlations could be observed. While both correlations are found for the qualification level of apprenticeship, for master craftsmen only a very weak significant correlation for continuing training motivation with work ability is found. The strongest correlation with regard to qualification level is found for university graduates, also only for work ability. Considering this data, a detailed analysis for men seems to be reasonable. Here, the analyses of the relationships of continuing training motivation with work ability leads to a Pearson correlation of 0.309 (p<.01) and of continuing training motivation with the desire to work past retirement age to a Spearman correlation of 0.169 (p<.05). But further analyses, e.g. linear regression analyses, showed no relevant relationships (i.e. very low r² values).
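As an illustration of the correlation and group-comparison procedure described above (Pearson for the work ability scale, Spearman for the ordinal desire item, and an F-test across more than two group levels), a small sketch follows; the DataFrame df and its column names are assumed for the example and do not reproduce the survey data.

```python
from scipy import stats
import pandas as pd

def correlations_by_group(df: pd.DataFrame, group_col: str) -> None:
    """Pearson/Spearman correlations of training motivation per respondent group."""
    for level, sub in df.groupby(group_col):
        r_p, p_p = stats.pearsonr(sub["training_motivation"], sub["work_ability"])
        r_s, p_s = stats.spearmanr(sub["training_motivation"], sub["desire_to_work"])
        print(f"{group_col}={level}: Pearson r={r_p:.3f} (p={p_p:.3f}), "
              f"Spearman rho={r_s:.3f} (p={p_s:.3f})")

def group_difference(df: pd.DataFrame, group_col: str) -> None:
    """F-test (one-way ANOVA) for differences in training motivation across groups."""
    samples = [sub["training_motivation"].values for _, sub in df.groupby(group_col)]
    f, p = stats.f_oneway(*samples)
    print(f"ANOVA across {group_col}: F={f:.2f}, p={p:.4f}")

# usage sketch (assuming df carries the listed columns):
# correlations_by_group(df, "qualification_level")
# group_difference(df, "qualification_level")
```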
Confirming previous research (Boockmann et al., 2013), the actual number of trainings taken seems to have no effect on the desire to work past retirement age, as can be seen in Table 3 All in all, results confirm hypotheses H1-3, establishing a positive correlation between continuing training motivation and factors work ability and desire to work past retirement age, also giving renewed support to the relative unimportance of actual trainings taken for the desire to work past retirement age.Differentiated for selected respondent groups the level of qualification has a significant influence on continuing training motivation, giving support to H4.This effect was stronger than any differences with regard to gender, weekly working hours or contract duration.It is also apparent that group differences moderate the relationships posited in H1 and H2, thus supporting H5.As surmised, only full-time and permanent employees motivated to train also feel inclined to work longer years and feel higher work ability.Impact of qualification level seems limited to moderating the strength of the correlation of continuing training motivation and work ability, with the strongest effect for university graduates. Conclusion Focussing on white-collar employees aged 55 to 64 in Germany, the present study adds a motivational viewpoint to the literature of determinants of work ability and the desire to work past retirement age, also addressing issues of social inequalities and discrimination (with regard to e.g.gender and qualification level).First, our study shows continuing training motivation to be high, also across all respondent groups, with university-educated individuals being slightly more motivated, supporting Tippelt and von Hippel's (2005) findings.We show a weak significant correlation between continuing training motivation and self-assessed work ability, suggesting work ability as a possible outcome of training motivation similar to the findings of Krapp (2005) and Beier and Kanfer (2009).With regard to work ability the strongest correlation with continuing training motivation can be found for men, followed by individuals with university degrees.Methodically, one explanation for the relevance of qualification level might be that self-assessed work ability as measured in this study can be understood to mean both physical and mental ability to work.Less qualified workers might be working in more physically challenging tasks, so they possibility think more about their physical work ability when answering this question.Thus whether he or she likes to train and learn might have less effect on their work ability.It also lends support to findings on higher work ability for cognitive workers (Tuomi et al., 2001). Furthermore, we show that it is indeed rather motivation for continuing training than actual participation that positively influences the desire to work past retirement age. 
Here, the strongest effect is also for men, but, interestingly, not for the universityeducated group.This effect was stronger than any differences with regard to gender, level of qualification, working hours or contract duration for the three analysed constructs of work ability, continuing training motivation and desire to work past retirement age.Thus, our study shows that with higher qualification level the importance of continuing training increases.Hence, we can say that the stronger the culture of life-accompanying learning is set up for the purpose of managing the aging process and not age, the higher the ability as well as the desire to work past retirement age (see also Schulz et al., 2010). Even though results show the meaning of continuing training participation to be low for the desire to work past retirement age, without continuing training it can be assumed that the ability to prolong working life is low.In addition to this, positive training experiences can further increase training motivation.Hence, new methods and settings that accommodate the needs and expectations of older employees should be developed.Training contents for older training participants should be more applicationoriented (Lehr, 2000) and more focussed on eliciting positive affect to increase motivation (Kanfer & Ackerman, 2004) as older individuals tend to direct their motivation more on personally meaningful and socially rewarding behaviours (Mather & Carstensen, 2005).Thus, trainings that further social contact and interaction have a positive effect on motivation (Gegenfurtner et al., 2012).Additionally, the 'Weiterbeschäftigungssurvey' shows that for older employees a longer distance to the learning site leads to a lower continuing training motivation.Finally, it seems important that there is a continuing positive learning experience starting at a much earlier age, as learning histories and memories do influence training perceptions and behaviour (see Tippelt & von Hippel, 2005).Thus, organisations would be wise to strengthen employees' training motivation by boosting their feeling of self-efficacy and valence not just in trainings but also generally at work and over a longer period of time (see Torraco, 1999). While literature shows different socio-demographical groups to have different shares in continuing training participation we could show that generally, continuing training motivation is rather high (mean of 1.80), with hardly any differences between groups (only the difference for qualification level is statistically significant).This could imply that inequalities in participation are less a result of varying motivation among these groups, but of other barriers.As a first step, continuing training concepts should accommodate differences in interests and barriers of social milieus, as well as different learning backgrounds and expectations (see Tippelt & von Hippel, 2005).Negative stereotypes and discrimination need to be addressed, too, in order to create a supportive and appreciative organisational climate that fosters a learning culture.Consistent with the lifelong learning approach, it seems necessary to develop a life span approach to instilling and promoting learning and training motivation and avoid longer periods of non-training that may decrease learning abilities (Dworschak et al., 2006). 
Finally, we address the limitations of our study. First, the realised sample is not representative for the older population (although intended), so it would be unwise to apply results to older employees in general. Second, causalities remain unclear. It could be argued for example, that someone who wishes or needs to stay employed after reaching legal retirement age feels motivated to train because he or she feels the necessity of continuing training for keeping the job - rather than assuming that individuals who like to train are also e.g. more interested per se in working longer years. Third, our measures lack a focus on any particular type of continuing training, so we cannot safely assume that any continuous training motivation measured is actually aimed at on-the-job training. As reliability of the scales used in the survey is modest, our correlation results could be biased and the effect size may be higher. Furthermore, answers to questions about work ability, continuing training motivation and desire to work past retirement age are subject to social desirability and may not represent the true attitude of the respondent. For future research it would also be helpful to analyse longitudinal data to understand how these relationships develop in the long run.
Table 1. Main results differentiated for selected respondent groups. In a next step, the relationship between continuing training motivation and the other two traits, work ability and desire to work past retirement age, is analysed (Table 2).
Table 2. Correlation of continuing training motivation with work ability and with desire to work past retirement age for selected respondent groups (**… significant correlations at the p<.01 level, *… at the p<.05 level).
Table 3. Correlation of actual trainings taken (within past three years) with work ability and with desire to work past retirement age (*… significant correlations at the p<.05 level).
2018-12-02T20:47:45.479Z
2015-02-01T00:00:00.000
{ "year": 2015, "sha1": "ddd3f5e38169194de995db563557b8f0d9fa0dc6", "oa_license": "CCBY", "oa_url": "https://rela.ep.liu.se/article/download/3797/2894", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "088025eb22d8fa1ea37f0af96d1c694665f9d2c3", "s2fieldsofstudy": [ "Economics" ], "extfieldsofstudy": [ "Psychology" ] }
235784290
pes2o/s2orc
v3-fos-license
Effects of Vanadium on the Structural and Optical Properties of Borate Glasses Containing Er3+ and Silver Nanoparticles The erbium-vanadium co-doped borate glasses, embedded with silver nanoparticles (Ag NPs), were prepared to improve their optical properties for potential optical fiber and glass laser application. The borate glasses with composition (59.5–x) B2O3–20Na2O–20CaO–xV2O5–Er2O3–0.5AgCl (x = 0–2.5 mol%) were successfully prepared by conventional melt-quenching method. The structural properties of glass samples were investigated by XRD, TEM and by Fourier transform infrared (FTIR) spectroscopy while optical properties were carried out by UV–Vis spectroscopy by measuring optical absorption and the emission properties were investigated by photoluminescence spectroscopy. The XRD patterns confirmed the amorphous nature of the prepared glass samples whilst the FTIR confirmed the presence of VO4, VO5, BO3 and BO4 vibrations. UV–Vis–NIR absorption spectra reveal eight bands which were located at 450, 490, 519, 540, 660, 780, 980, and 1550 nm corresponding to transition of 4F5/2, 4F7/2, 2H11/2, 4S3/2, 4F9/2, 4I9/2, 4I11/2, and 4I13/2, respectively. The optical band gap (Eopt), Urbach energy and refractive index were observed to decrease, increase and increase, respectively, to the addition of vanadium. Under 800 nm excitation, three emission bands were observed at 516, 580 and 673 nm, which are represented by 2H11/2–4I15/2, 4S3/2–4I15/2 and 4F15/2–4I15/2, respectively. The excellent features of achieved results suggest that our findings may provide useful information toward the development of functional glasses. Introduction Boron-based oxides have unique properties, including high transparency, low melting point, good rare-earth ion solubility, low viscosity, high dielectric constant, low cost, large phonon energy (~1300-1500 cm −1 ) and easy preparation in bulk form. They also feature vibration resistance, low refractive index, low cation size and high bond strength [1][2][3][4][5][6]. These properties qualify boron-based glass as a suitable material for noble optical devices [1][2][3][4][5][6]. Boroxol is a form of pure boron B 2 O 3 . However, if B 2 O 3 is added with some modifier oxides, BO 3 (nonbridge oxide) units transform into BO 4 (bridge oxide), which comprises some weakly attached BO 3 triangles, BO 4 tetrahedrons, some BO 4 − units without the formation of nonbridge oxygen (NBO) [7,8], and a variety of superstructural units, such as tri-, penta-, tetra-, di-, pyro-and orthoborate. For example, when alkali or alkaline-earth metal oxide is added into glass as a modifier elastic, boron glass shows borate anomaly [5]. Such borate anomaly is explained by considering the transformation of three-to four-fold coordinated boron during the initial addition of a modifier oxide, but the high content of the modifier creates NBO [6]. Meanwhile, the incorporation of two dissimilar former glass produces a phenomenon called the mixed glass former effect (MGFE) [7]. When vanadium (V 2 O 5 ) is introduced into boron (B 2 O 3 ), borovanadate glass, which consists of a mixed network of the former, is formed. Vanadium is required in forming glass with the addition of other components through the conventional quenching method. However, the role of vanadium is dependent on its concentration [5]. At a high concentration, it can be considered as a former glass; at a low concentration, vanadium can be considered as a modifier [7,8]. 
The phase shifts due to vanadium have an influence on optical, structural, and electrical characteristics of borate glass [9,10]. The MGFE composition has attracted interest among researchers because of its intriguing structural and physical properties. Vanadium is also used to overcome any clustering caused by high concentrations of erbium. A high concentration of erbium is normally required to achieve a strong emission [11], but a high doping level of erbium may cause clustering, which leads to luminescence quenching and large nonradiative losses [12]. Several approaches can be adopted to overcome this issue, and they include producing glass ceramic via heat treatment, introducing metallic nanoparticles (NPs) [11] and co-doping with various rare-earth ions or transition metals [13,14]. Previous reports confirmed that the emission intensity of system erbium ion-doped glass co-doped with other rare-earth ions, such as Tm 3+ , Nd 3+ and Yb 3+ is stronger than that of single erbium ion-doped glass. Thus, emission could be achieved by co-doping rare-earth ions with transition and metallic NPs. The participation of vanadium ion in radiative transitions within the glass network has been studied in the emission spectra of 40Na 2 O-54SiO 2 -(5-x) ZrO 2 -Ho 2 O 3 -xV 2 O 5 [13] and 40Na 2 O-54SiO 2 -(5-x) ZrO 2 -Sm 2 O 3 -xV 2 O 5 glasses [15]. An additional band appears at 636 and 1095 nm because of the transition 2 B 2 → 2 B g and 2 B 2 → 2 E g when V 2 O 5 is added. The addition of V 2 O 5 in a host matrix has been suggested to improve luminescence efficiency and reduce phonon energies. Hence, the concentration of V 2 O 5 is believed to be the local environment of rare-earth ions in the oxide glass and is not strictly dependent on the composition of the host matrix. Thus, the concentration of vanadium may contribute to different crystal field strengths. Several studies have explored the emission properties of V 2 O 5 -doped rare-earth glass, but further research is needed to facilitate a rich understanding of the role of V 2 O 5 in glass modification. In the current work, the effects of vanadium on the structural and optical properties of (59.5-x) B 2 O 3 -20Na 2 O-20CaO-xV 2 O 5 -Er 2 O 3 -0.5AgCl (x = 0, 0.5, 1.0, 1.5, 2.0 and 2.5 mol%) glasses were studied. The structural properties were examined by means of X-ray diffraction (XRD), Fourier transform infrared (FTIR) and transmission electron microscopy (TEM). Meanwhile, an ultraviolet-visible (UV-Vis) spectrometer and photoluminescence (PL) spectrometer were used to study the absorption and luminesce spectra of the glass samples that are required in the calculation of the Judd-Ofelt intensity parameter. Furthermore, the radiative properties, including the effective bandwidth, radiative transition probability, radiative lifetime, and branching ratio, were measured and analyzed. Preparation of Glasses Glass samples with the composition of (59.5-x) B 2 O 3 -20Na 2 O-20CaO-xV 2 O 5 -Er 2 O 3 -0.5AgCl (x = 0, 0.5, 1.0, 1.5, 2.0 and 2.5 mol%) were prepared through the conventional meltquenching method. This composition enables the formation of transparent glass which is suitable for optical applications [7]. The appropriate amount of analytical-grade commercial powder boron oxide (B 2 O 3 ), sodium carbonate (Na 2 CO 3 ), calcium carbonate (CaCO 3 ), vanadium oxide (V 2 O 5 ), erbium oxide (Er 2 O 3 ) and silver chloride (AgCl) (purity ≥99%) were mixed and weighed homogeneously. 
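As a worked illustration of the batch arithmetic implied here, the sketch below converts the nominal mol% composition into masses to be weighed. The 20 g batch size, the 0.5 mol% Er2O3 value and the assumption that each mole of carbonate yields one mole of the corresponding oxide on melting are not taken from the paper; they are stated only to make the example concrete.

```python
# Molar masses (g/mol) of the oxides in the final glass and of the weighed precursors
OXIDE_M = {"B2O3": 69.62, "Na2O": 61.98, "CaO": 56.08,
           "V2O5": 181.88, "Er2O3": 382.52, "AgCl": 143.32}
PRECURSOR = {"B2O3": ("B2O3", 69.62), "Na2O": ("Na2CO3", 105.99),
             "CaO": ("CaCO3", 100.09), "V2O5": ("V2O5", 181.88),
             "Er2O3": ("Er2O3", 382.52), "AgCl": ("AgCl", 143.32)}

def batch_masses(x_v2o5: float, er2o3: float = 0.5, glass_mass_g: float = 20.0) -> dict:
    """Masses (g) of raw materials needed for a target mass of final glass."""
    molpct = {"B2O3": 59.5 - x_v2o5, "Na2O": 20.0, "CaO": 20.0,
              "V2O5": x_v2o5, "Er2O3": er2o3, "AgCl": 0.5}
    total = sum(molpct.values())
    frac = {k: v / total for k, v in molpct.items()}             # normalised mole fractions
    glass_molar_mass = sum(frac[k] * OXIDE_M[k] for k in frac)   # g per mole of glass
    n_batch = glass_mass_g / glass_molar_mass                    # moles of glass wanted
    # one mole of carbonate is assumed to yield one mole of the corresponding oxide on melting
    return {PRECURSOR[k][0]: round(n_batch * frac[k] * PRECURSOR[k][1], 4) for k in frac}

print(batch_masses(x_v2o5=1.0))
```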
At 1150 °C, the homogeneous mixture was melted in alumina crucibles for 2 h. Then, the samples were quenched onto a stainless steel plate and molded. Thereafter, the samples were annealed at 300 °C for 5 h in another furnace. After 5 h, the furnace automatically stopped the process and reduced the temperature gradually until room temperature was reached. The glass samples were then polished using sandpaper to obtain a parallel opposite surface with a thickness of approximately 5 mm for optical absorption and photoluminescence spectroscopy. The glass samples were powdered for XRD, TEM and infrared (IR) absorption characterization. Characterization of Glasses The XRD analysis of the glasses was conducted using X'Pert Pro Panalytical diffraction to confirm the amorphous property of the samples. The formation of a crystalline plane in the silver NPs was confirmed through TEM analysis. A small amount of powder samples was dispersed into acetone liquid by using an ultrasonic bath. The solution was then placed onto a copper grid to dry before it was ready for characterization. In determining the density of the glass samples, the Archimedes principle was applied [14]; here, the immersion medium was toluene (0.8669 g cm−3) [15,16] at room temperature. Meanwhile, the values of molar volume (Va) were calculated using the following equation: Va = MV/ρ, where MV is the molar mass of the samples and ρ is the measured density. A Perkin Elmer UV-Vis-NIR spectrophotometer in the range of 200-1000 nm was used to record absorption spectra of the glass samples. A Perkin Elmer model Spectrum One FTIR was used to investigate the IR absorption spectra of the glass samples; the functional group region was within the range of 400-1600 cm−1. In the process, the powdered glass samples were mixed with KBr at a fixed ratio of 1:80. The mixture was then hand pressed into a pellet. The visible up-conversion emission measurement was performed in the wavelength region of 200-900 nm at room temperature by using the Perkin LS-55 luminescence spectrometer, in which a pulsed xenon lamp operated as the source of excitation.
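For clarity, the density and molar-volume arithmetic described above can be summarised in a short sketch; the weighings below are invented for illustration, and only the toluene density (0.8669 g cm−3) comes from the text.

```python
def archimedes_density(w_air_g: float, w_in_toluene_g: float,
                       rho_toluene: float = 0.8669) -> float:
    """Density from Archimedes' principle with toluene as the immersion liquid."""
    return w_air_g / (w_air_g - w_in_toluene_g) * rho_toluene

def molar_volume(molar_mass_g_mol: float, density_g_cm3: float) -> float:
    """Va = MV / rho, in cm^3 mol^-1."""
    return molar_mass_g_mol / density_g_cm3

# Illustrative numbers only (not measured values from this work)
rho = archimedes_density(w_air_g=1.2500, w_in_toluene_g=0.8200)
print(f"density      = {rho:.3f} g cm^-3")
print(f"molar volume = {molar_volume(70.0, rho):.3f} cm^3 mol^-1")
```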
XRD, TEM and Physical Properties The XRD patterns of the (59.5-x) B2O3-20Na2O-20CaO-xV2O5-Er2O3-0.5AgCl glass samples confirmed the amorphous nature of the prepared glasses. Table 1 provides the values of density, molar volume, and refractive index for (59.5-x) B2O3-20Na2O-20CaO-xV2O5-Er2O3-0.5AgCl (x = 0, 0.5, 1.0, 1.5, 2.0 and 2.5 mol%) glass samples. Figure 3 shows the variations in density and molar volume with the concentration of vanadium for the glass samples. The density of the samples displayed a nonlinear increment whilst the molar volume of the samples exhibited a monotonic increment. These patterns are in agreement with previous reports [5]. The density values were within 2.494 and 2.521 g cm−3 whilst the molar volumes were between 27.898 and 28.709 cm3 mol−1 with the addition of V2O5 into the glass samples. These density values were smaller than those of (60-x) B2O3-20Na2O-20CaO-xV2O5 (2.537-2.550 g cm−3), but the molar volume was greater than that of (60-x) B2O3-20Na2O-20CaO-xV2O5. Changes in molar mass and molar volume affect glass density. Moreover, density and molar volume usually show contradicting behaviors. In the present work, the density and molar volume displayed similar behavior, that is, both values increased with the addition of vanadium. The same behavior was reported for other borate glass systems [5,17]. The mass of B2O3 (M = 69.63 g mol−1) was lower than that of V2O5 (M = 181.88 g mol−1). Thus, the increase in density was due to the replacement of a lighter molecular component (B2O3) with a heavier molecular component (V2O5). Thus, the NBO increased with increasing V2O5 content. The borate group consists of many B-O bonds, and vanadate groups contain various V-O bonds. The bonds in the borate group are shorter than the bonds in the vanadate group. According to [5], the bond lengths of BO3 and BO4 are 1.36 and 1.47 Å, respectively. By contrast, previous classical molecular dynamics (MD) simulation research reported that the bond length of V5+-O (1.81-1.92 Å) was slightly longer than that of V4+-O (1.74-1.85 Å) [6]. Thus, the replacement of a short B-O bond length with a long V-O bond length can be expected to increase the molar volume and open the network structure of glass samples. IR Spectra Three active IR regions were observed in B2O3-V2O5 [18,19]. The first group of bands at around 500-750 cm−1 was due to the bending of the B-O-B linkages in the borate network.
The second group of bands between 800 and 1200 cm−1 was due to the asymmetric vibration of BO4 units. The third group of bands at 1200-1450 cm−1 was due to the asymmetric stretching relaxation of the B-O band of the trigonal BO3 units. Other studies also reported the same results [17]. The IR absorption spectra for (59.5-x) B2O3-20Na2O-20CaO-xV2O5-Er2O3-0.5AgCl (x = 0, 0.5, 1.0, 1.5, 2.0 and 2.5 mol%) in the 400-1600 cm−1 region at room temperature were recorded, and the results are shown in Figure 4a. Figure 4b shows the deconvolution of the glass sample spectrum at x = 1.0 mol%. As shown in Figure 4, the vibrational modes of the borate network agreed with the results of previous research [5]. For the first region, the band between 1391 and 1407 cm−1 was attributed to the stretching vibration of the borate NBO bond, which is BO3. The band correlated with the stretching vibration of B-O bonds in the group of BO4 units from tetraborate, pentaborate and tetraborate was located at about 927-1205 cm−1. Mohamed et al. [5] reported that the region at 990-1024 cm−1 overlapped with the vibration from the VO5 trigonal bipyramid unit of V = O. They assumed that the band of vibration for the isolated group B-O-V bridging bonds or V = O was located at 1000 cm−1. The region in 511-536 cm−1 was ascribed to the in-plane bending of B-O, and the IR band at around 740-751 cm−1 was ascribed to the B-O-B bending vibration of BO4 and BO3 [5]. In the evaluation of the impact of vanadium on the borate structure, the relative area of the BO4/V = O bands was normalized by the area of BO4 at x = 0 mol%. An addition of 0.5 mol% vanadium increased the relative area of the BO3 functional group. However, further addition of vanadium x > 0.5 mol% resulted in a decrease in the relative area of the BO3 functional group. The addition of 0.5 mol% V2O5 decreased the normalized plot of BO4/V = O, which then increased as the vanadium concentration increased in the glass samples.
Figure 5 shows the absorption spectra for (59.5-x) B2O3-20Na2O-20CaO-xV2O5-Er2O3-0.5AgCl (x = 0, 0.5, 1.0, 1.5, 2.0 and 2.5 mol%). The absorption spectra contained six bands, at 490, 520, 540, 660, 800 and 980 nm. In line with other findings, all the peaks were ascribed to erbium absorption from the ground state 4I15/2 to the excited states 4F7/2, 2H11/2, 4S3/2, 4F9/2, 4I9/2 and 4I11/2 [7]. A comparison of the peaks showed that the 4I15/2 → 2H11/2 transition, at a wavelength of 520 nm, presented the highest peak. No new band emerged with the addition of vanadium to the samples; this was either because the vanadyl ion was not observable in the recorded spectra or because its band overlapped with the dominant erbium absorption. The surface plasmon resonance (SPR) band contributed by silver NPs was also not observed. Previous studies reported that the SPR band is located at around 400-500 nm [21,22]; the SPR frequency depends on the refractive index (n ≈ 2 for borate glass) and the dielectric function of silver [23].

Optical Properties
The optical properties of amorphous materials can be studied through the electronic band structure and the optical transitions. If an electron in the valence band has enough energy, it is excited across the band gap into the conduction band; the energy required to cross the band gap is closely related to the optical energy band gap (Eopt). The optical absorption edge is used to investigate the electronic transitions during absorption.
The absorption coefficient (α) can be calculated at various wavelengths using the Beer-Lambert law [3]. A Tauc plot (Figure 6) was drawn according to the Davis and Mott relation, αhv = A(hv − Eopt)^n [3], where h is the Planck constant, v is the photon frequency, A is a constant and n is a constant determining the type of transition. The n constant is equal to 2, 3, 1/2 or 1/3, which respectively denote indirect allowed, indirect forbidden, direct allowed and direct forbidden transitions [9]; the value of n for oxide glass is 2 [3]. In this study, the graph of (αhv)^2 against hv was plotted (Figure 6) and used to measure the optical band gap, which is given by the intersection of the extrapolated straight-line portion of the curve with the x-axis, where (αhv)^2 = 0 [24]. The changes in band gap energy can be explained by the increase and decrease of disorder in the material. Disorder can be quantified through Urbach's relation, α = α0 exp(hv/EU) [3], where α0 is a constant and EU is the Urbach energy. The graph of ln(α) versus hv is plotted in Figure 7; the reciprocal of the slope of the linear portion of the curve gives the Urbach energy [3]. Urbach energy depends on several factors, including temperature, average photon energy, induced disorder, static disorder, thermal vibration in the lattice and strong ionic bonding. Table 2 shows the values of the indirect energy band gap, Urbach energy and refractive index of all the samples. The energy band gap (Eopt) values were in the range of 3.143-1.752 eV, and Eopt decreased with an increase in vanadium concentration. As shown in Figure 8, the graphs of Eopt and n against V2O5 concentration showed contrasting behavior, in which n increased when the vanadium concentration increased; this behavior was also found in previous research [3]. The n values were in the range of 2.360-2.852. In the studied glass samples, the reduction of the band gap with increased V2O5 was due to the structural evolution. For x < 1.0 mol%, the band gap decreased because of the increasing NBO content in the borate triangular BO3 units at a low concentration of vanadium. In contrast to BOs, the creation of NBOs opened the glass structure and resulted in easier excitation of the electrons, because electrons are loosely bound in NBOs. The band gap being nearly constant at x = 1.5 mol% could be explained by the new role of V2O5 as a former oxide: vanadium acts as a network modifier at a low concentration and as a network-forming component at a high concentration [10]. The structural evolution that occurred with increasing vanadium concentration caused the contrasting behavior of n.
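Since the fitting equations are not reproduced in full in this extract, the sketch below shows how band-gap and Urbach-energy values such as those in Table 2 are typically extracted from an absorption spectrum: the absorption coefficient is obtained from the measured absorbance via the Beer-Lambert law, (αhv)^2 is fitted linearly near the absorption edge and extrapolated to zero to obtain Eopt, and the slope of ln(α) versus hv gives EU. The fitting windows, sample thickness and file name are hypothetical placeholders, not values taken from the paper.

```python
import numpy as np

H_EV = 4.135667696e-15          # Planck constant, eV*s
C = 2.99792458e8                # speed of light, m/s

def tauc_and_urbach(wavelength_nm, absorbance, thickness_cm,
                    tauc_window_eV, urbach_window_eV):
    """Estimate E_opt (Tauc plot of (alpha*h*v)^2) and E_U (Urbach) from absorbance."""
    alpha = 2.303 * absorbance / thickness_cm            # Beer-Lambert, cm^-1
    hv = H_EV * C / (wavelength_nm * 1e-9)               # photon energy, eV

    # Tauc plot: fit (alpha*hv)^2 = a*hv + b in the edge window, extrapolate to zero.
    m1 = (hv > tauc_window_eV[0]) & (hv < tauc_window_eV[1])
    a, b = np.polyfit(hv[m1], (alpha[m1] * hv[m1]) ** 2, 1)
    e_opt = -b / a

    # Urbach tail: ln(alpha) = hv / E_U + const  ->  E_U = 1 / slope.
    m2 = (hv > urbach_window_eV[0]) & (hv < urbach_window_eV[1])
    slope, _ = np.polyfit(hv[m2], np.log(alpha[m2]), 1)
    return e_opt, 1.0 / slope

# Hypothetical usage:
# wl, A = np.loadtxt("absorbance_x0.csv", delimiter=",", unpack=True)
# print(tauc_and_urbach(wl, A, thickness_cm=0.2,
#                       tauc_window_eV=(2.8, 3.4), urbach_window_eV=(2.4, 2.9)))
```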
Figure 8. Optical band gap (Eopt) and refractive index of (59.5-x) B2O3-20Na2O-20CaO-xV2O5-Er2O3-0.5AgCl (x = 0, 0.5, 1.0, 1.5, 2.0 and 2.5 mol%).

The EU values of the glass increased sharply from 0.298 eV to 0.482 eV (x = 0-0.5 mol%), followed by a sharp decrease when 1.0-1.5 mol% of vanadium was added to the glass samples; the EU values then gradually increased to 0.762 eV (x = 2.5 mol%) with a further increase in V2O5 content (Figure 9). The addition of x = 0.5 mol% vanadium increased the Urbach energy, and this effect indicated an increased tendency of weak bonds to become defects: for x = 0.5 mol%, the defect concentration in the glass network increased with the increment of NBOs. The Urbach energy then decreased for x = 1.0 and 1.5 mol%, suggesting that the degree of disorder present in the glass decreased. However, for x > 1.5 mol%, the increase in Urbach energy reduced the energy band gap, whereas a decrease in Urbach energy increased the energy band gap.

Judd-Ofelt Analysis
Judd-Ofelt theory provides information on the transition behavior within the 4f-4f electronic configuration and allows the calculation of oscillator strengths, intensity parameters (Ω2, Ω4, Ω6), transition probabilities and branching ratios [25,26]. Judd-Ofelt theory is the best-established method for investigating and analyzing the spectral properties of borate glass systems containing rare-earth (erbium) ions. The absorption spectral data of all samples containing different concentrations of vanadium were used to calculate the Judd-Ofelt parameters; a precise measurement of the integrated absorption cross-section over the wavelength range of each excitation transition is needed for the analysis. The area under each absorption band was used to determine the experimental oscillator strength.
The experimental oscillator strength can be calculated via the relation f_exp = (2.303mc^2 / πe^2N0) ∫ ε(v) dv, where m is the electron mass, c is the velocity of light in vacuum, e is the electron charge, N0 is Avogadro's number and ε(v) is the molar extinction coefficient. The molar extinction coefficient was obtained from the measured absorbance of the samples through the Beer-Lambert law, ε(v) = log10(I0/I) / (C_RE t), where C_RE is the concentration of the rare-earth (erbium) ion (mol/1000 cm3), t is the thickness of the sample in cm and log10(I0/I) is the absorbance at wave number v (cm−1). According to Judd-Ofelt theory, the theoretical oscillator strength for a transition from the ground state to an excited state of the erbium ion within the 4f configuration is f_calc = [8π^2mcv / 3h(2J+1)] × [(n^2+2)^2/9n] × ∑_{λ=2,4,6} Ω_λ |<aJ||U^(λ)||bJ'>|^2, where v is the wave number of the transition in cm−1, h is the Planck constant and J is the total angular momentum of the lower state. The factor (n^2+2)^2/9n represents the Lorentz local-field correction, and n is the refractive index of the sample. Ω_λ is the Judd-Ofelt intensity parameter, where λ takes the values 2, 4 and 6. The doubly reduced matrix elements of the unit tensor operator, ||U^(λ)|| with rank λ = 2, 4 and 6, were calculated using the intermediate coupling approximation for the transition from the lower to the upper state; the squared reduced matrix elements |<aJ||U^(λ)||bJ'>|^2 were taken following the work of Carnall et al. [27]. To evaluate the accuracy of the Judd-Ofelt parameters, the quality of fit was quantified through the root-mean-square (rms) deviation, δ_rms = [∑(f_exp − f_calc)^2 / ξ]^(1/2), where ξ denotes the number of spectral bands analyzed (here, 3). The rms values indicate the quality of fit between the experimental and calculated oscillator strengths, and hence the accuracy of the Judd-Ofelt parameters. Table 3 shows the experimental and calculated oscillator strengths and the rms deviations for all the glass samples. The oscillator strengths provide indirect information on the symmetry and bonding of the rare-earth ions within the matrix. The highest oscillator strength, attributed to the hypersensitive transition, corresponds to the 4I15/2 → 2H11/2 band. Such hypersensitive transitions are sensitive to changes in the local structure of the glass network; they obey the selection rules ∆S = 0, |∆J| ≤ 2 and |∆L| ≤ 2, and reflect the interaction strength of the erbium ions with the local network of the host glass. In the glass samples studied, the increase in vanadium content increased the oscillator strength of the hypersensitive transition.
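As a concrete companion to the relations above, the sketch below evaluates the experimental oscillator strength of one absorption band by numerically integrating the molar extinction coefficient over wave number, using the familiar numerical shorthand f ≈ 4.318 × 10^-9 ∫ε(v)dv (valid for ε in L mol−1 cm−1 and v in cm−1), and then computes the rms deviation between sets of experimental and calculated oscillator strengths. The band shape and numbers are placeholders, not data from Table 3.

```python
import numpy as np

def experimental_oscillator_strength(wavenumber_cm, epsilon):
    """f_exp ~= 4.318e-9 * integral of epsilon(nu) d(nu) over one band."""
    return 4.318e-9 * np.trapz(epsilon, wavenumber_cm)

def rms_deviation(f_exp, f_calc):
    """Quality of fit between experimental and calculated oscillator strengths."""
    f_exp, f_calc = np.asarray(f_exp), np.asarray(f_calc)
    return np.sqrt(np.sum((f_exp - f_calc) ** 2) / f_exp.size)

# Placeholder example: a roughly Gaussian extinction band near 520 nm (19200 cm^-1)
# and three made-up band strengths for the rms illustration.
nu = np.linspace(18800, 19600, 400)                  # cm^-1
eps = 2.0 * np.exp(-((nu - 19200) / 120.0) ** 2)     # L mol^-1 cm^-1 (illustrative)
print(experimental_oscillator_strength(nu, eps))
print(rms_deviation([5.1e-6, 1.3e-6, 0.9e-6], [4.9e-6, 1.4e-6, 1.0e-6]))
```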
These changes revealed strong covalency with the presence of low symmetry around erbium ions. These values were found to equate to a higher oscillator strength than those of phosphate glass [28], boroaluminosilicate glass [10] and tellurite glass [18,29]. In addition, the values of rms were in the range of 1-2 × 10. These considerably small rms values confirmed the accuracy of the data [30]. The rms values of all the glass samples conformed to those of a previous study [31]. Table 4 shows the values and trend of the Judd-Ofelt intensity parameters (Ω 2, 4, 6 ) along with their spectroscopic properties (χ) for all the glass samples. The data of Judd-Ofelt parameters from the literature were compared with those of the current glass samples [17]. Glass composition determined the values of the Judd-Ofelt parameters. The increase in vanadium ion concentration from 0 mol% to 1.0 mol% decreased the Ω 2 and Ω 6 values from 3.19 × 10 −20 to 2.43 × 10 −20 and from 8.45 × 10 −21 to 6.53 × 10 −21 , respectively. In addition, the trend of Ω λ was found to be Ω 2 > Ω 4 > Ω 6 for all the prepared glass samples. The samples with Ω 2 and Ω 4 had higher intensity parameters than the samples with Ω 6 and were regarded as good glass hosts because of the high luminescence intensity ratio and high covalent bond between the erbium ions and local environment ligands [31,32]. The values of Ω 2 and Ω 4 were smaller for the borate glass containing erbium ions co-doped with vanadium than for the borate glass containing erbium ions only [17]. The parameters Ω 2 and Ω 4 were highly sensitive to the rare-earth ion's local environment symmetry. The small values of Ω 2 and Ω 4 indicated the lower asymmetric nature of the local environment around the erbium ions in the glass system [21]. Ω 6 contradicted Ω 2 and Ω 4 , with Ω 6 not being dependent on the local structure [33]; generally, the rigidity of glass is correlated with these parameters [31]. The glass without vanadium concentration (x = 0) was more rigid than the other glass samples (x = 0.5 mol%-2.5 mol%). As the value of Ω 6 of the glass without vanadium was greater than those of the other samples with vanadium, the addition of vanadium ions from 0 mol% to 1.0 mol% led to the decrease of Ω 6 values from 8.45 × 10 −21 to 6.53 × 10 −21 . The result could be explained by the NBO which was created around the host matrix, where it caused high covalency and led to the production of high electron density for the ligand ions. The values of the Ω 4 and Ω 6 parameters were used to determine the spectroscopic quality factor (χ) [34]. χ defines the efficiency of laser transition. Therefore, it can be used to predict the stimulated emission of the laser. The values of χ for all the glass samples in this work were in the range of 1.70781-1.95143. These values were greater than those of erbium in tellurite glass systems [18]. The bigger the value of χ, the higher the efficiency of the laser transition because according to [35], the higher the value of χ, the more intense the laser transition. In this study, the glass with 1.0 mol% of vanadium was optically better than the other glass samples. The values of Ω λ were used to calculate the radiative properties, such as the spontaneous emission rate (A R ), branching ratio (β R ) and lifetime of radiative transition (τ rad ). The emission probabilities were called the Einstein coefficient for radiative transition A R (aJ, bJ'). The different transitions were calculated as. 
A_R(aJ, bJ') = A_ed(aJ, bJ') + A_md(aJ, bJ'), where A_ed is the electric dipole contribution and A_md is the magnetic dipole contribution; both were calculated using Equations (9) and (10), respectively. χ_ed and χ_md denote the local-field corrections for the electric dipole and magnetic dipole transitions, respectively, and were obtained from the relations χ_ed = n(n^2 + 2)^2/9 and χ_md = n^3. The line strengths of the magnetic dipole and electric dipole transitions are represented by S_md and S_ed, respectively, and were calculated from the relations S_md(aJ, bJ') = (e^2ħ^2 / 4m^2c^2) |<aJ||L + 2S||bJ'>|^2 (14) and S_ed(aJ, bJ') = e^2 ∑_{λ=2,4,6} Ω_λ |<aJ||U^(λ)||bJ'>|^2 (15). As shown in Table 5, the 4I15/2 → 2H11/2 and 4I15/2 → 4F7/2 transitions had high A_R values; in addition, 2H11/2 and 4F7/2 showed increased A_R values when the concentration of vanadium in the glass samples increased. This result indicates that the 2H11/2 and 4F7/2 transitions favor the green and blue emissions, which are suitable for lasers [17]. Table 5. Values of A_R (s−1), β_R (%) and τ (ms) of all prepared glass samples (x = 0, 0.5, 1.0, 1.5, 2.0 and 2.5 mol%). The radiative lifetime carries important information for lasers and optical amplifiers. The radiative lifetime (τ_rad) of the prepared glass is the reciprocal of the total transition probability of the emitting state, ∑_{bJ'} A_R(aJ, bJ'), that is, the sum of the transition probabilities of all the transitions from the upper state to the various lower states. The emission branching ratio is given as β_R(aJ, bJ') = A_R(aJ, bJ') / ∑_{bJ'} A_R(aJ, bJ'). The obtained values of τ and β_R are listed in Table 5. The probability of obtaining stimulated emission for a specific transition can be assessed from its branching ratio [21]. The branching ratio for the 4I15/2 → 2H11/2 and 4I15/2 → 4F7/2 transitions was 99%, and the radiative lifetime was in the range of 0.007-0.005. The values for the present glass were shorter than those of other glass systems; the short radiative lifetime of this transition is beneficial for controlling the strong emission of erbium ions within the prepared glass system and for suppressing the nonradiative processes. Figure 10 shows the PL emission spectra of the (59.5-x) B2O3-20Na2O-20CaO-xV2O5-Er2O3-0.5AgCl (x = 0, 0.5, 1.0, 1.5, 2.0 and 2.5 mol%) glass samples in the wavelength range from 400 to 800 nm; the excitation wavelength was 800 nm. The emission spectra of the Er3+ ions exhibited three dominant peaks, at 516, 580 and 673 nm, ascribed to the 2H11/2 → 4I15/2, 4S3/2 → 4I15/2 and 4F9/2 → 4I15/2 transitions. The bands at 516, 580 and 673 nm were affected by Stark splitting resulting from the low symmetry of the local environment around the erbium ions [36]. The intensity increased when the concentration of V2O5 increased from 0 mol% to 1.5 mol%; however, when the concentration of vanadium exceeded 1.5 mol%, the intensity decreased, a decrement due to concentration quenching [37,38]. At a high amount of vanadium, the excess vanadium ions produce structural defects that cause nonradiative recombination. Herein, the transitions from the excited states to the visible wavelengths are related to the emission peaks at 516, 580 and 673 nm [36].
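As a small worked illustration of the radiative bookkeeping above (not a reproduction of Table 5), the sketch below turns a set of spontaneous-emission rates A_R for the transitions out of one excited level into branching ratios and the radiative lifetime; the A_R values used are hypothetical.

```python
def radiative_summary(a_r_per_transition_s):
    """Branching ratios (fractions) and radiative lifetime (ms) from A_R values (s^-1)."""
    total = sum(a_r_per_transition_s.values())
    branching = {name: a / total for name, a in a_r_per_transition_s.items()}
    tau_rad_ms = 1.0e3 / total          # tau_rad = 1 / sum of A_R, converted to ms
    return branching, tau_rad_ms

# Hypothetical A_R values (s^-1) for transitions out of one emitting level.
example = {"to_4I15/2": 1800.0, "to_4I13/2": 40.0, "to_4I11/2": 10.0}
ratios, tau_ms = radiative_summary(example)
print(ratios)   # the dominant transition carries ~97% of the emission
print(tau_ms)   # radiative lifetime in milliseconds
```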
Conclusions
The effects of V2O5 substitution on the structural and optical properties of (59.5-x) B2O3-20Na2O-20CaO-xV2O5-Er2O3-0.5AgCl (x = 0, 0.5, 1.0, 1.5, 2.0 and 2.5 mol%) glasses were investigated. The glass samples were successfully prepared via the melt-quenching method. The structural characterization by XRD indicated their amorphous nature, without any crystalline phase, following the addition of vanadium, while FTIR revealed the structural units in the glass network. Specifically, FTIR confirmed the presence of B-O-B stretching in the borate network, bending vibration from the O-V-O of the VO4 tetrahedral group, vibration of the B-O-V bridging bond and stretching vibration of the B-O bond belonging to BO3. The addition of at least 1.0 mol% vanadium created additional NBOs in the glass structure. The Vis-NIR spectra exhibited six absorption bands, at 490, 520, 540, 660, 800 and 980 nm; all the peaks corresponded to erbium absorption from the ground state 4I15/2 to the excited states 4F7/2, 2H11/2, 4S3/2, 4F9/2, 4I9/2 and 4I11/2. The most intense peak, centered at 520 nm, corresponds to the hypersensitive transition. The SPR band and the vanadium band were not observed in the recorded spectra because of the low concentration of silver NPs and the dominance of the erbium bands. The increase in vanadium was found to reduce the band gap energy and to increase the refractive index.
Data Availability Statement: The data presented in this study are available on request from the corresponding author.
2021-07-11T05:30:52.767Z
2021-07-01T00:00:00.000
{ "year": 2021, "sha1": "72f3ff21ceea7ec6a11c5bf0eb38333baa765fa4", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1996-1944/14/13/3710/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "72f3ff21ceea7ec6a11c5bf0eb38333baa765fa4", "s2fieldsofstudy": [ "Materials Science", "Chemistry" ], "extfieldsofstudy": [ "Medicine" ] }
53773869
pes2o/s2orc
v3-fos-license
The Female Heart: Sex Differences in the Dynamics of ECG in Response to Stress
Sex differences in the study of the human physiological response to mental stress are often erroneously ignored. To this end, we set out to show that our understanding of the stress response is fundamentally altered once sex differences are taken into account. This is achieved by comparing the heart rate variability (HRV) signals acquired during mental maths tests from ten females and ten males of similar maths ability; all females were in the follicular phase of their menstrual cycle. For rigor, the HRV signals from this pilot study were analyzed using temporal, spectral and nonlinear signal processing techniques, which all revealed significant statistical differences between the sexes, with the stress-induced increases in the heart rates from the males being significantly larger than those from the females (p-value = 4.4 × 10−4). In addition, mental stress produced an overall increase in the power of the low frequency component of HRV in the males, but caused an overall decrease in the females. The stress-induced changes in the power of the high frequency component were even more profound; it greatly decreased in the males, but increased in the females. We also show that mental stress was followed by the expected decrease in sample entropy, a nonlinear measure of signal regularity, computed from the males' HRV signals, while overall, stress manifested in an increase in the sample entropy computed from the females' HRV signals. This finding is significant, since mental stress is commonly understood to be manifested in the decreased entropy of HRV signals. The significant difference (p-value = 2.1 × 10−9) between the changes in the entropies from the males and females highlights the pitfalls in ignoring sex in the formation of a physiological hypothesis. Furthermore, it has been argued that estrogen attenuates the effect of catecholamine stress hormones; the findings from this investigation suggest for the first time that the conventionally cited cardiac changes, attributed to the fight-or-flight stress response, are not universally applicable to females. Instead, this pilot study provides an alternative interpretation of cardiac responses to stress in females, which indicates a closer alignment to the evolutionary tend-and-befriend response.
INTRODUCTION The effects of stress on heart rate (HR) and heart rate variability (HRV) are considered to be well defined and long established. General physiological stress is understood to cause an increase in HR, consequently decreasing HRV (Houtveen et al., 2002), whilst physical stress has been widely reported to cause an increase in the power of the low frequency (LF) component of HRV signals, and a decrease in the power of the high frequency component (Montano et al., 1994). It is therefore commonly conjectured, though not without controversy, that the LF component of HRV (0.04-0.15 Hz) reflects the activity of the sympathetic nervous system (SNS), while the HF component (0.15-0.4 Hz) reflects the activity of the parasympathetic nervous system (PNS). The controversy surrounding this conjecture was highlighted in several studies where the effects of mental stress on the LF and HF components of HRV were found to be neither consistent, nor correlate with the trends seen in physical stress (Berntson et al., 1994). However, despite the uncertainty over the physiological interpretations of the LF and HF components, and whether or not they, respectively, represent the activities of the SNS and PNS (Berntson et al., 1994; Eckberg, 1997; Berntson and Cacioppo, 2004; Billman, 2013; von Rosenberg et al., 2017), it comes as a surprise that until recently, the theories regarding the relationships between stress, HR and HRV were developed without accounting for, or appreciating, sex differences. To this end, this work aims to quantify and demystify the effects of sex differences on the dynamics of ECG, in a proof-of-concept study based on ECG recordings from participants who have been subjected to a mental stressor. The commonly adopted theory of the stress response stems from a seminal book by Walter Cannon, in which it was hypothesized that, irrespective of sex, the evolutionary purpose of the human stress response is to prepare the body for fight or flight when faced with a threat (Cannon, 1915). It is generally accepted that reaching either the fight or flight stage requires the energization of the body through stress hormones, for example, by increasing HR, arousal and the respiratory rate. However, long-term exposure to these hormones is known to lead to pathologies, such as heart disease, insomnia and hyperventilation (NHS Derbyshire Health Psychology Service, 2012).
Chronic stress is also linked to depression; it is known that chronic stress causes structural degeneration in the pre-frontal cortex, which is a major risk factor for depression (Mah et al., 2016). It would therefore be expected that the correlations between many stress induced pathologies, such as heart disease and depression, would be high; instead, the statistics indicate no such correlation (Åhs et al., 2009). Whilst females have the highest incidence of depression, the highest incidence of cardiac pathologies is seen in males. For example, in 2014, 4.3% of UK females had experienced depressive episodes, compared to 3.2% of males (NHS Digital, 2016). In contrast, in the year 2013/14, the number of UK males admitted to hospital with heart related diseases was 30.3% higher than the corresponding number of females (British Heart Foundation, 2015). Such stark statistical differences have led to further investigation into the causes of this disparity. An insightful overview of sex differences in the response to stress by Verma et al. (2011) reports many hormonal, neuroanatomical and cognitive differences between the sexes; it concludes that males and females exhibit distinct psychological and biological differences in their responses to stress. Similarly, Ramaekers et al. (1998) assessed sex differences in cardiovascular dynamics in response to stress, by analysing 24-h-long HRV signals, recorded from 135 females and 141 males aged between 18 and 71 years. They reported that the absolute powers of the LF components of HRV from the males, regardless of age, were significantly larger than those from the females aged under 40, but were not significantly larger than those from the females aged over 40 (Ramaekers et al., 1998). This age-dependence has indicated that the possible sex difference in cardiovascular dynamics is due to the effects of the menstrual cycle, and in particular, estrogen (Ramaekers et al., 1998). It is now hypothesized that estrogen enhances parasympathetic control of the heart (Dart et al., 2002), which in turn means that premenopausal females will experience enhanced parasympathetic control compared to males and postmenopausal females. This gives a wide scope for the study of the effect of estrogen on cardiovascular dynamics, which promises to dramatically alter the understanding of the stress response, a subject of this work. At present, the effect of hormones on the cardiovascular response to stress is considered to be related to the activation of the sympathetic nervous system (SNS), which mediates the release of catecholamines and glucocorticoids (Lundeberg, 2005). Many researchers have posited that once a stressor diminishes, the parasympathetic nervous system (PNS) takes dominance over the SNS (Figueroa-Fankhanel, 2014); the PNS is known to restore vital functions to their rest state ). However, the theory that the SNS and PNS have an antagonistic or reciprocal relationship has also become controversial (Billman, 2013). An often cited review by Eckberg (1997) assesses SNS and PNS dynamics, and reports many parallel activations of the SNS and PNS, and that there is no definitive physiological evidence to suggest that the SNS and PNS must behave reciprocally. Eckberg (1997) describes the tendency to assign a reciprocal relationship to the SNS and PNS as philosophical, as opposed to physiological. Another study often cited to support nonreciprocal SNS and PNS dynamics is Berntson et al. 
(1994), who assessed the SNS and PNS responses in 10 participants subjected to mental stress, and reported slightly positive correlations between the activities of the SNS and PNS. They concluded that certain stressors may elicit autonomic responses which are specific to individuals, whilst others elicit common SNS and PNS responses. Yet, despite the popularity of the paper, it is often overlooked that all 10 of the subjects recruited by Berntson et al. (1994) were young females. In light of the work by Dart et al. (2002), a study cohort made up entirely of young females, without an account of their menstrual cycle phase, could explain why the results reported by Berntson et al. (1994) differed from similar studies. In summary, conflicting findings regarding the cardiovascular responses to mental stress point to a lack of clarity in the understanding of the stress response; this motivated us to ask whether this lack of clarity was simply due to conflating the male and female responses to stress? To provide a conclusive answer to this question, we set out to assess the cardiovascular dynamics of young males and young females during a mental stress task; this was achieved in a rigorous, quantitative, and reproducible way, by applying stateof-the-art signal processing techniques to HRV signals. We make no attempt to correlate the LF and HF components of HRV to the SNS and PNS, but to identify differences in the temporal, spectral and nonlinear characteristics of the recorded signals. The changes in heart rate will be used to characterize signals temporally, whilst spectral characterization is achieved through the computation of the powers of the LF and HF components of HRV, and nonlinear characterization is accomplished using the nonparametric sample entropy (SE) method. Our results show conclusively that males and females in the follicular phase of their menstrual cycle exhibit significantly different cardiovascular responses to mental stress. Subjects Our study cohort consisted of 10 males and 10 females, with respective mean ages of 28.6 (standard deviation: 5.6, range: 23-38) and 23.3 (standard deviation: 1.3, range: 21-26). The size of the dataset used is validated by a statistical study (Ristic-Djurovic et al., 2018), which reported that nine is the minimum recommended number of subjects to draw statistically significant conclusions from a biomedical study. The calendar method was used to confirm that all female subjects were in the follicular phase of their menstrual cycle; the follicular phase precedes ovulation, and is characterized by increasing estrogen levels (Dart et al., 2002). The experimental procedure was explained to the subjects, both verbally and in writing, and all subjects gave their full consent to take part in the study. Ethics approval was granted by the Joint Research Office at Imperial College London (reference IRREC_12_1_1). Experimental Procedure The electrocardiogram (ECG) was recorded from each subject as they sat at rest for 15 min, and during a 15-min mental maths test in pairs (subjects competed against sex-matched opponents with similar mathematical abilities and proficiency in the English language). There was a 1-min interval between the rest and test periods, and to induce further mental stress, the subjects were told that their performance in the maths test would be recorded. 
Data Acquisition The subjects' ECGs were recorded by a custom-made data logger, called the iAmp (Kanna et al., 2018), using adhesive surface electrodes placed in the Lead I ECG configuration, at a sampling frequency of 1,000 Hz. All signal processing was performed in the Matlab programming environment. The ECG signals from the subjects were segmented into epochs, corresponding to the rest and mental maths test, before HRV was derived. The R-waves in the ECGs were extracted and interpolated at 4 Hz to derive HRV, using the robust algorithm introduced in Chanwimalueang et al. (2015). Temporal, spectral and nonlinear signal processing techniques were applied to the epochs of HRV to extract the four cardiac metrics described below. Temporal Analyses The HR from the HRV signals was computed from a 1-min sliding window, with a one-second increment, where x_i denotes an HRV data point in the windowed signal and n designates the number of data points in the window, to yield HR = 60n / ∑_{i=1}^{n} x_i (1), with HR in beats per minute and the x_i expressed in seconds. Spectral Analyses The powers of the LF and HF bands in HRV were computed as the powers of the 0.04-0.15 Hz and 0.15-0.4 Hz bands, respectively, and were normalized by the power of the 0.04-0.5 Hz band, N_p, to give LF = LF_p / N_p (2) and HF = HF_p / N_p (3), where the respective powers of the LF and HF bands are denoted by LF_p and HF_p. The analysis was performed within a 5-min sliding window, with a one-second increment; this window duration is the minimum recommended length to fully capture spectral cardiac dynamics (Malik et al., 1996). This new N_p power band of interest was used in Equations (2) and (3) for the normalization, instead of the total power, for the following reasons: (i) it has long been known that the frequencies below 0.04 Hz have no clear physiological interpretation (Malik et al., 1996), and (ii) heart beats are produced at a maximal rate of 1 Hz, which means the effective sampling frequency of HRV is also 1 Hz; the Nyquist theorem therefore suggests all useful information in HRV is contained below 0.5 Hz (Kuusela, 2004). The mean normalized powers of the LF and HF bands were computed for both the rest and mental maths epochs, for each subject. The mean normalized powers of the LF and HF bands will be simply referred to as LF and HF, respectively.
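A minimal sketch of how Equations (1)-(3) can be evaluated for one analysis window is given below. It assumes the HRV series has already been derived as RR intervals (in seconds) resampled at 4 Hz, uses a Welch periodogram for the band powers, and is illustrative rather than a reproduction of the authors' Matlab implementation; the synthetic RR series at the bottom is a placeholder.

```python
import numpy as np
from scipy.signal import welch

FS = 4.0  # Hz, the interpolation rate of the HRV series quoted above

def window_metrics(rr_seconds):
    """HR (bpm) and normalized LF/HF powers for one window of the HRV series."""
    hr = 60.0 / np.mean(rr_seconds)                               # Equation (1)
    freqs, psd = welch(rr_seconds - np.mean(rr_seconds), fs=FS,
                       nperseg=min(256, len(rr_seconds)))

    def band(lo, hi):
        mask = (freqs >= lo) & (freqs < hi)
        return np.trapz(psd[mask], freqs[mask])

    n_p = band(0.04, 0.5)
    return hr, band(0.04, 0.15) / n_p, band(0.15, 0.4) / n_p      # Equations (2), (3)

# Hypothetical 5-min window of RR intervals around 0.8 s with a 0.1 Hz oscillation.
rng = np.random.default_rng(0)
t = np.arange(int(5 * 60 * FS)) / FS
rr = 0.8 + 0.05 * np.sin(2 * np.pi * 0.1 * t) + 0.01 * rng.standard_normal(t.size)
print(window_metrics(rr))
```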
Nonlinear Analyses Numerous studies have investigated the effect of stress on the structural complexity within HRV signals. Central to the analysis of signal complexity is the complexity loss theory introduced by Goldberger et al. (2002), which posits that any perturbations within a physiological system, such as those caused by stress, constrain the system; Goldberger et al. (2002) hypothesize that the signals recorded from a constrained physiological system will exhibit reduced structural complexity. Signal complexity is interpreted through the regularity of a signal and is measured using entropy algorithms; equating structural complexity to signal regularity justifies the use of entropy to analyse signal complexity, as entropy algorithms are widely used measures of regularity. However, a truly complex system is neither completely regular nor irregular (Tononi et al., 1998), which possibly renders entropy an inadequate measure of complexity. Nevertheless, many studies which use entropy to assess the effects of stress on signal complexity have concluded that stress reduces the complexity within cardiac signals, supporting the complexity loss theory. Vuksanovic and Gal (2007), Williamon et al. (2013), and Chanwimalueang et al. (2016) employed the sample entropy method to analyse the effect of mental stress on the structural complexity in HRV signals, and found that mental stress led to a decrease in entropy. Bornas et al. (2006) employed sample entropy to analyse ECG signals from flight phobics, and found that sitting in a flight simulator resulted in a reduction in the entropy of their ECG. Given the choice of the sample entropy method over other entropy measures in these relevant studies, we here apply sample entropy to assess sex differences in the effects of stress on the structural dynamics of HRV signals. The sample entropy algorithm was introduced by Richman and Moorman (2000), and is the negative natural logarithm of the likelihood that two similar segments of data will remain similar, within a given tolerance, if the lengths of the segments are increased by one data point; see Step 1 to Step 7 in Algorithm 1 (Costa et al., 2002, 2005; Song et al., 2012). The sample entropy analyses in this study were undertaken in a 5-min sliding window, with a one-second increment; mean SE values for every subject for the two experiment epochs were computed. Statistical Analyses The percentage changes in the above described cardiac metrics, from the rest to maths epochs, were used to compare the male and female cardiovascular reactions to mental stress. The use of percentage changes enables inter-subject statistical comparisons, whilst the choice of the statistical test employed was dependent on the distribution of percentage changes. For example, Student's t-test was used to compare the results which followed a normal distribution, and the Wilcoxon rank-sum was used to compare the results which did not follow a normal distribution. A significance level of 0.01 was assumed in all tests. RESULTS Figure 1A shows that mental stress induced a significantly greater increase in HR in the males, compared to the females (Wilcoxon rank-sum: p-value = 4.4×10−4). The median increase in HR in the males was 13%, with a range of +11% to +25%; in contrast, the median percentage change in the females was an increase of only 5%. The changes in HR in the females were also less consistent, with a range of −7 to +13%. The median changes in the cardiac metrics and the p-values indicating the significance of the differences between the male and female responses are shown in Table 1. Figures 1B,C illustrate the sex differences in the cardiac responses to mental stress. The results from the spectral analyses show a non-reciprocal relationship between LF and HF; the median changes in LF and HF in the males were a respective 5% increase and a 34% decrease, whilst the corresponding changes in the females were a 4% decrease in LF and a 16% increase in HF. The percentage changes in LF from the males were significantly different to those from the females (t-test: p-value = 4.6 × 10−4), and were more varied, with a range of −2 to +12%; those from the females ranged from −7 to +3%. In addition, the percentage changes in HF were far more distinguishing (Wilcoxon rank-sum: p-value = 1.8 × 10−4). The changes in HF in the males were more concentrated, with a range of −40 to −30%, whereas the changes in the females ranged from +7 to +34%. Figure 1D shows the findings from the nonlinear analysis, which revealed the most significant difference between the male and female cardiac responses to mental stress (t-test: p-value = 2.1 × 10−9). The median percentage change in SE in the males was a 17% decrease, with a range of −20 to −13%.
The median percentage change in the females was a 6% increase in SE, with a range of +4 to +9%. DISCUSSION The changes in HR shown in Figure 1A provide the first conclusive evidence that the expected increase in HR in response to stress is not universal; while every male experienced an increase in HR of at least 11%, only half of the females experienced increases of at least 5%. The results from this study therefore confirm that the effect of stress on cardiac dynamics can differ substantially between males and females; this calls for a re-evaluation of our understanding of how stressful events affect cardiac dynamics in females in the follicular phase of their menstrual cycle. A similar sex difference has previously been reported in Tousignant-Laflamme et al. (2005) in relation to the effects of pain (a form of physiological stress) on HR. It was found that whilst the correlation between pain intensity and HR was positive in males, no such correlation existed in the females (Tousignant-Laflamme et al., 2005). However, Tousignant-Laflamme et al. (2005) did not ascertain the menstrual cycle phase of their subjects, and hence, were not able to make inferences regarding the effect of sex hormones on the cardiac responses to pain. In addition, a meta-analysis into sex differences in HRV was conducted by Koenig and Thayer (2016), and it was concluded that HRV in females contains more power in the high frequency component (the effect of stress on HRV was not included in the investigation). The effect of estrogen on the action of catecholamines has been widely investigated in mammals. In a review of the effect of estrogen on the stress response, Ueyama et al. (2008) reported that HR increases which were induced by stress in ovariectomised rats were greater than those experienced by ovariectomised rats who were supplemented with estrogen. Ueyama et al. (2008) hypothesized that their results were due to estrogen reducing the sympathoadrenal outflow of stress hormones from the central nervous system. Ueyama et al. (2008) also reported that estrogen reduced the reactivity of the heart to catecholamines, protecting the heart from the effects of stress. If applied to humans, these findings from rats would explain why females in this present study experienced smaller increases in HR when stressed. Algorithm 1: Sample Entropy. 1. A normalized signal, x, of length N is split using an embedding dimension, m, to create (N − m + 1) segments; each segment, X_m(i), is of length m. In this study, m was defined as m = 2. 2. A tolerance level of r = 0.15 × std, where std is the standard deviation of the segment, is defined. 3. The maximum difference, d_max, between the elements of two segments, X_m(i) and X_m(j), is computed as d_max = max_k |X_m(i)_k − X_m(j)_k|, k = 1, ..., m. 4. For each X_m(i), the event d_max < r is defined as a match, and the count of such matches is denoted by A_i. The probability of matches, A_i^m(r), for X_m(i) is calculated as A_i^m(r) = A_i / (N − m − 1). Note: the denominator, N − m − 1, ensures that when m is increased to m + 1, the segment X_{m+1}(i) is accounted for. 5. The sum of the probabilities of matches over all segments, here denoted A^m(r), is defined as A^m(r) = ∑_i A_i^m(r). 6. The embedding dimension m is increased to (m + 1), and Step 1 to Step 5 are repeated; the corresponding sum of the probabilities of matches for m + 1 is denoted A^{m+1}(r), and the sample entropy, SE(m, r), is computed as SE(m, r) = −ln(A^{m+1}(r) / A^m(r)). 7. The SE for x is computed such that one SE value is obtained for each windowed signal.
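For readers who prefer running code, a compact sample-entropy implementation following the steps of Algorithm 1 (m = 2, tolerance r = 0.15 times the standard deviation of the data, Chebyshev distance between template vectors) is sketched below. It is a generic implementation in the spirit of Richman and Moorman (2000), not the authors' Matlab code, and the test signals at the bottom are placeholders.

```python
import numpy as np

def sample_entropy(x, m=2, r_factor=0.15):
    """Sample entropy of a 1-D signal (Richman and Moorman, 2000)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    r = r_factor * np.std(x)

    def count_matches(dim):
        # Use the same number of templates (n - m) for lengths m and m + 1,
        # and count unordered pairs whose Chebyshev distance is below r.
        templates = np.array([x[i:i + dim] for i in range(n - m)])
        count = 0
        for i in range(len(templates) - 1):
            dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += np.sum(dist < r)
        return count

    b = count_matches(m)        # matches at length m
    a = count_matches(m + 1)    # matches at length m + 1
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

# Sanity check: a smooth oscillation is far more regular than white noise.
rng = np.random.default_rng(1)
t = np.arange(1200) / 4.0                       # 5 minutes of a 4 Hz series
print(sample_entropy(np.sin(2 * np.pi * 0.1 * t)))   # regular -> low SE
print(sample_entropy(rng.standard_normal(t.size)))   # irregular -> high SE
```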
Furthermore, the results from the spectral analyses in this study also reveal sex differences. Not only did the males and females exhibit contrasting changes in LF and HF, but the changes in HF were considerably larger than the corresponding changes in LF (see Table 1). The overall stress-induced decrease in LF and the increase in HF in the females is a finding that contradicts the conventional understanding of the relationships between stress and the low frequency and high frequency components of HRV. As already mentioned, studies such as Berntson et al. (1994) have previously indicated a lack of consistency in the LF and HF responses to stress after comparing their findings from all-female study cohorts to findings from studies with male cohorts. It is notable that these conclusions were drawn at a time when there was little awareness of sex differences in cardiac dynamics. Therefore, irrespective of the physiological interpretations of LF and HF, the opposing LF and HF trends in males and females, discovered in our pilot study, suggest that the inconsistencies reported by Berntson et al. (1994) were probably due to sex differences, and not a redundancy in LF and HF as stress metrics. It can also not be ignored that the controversy over the physiological interpretation of LF and HF remains largely unresolved. In summary, LF has been speculated to: (i) represent the modulation of both the SNS and PNS (Malik et al., 1996), (ii) be influenced by the frequencies of slow breathing (Brown et al., 1993), and (iii) be influenced by the frequency of muscle contractions in blood vessels (Kenwright et al., 2009). Similarly, HF has also been suggested to be influenced by typical respiratory frequencies (Kenwright et al., 2009). In conclusion, the physiological interpretation of LF and HF cannot be verified without conducting an extensive endocrinological study into the relationship between mental stress, LF, and HF, in which respiration is controlled (Stacey et al., 2018). Also, the spectral results reported here support the LF and HF normalization method employed in this study. The more common normalization methods are shown in Equations (8) and (9), namely LF = LF_p / (LF_p + HF_p) (8) and HF = HF_p / (LF_p + HF_p) (9). Observe that both of these normalizations would contain the same information, as in this way LF_p = 1 − HF_p (Burr, 2007). In contrast, Figures 1B,C establish that the normalized LF and HF metrics proposed in this study contain different information, whereby relatively small changes in LF can be seen alongside large changes in HF. It is evident that the analysis of HRV via LF and HF provides an additional degree of freedom, in comparison to the single degree of freedom offered by HR analysis; without the two degrees of freedom, the sex-specific trends in LF and HF would not have been seen. The results from the nonlinear analyses in this study also shed new light on the sex differences in the dynamics of ECG. Bornas et al. (2006), Vuksanovic and Gal (2007), Williamon et al. (2013), and Chanwimalueang et al. (2016) have all reported that mental stress causes decreases in the entropy of cardiac signals; however, our results in Figure 1D demonstrate that stress caused increases in the entropies computed from the female subjects. These results contradict the complexity loss theory from Goldberger et al. (2002), possibly supporting their view that entropies are not true measures of physiological complexity.
Nevertheless, the decreased regularity of the HRV from the female subjects confirm that they experienced a stress response which differed from that of the males. A female-specific stress response has been suggested by Taylor et al. (2000), who hypothesized that whilst males experience the conventional fight-or-flight response to stressors, females exhibit a tend-and-befriend response in which they employ social coping methods to combat stress. The tend-and-befriend response is driven by the action of estrogen and oxytocin (Taylor et al., 2000). Given the effects of estrogen on cardiac dynamics, the results from this study reveal, for the first time, cardiac trends which are likely to be specific to the tend-and-befriend response. It can be inferred that the fight-or-flight response is characterized by a large increase in HR, an increase in LF, a decrease in HF, and a decrease in HRV entropy; on the other hand, the tend-andbefriend response is characterized by a smaller increase in HR, a decrease in LF, an increase in HF, and an increase in HRV entropy, as illustrated in Figure 2. Future studies must also incorporate endocrinological responses from the subjects, as the use of the calendar method to determine the menstrual phases of the female subjects, while appropriate for this pilot study, does not enable the validation of estrogen levels. This, in addition to the expansion of the female cohort to include females in the luteal phase, would enable a future comprehensive investigation into the female stress response. CONCLUSION We have investigated long-overlooked sex differences in the cardiac response to stress through temporal, spectral and nonlinear signatures of mental stress in heart rate variability (HRV) signals, recorded from 10 females in the follicular phase of the menstrual cycle, and 10 males. The cardiac responses were acquired during 15 min of rest and 15 min of a mental maths test, and the cardiac metrics include heart rate, normalized powers of the low frequency component, the normalized powers of the high frequency component, and the sample entropies of the HRV signals, accompanied by statistical comparisons. For rigor, we have also employed a new normalization procedure for the spectral components of HRV. Every metric analyzed has revealed statistically significant differences (p-value<<0.01) between the cardiac responses in the male and female subjects. In the males only, mental stress has been found to induce large increases in heart rate, increases in the power of the low frequency component of HRV, decreases in the power of the high frequency component of HRV, and decreases in the entropy of HRV. These trends have not been found in the females, suggesting that estrogen modulates the cardiac response to stress in females. Not only do the results presented here radically challenge the practice of producing scientific hypotheses which have not accounted for sex, but the stressinduced increases in the sample entropy of heart rate variability have never before been reported, thus challenging the common assumption that sample entropy is a reliable measure of signal complexity. 
The results from this pilot study have established that the stress-induced cardiac trends which are commonly reported (increases in heart rate and the low frequency component of HRV) are the cardiac manifestations of the fight-or-flight response in males, whereas small increases in heart rate and large increases in the high frequency component of HRV may represent the female tend-and-befriend response. Following this proof-of-concept, further studies must employ females in the luteal phase of their menstrual cycle, and the collection of endocrinological parameters. AUTHOR CONTRIBUTIONS The physiological data were recorded by TA and JX, the data analysis was completed by TA, under the supervision of DM, and the paper was written by TA and DM.
2018-11-28T22:48:19.732Z
2018-07-13T00:00:00.000
{ "year": 2018, "sha1": "fdfc5d61328a634e9c3301ee5e9952be1933f765", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fphys.2018.01616/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "fdfc5d61328a634e9c3301ee5e9952be1933f765", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
250429789
pes2o/s2orc
v3-fos-license
Sequence Analysis and Structural Predictions of Lipid Transfer Bridges in the Repeating Beta Groove (RBG) Superfamily Reveal Past and Present Domain Variations Affecting Form, Function and Interactions of VPS13, ATG2, SHIP164, Hobbit and Tweek Lipid transfer between organelles requires proteins that shield the hydrophobic portions of lipids as they cross the cytoplasm. In the last decade a new structural form of lipid transfer protein (LTP) has been found: long hydrophobic grooves made of beta-sheet that bridge between organelles at membrane contact sites. Eukaryotes have five families of bridge-like LTPs: VPS13, ATG2, SHIP164, Hobbit and Tweek. These are unified into a single superfamily through their bridges being composed of just one domain, called the repeating beta groove (RBG) domain, which builds into rod shaped multimers with a hydrophobic-lined groove and hydrophilic exterior. Here, sequences and predicted structures of the RBG superfamily were analyzed in depth. Phylogenetics showed that the last eukaryotic common ancestor contained all five RBG proteins, with duplicated VPS13s. The current set of long RBG protein appears to have arisen in even earlier ancestors from shorter forms with 4 RBG domains. The extreme ends of most RBG proteins have amphipathic helices that might be an adaptation for direct or indirect bilayer interaction, although this has yet to be tested. The one exception to this is the C-terminus of SHIP164, which instead has a coiled-coil. Finally, the exterior surfaces of the RBG bridges are shown to have conserved residues along most of their length, indicating sites for partner interactions almost all of which are unknown. These findings can inform future cell biological and biochemical experiments. Introduction In the last two decades there has been a transformation in our understanding of how membrane-bound organelles of eukaryotic cells interact with each other. Text books tend to emphasize the linear pathways of secretion and endocytosis, considering organelles apart both from each other and from those that do not participate in vesicular traffic (mitochondria, lipid droplets, peroxisomes, plastids). This picture has increasingly been falsified by finding individual proteins that bind to two organelles at the same time, bridging the cytoplasmic gaps between them. A major activity that is found where two organelles interact is the transfer of lipids (Prinz et al., 2020), which can be independent of vesicular traffic (Pagano, 1990;Baumann et al., 2005). Lipid transport between membranes involves lipid transfer proteins (LTPs). The first discovered LTPs all have globular domains with an internal pocket specialized to shield one lipid (or possibly two lipids) at a time (Chiapparino et al., 2016). Such domains can be anchored at a membrane contact site, and then shuttle back-and-forth between donor and acceptor membranes to transfer or exchange selected cargoes (Egea, 2021). Subsequently, LTPs with an elongated rod-like structure ∼20 nm long were found in bacteria, with a "U"-shaped cross-section the internal surface of which is entirely hydrophobic, while the external surface is hydrophilic (Takeda et al., 2003;Suits et al., 2008). This allows lipids to slide between compartments along relatively static bridges (Sherman et al., 2018). 
Cytoplasmic bridge-like LTPs with a similar hydrophobic groove were then discovered in eukaryotes: VPS13 and ATG2 (3000-4000 aa and 1500-2000 aa respectively) are distantly related proteins that form rods approximately 20 and 15 nm long ( Figure 1A) (Kumar et al., 2018;Valverde et al., 2019;Ugur et al., 2020;Dziurdzik and Conibear, 2021;Leonzino et al., 2021). VPS13 and ATG2 transfer phospholipids efficiently in vitro (von Bülow and Hummer, 2020;Zhang et al., 2022), and they are required for rapid growth of the yeast prospore membrane and autophagosomes (respectively), both of which have few embedded proteins, indicating delivery of lipid in bulk . A pivotal observation is that VPS13 function is inhibited by converting the lining of one segment of the rod from hydrophobic to charged residues (Li et al., 2020) (Figure 1A). This indicates that lipids must pass every point of the tube, strongly supporting the bridge model in which lipids flow along VPS13 and by implication any related LTP bridge. Three further eukaryotic bridge-like LTPs have since been identified on the basis of distant sequence homology to VPS13/ATG2 using HHpred (Soding, 2005;Castro et al., 2022), and supported by structural predictions by AlphaFold (Jumper et al., 2021). They are: (1) SHIP164 (name in humans, also called UHRF1BP1L, with a close human homolog UHRF1BP, and a plant homolog: amino-terminal region of chorein); (2) Tweek (name from Drosophila, in human: BLTP1 (newly named by the Human Genome Gene Nomenclature Committee, see https://www.genenames.org, to replace KIAA1109 or FSA; in yeast: Csf1); and (3) Hobbit (name from Drosophila; in human: BLTP2 (new name as above) to replace KIAA0100, in yeast: two paralogs Fmp27 and Ypr117w newly renamed Hob1/2, in plants: SABRE,KIP and APT1). Analysis of the predicted structures of these proteins identified a repeating unit consisting of a 5-stranded β-sheet meander plus a sixth element, which usually starts with a helix and then continues with a loop that crosses back over the meander ( Figure 1B) (Neuman et al., 2022b). The rods are superhelical, which makes it hard to see the details of their construction (Guillén-Samander et al., 2022;Toulmay et al., 2022). There are some portions of predicted structure where the superhelical twist is low, and here the core building block is easily seen ( Figure 1C). The individual unit of ≥150 residues has been named the Repeating Beta-Groove (RBG) domain (Neuman et al., 2022b). The concave (inside) surface of the β-sheet is hydrophobic and the convex (outside) surface is hydrophilic ( Figure 1D). Because each of the 6 elements of the βββββ-loop domain cross over the groove, the domain starts and ends on the same side of the groove, leading one domain to follow directly from another, repeating the same topology. Examination of the hydrophobic grooves of all five eukaryotic bridge-like LTPs shows that they are assembled entirely from multiple RBGs domains, with the rod-like proteins in each family being created from multimers of characteristic numbers of domains that determine length ( Figure 1A) (Neuman et al., 2022b). Other bridge-like LTPs include those in the intermembrane spaces of mitochondria and chloroplasts. They more closely resemble their bacterial forebears (Neuman et al., 2022b) and are not considered here. Here, the domain architecture of the RBG superfamily is studied starting with whole proteins, moving to domains, and ending up with individual residues. 
Firstly, phylogeny, initially characterized for VPS13 decades ago (Velayos-Baeza et al., 2004), is updated indicating that the last eukaryotic common ancestor (LECA) had six RBG proteins: one from each family, and two VPS13 ancestors related to VPS13A/C/D and VPS13B. Secondly, the majority of central RBG domains arose by internal duplication from two complete domains in an ancestral protein that had four domains. Thirdly, previously unknown accessory domains near the C-terminus of VPS13B and plant VPS13 homologs are described, which likely provide interaction sites for partners, similar to those already identified that mediate either membrane targeting (Bean et al., 2018;Kumar et al., 2018;Park and Neiman, 2020;Guillen-Samander et al., 2021) or a specific function such as lipid scramblase (Ghanbarpour et al., 2021;Orii et al., 2021;Adlakha et al., 2022). Fourth, the extreme ends of RBG multimers are shown to all have amphipathic helices that cross the groove. However the C-terminus of SHIP164 is an exception as it has a coiled-coil. A hypothesis is developed that incorporates this and other bioinformatic evidence in a speculative model of SHIP164 function. Finally, conserved residues are shown to distribute along the entire length of the external, hydrophilic surfaces of RBG proteins, indicating a greater number of sites for partner interactions directly with the bridge than previously envisaged. Results and Discussion A. Following RBG Domains Across Evolution (I) VPS13 forms a single unbroken groove: Prior to the AlphaFold predictions, it was known from low resolution cryo-EM studies that the LTP groove extended all the way along ATG2 (Valverde et al., 2019), and along a large portion of VPS13 (Li et al., 2020), but it was not clear whether the groove extended the full length of VPS13. AlphaFold predictions for ATG2, Hobbit, and SHIP164 show unbroken grooves running their full length (Jumper et al., 2021), but predictions for Tweek and for full-length VPS13 are missing, perhaps because the sequences are too long (Neuman et al., 2022b). Tweek has been constructed as one unbroken groove by overlapping partial models, made either by ColabFold, an online AlphaFold tool (Mirdita et al., 2021;Castro et al., 2022) or by trRosetta (Yang et al., 2020;Toulmay et al., 2022). In contrast, the first published full-length model of Vps13p made by overlapping fragments showed RBG1-10 separate from RBG11/12 (Toulmay et al., 2022). Linkage of the two segments of VPS13 was examined by a ColabFold prediction of the region encompassing both sides of the VAB repeats, with most of this central region being omitted. The prediction tool placed the two VPS13 segments in direct continuity ( Figure 2B), with the final strand of RBG10 running parallel to the first strand of RBG11. The form of the RBG10/RBG11 interface precisely resembles that of any other RBG-RBG interface, for example RBG11/RBG12 ( Figure 2C). Thus, ColabFold predicts that the rod-like molecule VPS13 forms a single continuous β-groove from end to end, resembling other members of the superfamily ( Figure 2D). This matches cryo-EM observations of VPS13 as a rod (De et al., 2017) with a groove along at least part of its length (Li et al., 2020), and has been reported elsewhere (Adlakha et al., 2022;Guillén-Samander et al., 2022). 
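As a rough illustration of the splicing strategy described above, the sketch below prepares a single FASTA entry that concatenates the two segments flanking the VAB region of yeast Vps13 (residue ranges as given in the Figure 2B legend below) and submits it to colabfold_batch. The local sequence file, the output names and the lack of extra options are assumptions; a real run needs a local ColabFold installation and its options differ between versions.

```python
import subprocess
from pathlib import Path

# Residue ranges taken from the Figure 2B legend: RBG10 + VAB repeats 1-2 (1643-2111)
# joined directly to RBG11-12 (2543-2840), omitting most of the central VAB region.
SEGMENTS = [(1643, 2111), (2543, 2840)]

def read_single_fasta(path):
    """Return the sequence of the first (only) record in a FASTA file."""
    seq_lines = []
    for line in Path(path).read_text().splitlines():
        if not line.startswith(">"):
            seq_lines.append(line.strip())
    return "".join(seq_lines)

def write_spliced_construct(full_seq, segments, out_fasta):
    """Concatenate the chosen residue ranges (1-based, inclusive) into one FASTA entry."""
    spliced = "".join(full_seq[start - 1:end] for start, end in segments)
    Path(out_fasta).write_text(">Vps13_RBG10_VAB12_RBG11_12\n" + spliced + "\n")

if __name__ == "__main__":
    # 'vps13_yeast.fasta' is a placeholder for a local copy of the yeast Vps13 sequence.
    full = read_single_fasta("vps13_yeast.fasta")
    write_spliced_construct(full, SEGMENTS, "vps13_splice.fasta")
    # Basic invocation: colabfold_batch <input.fasta> <output_dir>; optional flags vary by version.
    subprocess.run(["colabfold_batch", "vps13_splice.fasta", "vps13_splice_out"], check=True)
```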
This finding indicates that the VAB repeats, six all-beta domains of a unique type that build into a curved structure shaped like a hook (Bean et al., 2018;Adlakha et al., 2022), can be considered as a VPS13-specific insert in the loop between RBG domains (RBG10 and 11, see Figure 1A). A. Domain structure in five families of eukaryotic bridge-like lipid transfer proteins. Repeating beta-groove (RBG) domains (yellow, also called "extended chorein-N domains" (Melia and Reinisch, 2022)) are numbered below. Also showing previously known domains with non-RBG structure: VAB repeats (VPS13 only, dark green), α repeats -tandem repeats (∼75 aa each) of paired helices in VPS13 and ATG2, also called ATG_C domains (magenta), pleckstrin homology (PH) domain in VPS13 (sky-blue), and transmembrane helices (TMH, black) (Tweek and Hobbit). An intrinsically disordered loop >300 residues long in SHIP164 is also indicated (Hanna et al., 2022), but loops shorter than 100 aa are not indicated. Blue regions A and B near the N-terminus of VPS13 (64-300 and 690-827) indicate bands where mutating hydrophobic sidechains lining the groove to charged residues (11 and 16 respectively) inhibited VPS13 function without altering the fold (Li et al., 2020). Predicted lengths of the RBG structures are as published elsewhere (Guillén-Samander et al., 2022;Neuman et al., 2022b). Homologs chosen are from yeast where possible (Vps13, Atg2, Csf1 and Hob1/Fmp27), as they are the most compact and have the smallest additional intrinsically disordered loops, or from human (SHIP164/UHRF1BP1L). B. Cartoon of three repeating β-groove (RBG) domains (colored blue/green/yellow in series). Each RBG domain consists of five β-strands in a meander pattern followed by the final sixth element consisting of an intrinsically disordered loop that most often starts with a short helix. Adapted from Neuman et al. (2022b) with permission. C. Structure of RBG domains 3, 4 and 5 from human Hobbit/BLTP2 (KIAA0100, residues 272-729) predicted by AlphaFold, colored as in B. D. Surface of RBG5 from C. (residues 569-695) showing the inside (concave) surface has hydrophobic side-chains (brown), while the outside (convex) surface is populated with hydrophilic side-chains (blue). A. Domains of VPS13A identified in Pfam and by previous studies with HHpred, colored by underlying structure: (i) RBG domains (mostly β, yellow), which include: Chorein_N, VPS13_N, VPS13_mid, Apt1 and VPS13_C (part); VAB repeats (all β, dark green) and the C-terminal pleckstrin homology (PH) domain, which is the only domain not unique to VPS13 (purple); (ii) all helical domains (magenta), which include: "handle" (C-terminus of VPS13_N, a four helical bundle containing the loops after both the RBG1 and RBG2 (Melia and Reinisch, 2022)) and ATG_C (overlaps C-terminus of VPS13_C). Two linked red sections indicate the region submitted to ColabFold (see B/C). "VPS13_N" refers to the domain called "VPS13" in Pfam (defined as "Vacuolar sorting-associated protein 13, N-terminal"), also called "VPS13_N2" at InterPro. Apt1 was originally defined in Hobbit and is homologous to the region indicated by dashed lines that includes RBG11/12 (Kaminska et al., 2016). B. Predicted structure for yeast Vps13 residues 1643-2111 + 2543-2840, which includes RBG10, VAB repeats 1&2, the unstructured linker that follows VAB repeat 6, and RBG11+12. The β/α portions of each domain/repeat are colored differently, see Key. 
Strand 5 of RBG10 is parallel to strand 1 of RBG11, in the same way that strand 5 of RBG11 is parallel to strand 1 of RBG12. The probability local distance difference test (pLDDT) of both RBG interfaces is ∼0.95. Asterisk indicates position in strand 2 of RBG11 where a WWE domain is found in VPS13A/C (see Section B). C. Contact map from AlphaFold, showing: internal interactions of each RBG domain near the main diagonal (grey boxes), off-diagonal interactions (RBG10+11, RBG11+12, blue dashed boxes), and predicted parallel contact by strand 5 of one RBG domain and strand 1 of the next domain (black arrows). D. Model of RBG multimer of VPS13, including the model of the Cterminus of yeast Vps13 as described in Methods. Regions are colored as in A, except: RBG domain helices are purple and disordered linkers are grey; the linker that follows the VAB repeats is highlighted in black. Note that no accessory domains extend beyond the end of the βgroove, with the PH domain lying behind the VAB repeats in this orientation. Even though AlphaFold can reliably build 3D models from contact maps, the multimer predicted by AlphaFold varies from solved cryo-EM structure in more subtle aspects including both superhelicity and the extent to which the groove forms a sharp "V", compared to a shallow groove (Li et al., 2020). Despite these limitations on AlphaFold (Adlakha et al., 2022;Neuman et al., 2022b), the VPS13 prediction has one possible biological implication, since it shows that the C-terminus of the lipid transfer groove in VPS13 is able to directly access a lipid bilayer, as the accessory domains do not project beyond the RBG multimer ( Figure 2D). However, the links to the accessory domains are flexible, and other possibilities include that hydrophobic groove of VPS13 interacts with integral membrane proteins, for example scramblases (Ghanbarpour et al., 2021;Adlakha et al., 2022). (II) VPS13 has four major types of RBG domains: Sequence relationships between different RBG domains in VPS13 were examined to determine how the multimer of RBG domains was formed. The tool used for this was HHpred, which has high accuracy and a small number of known flaws (Fidler et al., 2016). A preliminary step was to identify all RBG domains in human VPS13 isoforms ( Figure 3A). These do not align precisely with the Pfam domain structure, which has until now been the standard way to describe VPS13 structure ( Figure 3A and 2A). VPS13A is the shortest both in terms of sequence length and in number of RBG domains: 12. By comparison, VPS13 and VPS13D have 15, and VPS13B has 13 ( Figure 3A). All domains in VPS13 have five strands except RBG1 with 4 strands, the first being replaced by the N-terminal helix, and the final domain (RBG12 in VPS13A) with 2 strands (Neuman et al., 2022b). The 12 RBG domains in VPS13A align well with those in yeast Vps13 (yVps13) ( Figure 3B) and in most plant homologs ( Figure 3C), indicating that this is an ancient form. The increased number of domains in VPS13C and VPS13D fits with previous observations that their extended length originated from internal duplications of ∼500 residues (Kumar et al., 2018;Guillen-Samander et al., 2021). The basis for variation in RBG domain number may relate to width of contact size bridged by individual homologs in a way that has yet to be studied. The next step was to identify homologies between individual RBG domains. 
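The off-diagonal contacts and interface pLDDT values described for Figure 2C can be checked on any predicted model with a few lines of parsing. The sketch below reads Cα coordinates and per-residue pLDDT (which AlphaFold/ColabFold write into the B-factor column, on a 0-100 scale) and reports residue pairs from two chosen domain ranges that fall within a distance cutoff. The file name, the residue ranges and the 8 Å cutoff are illustrative assumptions, not the values used in the paper.

```python
import numpy as np

def read_ca_and_plddt(pdb_path):
    """Collect Calpha coordinates and B-factor (pLDDT, 0-100 in ColabFold output) per residue."""
    resid, coords, plddt = [], [], []
    with open(pdb_path) as fh:
        for line in fh:
            if line.startswith("ATOM") and line[12:16].strip() == "CA":
                resid.append(int(line[22:26]))
                coords.append([float(line[30:38]), float(line[38:46]), float(line[46:54])])
                plddt.append(float(line[60:66]))
    return np.array(resid), np.array(coords), np.array(plddt)

def interface_contacts(resid, coords, range_a, range_b, cutoff=8.0):
    """Return residue pairs (one from each range) whose Calpha atoms lie within the cutoff."""
    in_a = (resid >= range_a[0]) & (resid <= range_a[1])
    in_b = (resid >= range_b[0]) & (resid <= range_b[1])
    dists = np.linalg.norm(coords[in_a][:, None, :] - coords[in_b][None, :, :], axis=-1)
    ia, ib = np.where(dists < cutoff)
    return list(zip(resid[in_a][ia], resid[in_b][ib]))

if __name__ == "__main__":
    # 'vps13_splice_model.pdb' and the two residue ranges are placeholders for a real model
    # and for the RBG10 / RBG11 boundaries of interest.
    resid, xyz, plddt = read_ca_and_plddt("vps13_splice_model.pdb")
    pairs = interface_contacts(resid, xyz, (1643, 1800), (1850, 2000))
    print(f"{len(pairs)} inter-domain Calpha contacts; mean pLDDT {plddt.mean()/100:.2f}")
```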
When searches are seeded with whole protein strongly homologous regions in some hits will cause false positive alignments of adjacent unrelated regions (Fidler et al., 2016). This is a particular problem in proteins with repeats (data not shown). Therefore, multiple sequence alignments (MSAs) were created for each RBG domain separately using 5 iterations of HHblits to search deeply for homologs across human, yeast, protist and plant proteomes. In these searches, all RBG domains produced strong hits to the orthologous region of VPS13 in model species across a wide range of eukaryotic evolution (probability of shared structure >98% in fly, worm, yeast, Capsaspora, Trichomonas, Trypanosoma, Chlamydomonas and Arabidopsis). One aspect of the method that was not based on biological observation is that the six elements of RBG domains were defined as βββββ-loop. This was chosen in preference to any other permutation (for example βββ-loop-ββ) to maximize the power of sequence analysis, because either including the whole loop that follows the helix, the site of greatest variability, or making a deletion reduces the sensitivity of MSAs (not shown). Indication that RBG domains may not be formed from the βββββ-loop biological unit are highlighted in the descriptions of search results below. The observation that each RBG domain in VPS13 is to some extent unique led to a deeper study of homology relationships between the central 10 domains of VPS13A (RBG2 to 11), which found patterns of homology between the domains. This was seen in a second tranche of hits (typically with probability of shared structure up to 95%, and down to 10%) aligning each domain to other regions of VPS13 (and in some cases to ATG2 and SHIP164). This indicated more distant relationships between RBG domains, which to date have only been described approximately (Kumar et al., 2018;Castro et al., 2022). To establish relationships while avoiding all-vs-all comparisons, the situation was simplified by noting that domains tend to fall into one of two types, either similar to RBG2, the penultimate domain at the N-terminus RBG2, or similar to RBG11, the penultimate domain at the C-terminus ( Figure 3B). Some of these relationships are also supported by clustering of RBG domains solely using BLAST, for example clustering RBG4 both with RBG2 and RBG7 (Supplemental Figure 1). Eight out of ten domains showed strong homology (predicted shared structure >90%) to one but not both of the penultimate N-terminal or the penultimate C-terminal domain (Supplemental Table 1A). The two domains not in this group were RBG10 with moderate homology (predicted shared structure 77%), and RBG9 with weak and mixed homology (predicted shared structure 40% for N-terminus and 18% for C-terminus) (Supplemental Table 1A). The same findings were made for VPS13C and VPS13D, except three of the central domains of both (RBG5-7) had very close homologs 3 domains distant in the multimer ( Figure 3B, see numbers 5 1 -7 1 and 5 2 -7 2 inside domains). The inferred duplication events are located at a position similar to that previously suggested (Kumar et al., 2018). The origin of the extra sequence in VPS13C/D was confirmed by examining sequence relationships between >200 RBG domains from VPS13 homologs in 7 model organisms. A cluster map confirmed close relationships between the VPS13C pairs RBG5 1+2 , RBG6 1+2 , and RBG7 1+2 (Supplemental Figure 1). RBG5 1+2 , RBG6 1+2 , and RBG7 1+2 from VPS13D were also homologous to each other, but much more weakly. 
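A minimal sketch of the per-domain search strategy described above: each RBG domain is written out as its own FASTA file and used to seed an iterative HHblits search (five iterations, as in the text). It assumes a local HH-suite installation and a UniRef/UniClust-style HHblits database; the paths, the domain boundaries and the exact flags are placeholders and may differ between HH-suite versions.

```python
import subprocess
from pathlib import Path

# Hypothetical RBG domain boundaries for one protein (1-based, inclusive) - placeholder values.
RBG_DOMAINS = {"RBG2": (330, 480), "RBG11": (2550, 2700)}

HHBLITS_DB = "/databases/UniRef30"  # placeholder path to a local HHblits database

def build_domain_msas(full_seq, domains=RBG_DOMAINS, workdir="msas"):
    """Seed one iterative HHblits search per RBG domain and keep the resulting a3m alignment."""
    out = Path(workdir)
    out.mkdir(exist_ok=True)
    for name, (start, end) in domains.items():
        fasta = out / f"{name}.fasta"
        fasta.write_text(f">{name}\n{full_seq[start - 1:end]}\n")
        subprocess.run(
            ["hhblits", "-i", str(fasta), "-d", HHBLITS_DB,
             "-n", "5",                       # five search iterations
             "-oa3m", str(out / f"{name}.a3m"),
             "-cpu", "4"],
            check=True,
        )

# The resulting .a3m alignments can then be compared all-vs-all (e.g. with hhsearch/hhalign)
# to score how strongly each central domain resembles the RBG2-like or RBG11-like profiles.
```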
This suggests that RBG5/6/7 duplicated in two independent events, more recently in VPS13C than in VPS13D. Overall, this shows that VPS13A/C/D in their full extent consist of domains related to RBG1-2-11-12.

Figure 3. Homology between RBG domains indicates patterns of inheritance from shorter ancestral forms. A. Schematic representation of RBG domains in VPS13A-D, including their mapping to Pfam domains, colored as in the key. RBG domains are drawn in a stylized way to indicate that loops do not cross the β-groove perpendicularly; width of domains is determined by the number of β-strands: typically 5, but RBG1 and RBG12 narrower (4 and 2 strands respectively). Here, and in B-D following, the position of each RBG domain is indicated by numbers below, and "+"/"-" indicate variations from 5 strands per domain. B. Homology relationships to RBG2 (blue) or RBG11 (red) for central RBG domains in yeast Vps13 and human VPS13A-D. Depth of shading indicates stronger homology as in the key below. Numbers inside domains indicate the orthologous RBG domain in VPS13A, except VPS13B-RBG3, which has no homology to VPS13A, and is weakly similar to RBG12 in VPS13B (indicated by light red). Duplications indicated by bracketed areas, with superscript 1/2 for the numbers inside domains, and residue numbers for the duplication boundaries in VPS13A/C. Minor structural variations similar to RBG2 (loop) or RBG11 (bulge), or both (see Section Dii) are indicated by light blue or light red circles around domain positions (also in C). C. Homology relationships as in B for plant VPS13 proteins: VPS13M1/2 (highly similar paralogs, only one shown), VPS13S and VPS13X. RBG5 in the latter has no discernible homology outside plant orthologs. D. Homology relations of RBG domains in ATG2 (human ATG2B), SHIP164 (human and Arabidopsis), Hobbit (human) and Tweek (yeast Csf1), domain boundaries as in Supplementary File 1. Diagram summarizes findings from diverse species, except for SHIP164 where animal and plant proteins are shown separately. ATG2 RBG4 is the only domain showing no significant conservation across evolution (filled white). Homology with domains in VPS13A/VPS13B/ATG2/SHIP164/Tweek is indicated by numbers inside domains, which are prefixed by nil/B/ATG/S/T respectively. Hatching in domains indicates moderate or strong homology only within the orthologous RBG family, for example for Hobbit it indicates homology to RBG domains at the same position in non-animal homologs. The depth of color correlates with the degree of conservation. Homology between Tweek domains is shown by lines above, with thickness correlating with strength of homology.

Figure 4. A. Full domain structure of RBG proteins, including all human and Arabidopsis VPS13 homologs (except VPS13M1, which is the same as VPS13M2 but missing the C-terminal β-tripod). Domains include: transmembrane helices (TMH, black bars), α-bundle (pink ovals, including "handle" in VPS13), VAB repeats (green crescent made from 6 triangles), anti-parallel paired helices (called ATG_C, pink cylinders), PH (purple circle), and previously well-known accessory domains: UBA, ricin-like lectin (β-trefoil) in VPS13D, PH (N-terminal) and C2 as indicated. Additional domains: all-α: bundles (pink cylinders); mixed α/β: WWE (see key); mostly-β: ricin-like lectin in VPS13X, βH = β-helix, βS = β-sandwich, βT = β-tripod (see key).
Despite the β-tripods being called VPS62 in Pfam, the predicted structure aligns with the Gram-negative insecticidal protein 6fbm (z=15.8, RMSD 2.9 Å for 166 residues); and the AlphaFold model of Vps62 lacks the fold (data not shown); additionally, the β-tripod-VPS62 link has been falsified (Zaitseva et al., 2019). B. Structure of the β-sandwich insert in the VPS13B RBG10/11 loop predicted by ColabFold, which aligns with discoidin domains and similar carbohydrate-binding domains (z=8.1, rmsd 3.1 Å, across 112 residues), showing below for comparison a carbohydrate-binding module-32 (CBM32) domain from a C. perfringens alpha-N-acetylglucosaminidase family protein (PDB: 4A42). C. Structure predicted by ColabFold of the insert between RBG6 and RBG7 of VPS13X (1426-1585). DALI aligns this to haemagglutinin (PDB: 4OUJ, z=14.8, rmsd=2.1 Å, across 117 residues), confirming HHpred's probability of shared structure (pSS) = 88% with ricin-type β-trefoil lectins. The structurally similar domain in VPS13D (3592-3780) is shown for comparison below. D. VPS13M1 residues 2262-2421 predicted by ColabFold as a β-tripod, which appears as a tandem pair, plus an extra copy at the C-terminus of VPS13M2. E. ColabFold prediction of VPS13M1 residues 1726-1840, forming a 4-turn β-helix. In B-E, structures are colored in a spectrum from N-terminus (blue) to C-terminus (red). pLDDT of the models of new domains (excluding inserted loops) in VPS13B, VPS13X, and the β-tripod and β-helix in VPS13M was 0.88 over 142 residues, 0.89 over 136 residues, 0.88 over 158 residues and 0.95 over 110 residues (B-E respectively).

These are therefore the four major types of RBG domain in VPS13. In turn this suggests that a very primitive ancestor of VPS13 may have consisted of just these four domains, with growth by repeated central duplication. Appearance of new domains centrally like this is the typical pattern of duplication of multimeric domains in one protein (Bjorklund et al., 2006). While divergence for RBG3 to RBG8 is limited (homology is strong), divergence has been greater for RBG9 and RBG10. The mixed homology in the former is not unique (Supplemental Table 1A). Here opposite ends of the RBG domain are homologous to RBG2 and RBG11 (data not shown), which is evidence for inheritance of RBG domains in units other than βββββ-loop. Structural features that differentiate between RBG2 and RBG11 are described in Section D. (III) VPS13B diverged from VPS13A/C/D at the 7+ RBG domain stage: VPS13B is an outlier compared to the others in its VAB repeats (Dziurdzik et al., 2020), and the same applies for its RBG domains. Six RBG domains (RBG1/2/4/11/12/13) are close in sequence terms to VPS13A (RBG1/2/4/10/11/12), and a 7th domain is well conserved but not in sequence (RBG7 in VPS13B is similar to RBG3 in VPS13A). The remaining RBG domains in VPS13B follow a quite different pattern (Figure 3C). Five of these, RBG5/6/8/9/10, do not align well with any of the domains of VPS13A/C/D in the same position, instead resembling a mixture of domains or a penultimate domain. In addition, one domain in VPS13B (RBG3) has just weak and partial homology to the C-terminal penultimate domain and resembles no other domain specifically (Figure 3C and Supplemental Table 1A). These findings indicate that VPS13B started to diverge from VPS13A/C/D at or after a 7-domain stage consisting of RBG1-2-3-4-10-11-12 (numbering according to VPS13A). The timing of this divergence is addressed in the next section.
(IV) Phylogeny of VPS13A/C/D and VPS13B indicates LECA expressed both of these isoforms: looking across evolution to construct a phylogeny for VPS13, in invertebrates the fruit fly D. melanogaster has three VPS13s: clear homologs of VPS13B and VP13D, plus one protein called Vps13 that is related to both VPS13A and VPS13C. This is consistent with the VPS13A/C pair arising from a relatively recent (≥300 MYr) duplication (Ugur et al., 2020). The domain structures of the three fly VPS13s are identical to human VPS13A/B/D, suggesting that VPS13C is the divergent vertebrate homolog. Among other invertebrate model organisms, the nematode worm C. elegans has two VPS13s that resemble VPS13A/C (gene: T08G11) and VPS13D (C25H3.11) (Brickner and Fuller, 1997;Velayos-Baeza et al., 2004). VPS13D in C. elegans (and in other worms, not shown) has eleven RBG domains: in detail it lacks RBG3 and one set of RBG5-6-7. It also does not contain the UBA domain found in human and fly VPS13D (Anding et al., 2018;Shen et al., 2021). Other invertebrates were examined to provide context for the situation in worms. The simple eukaryote Trichoplax adherens also has the VPS13A/C and VPS13D pair, and T. adherens VPS13D has the same 15 RBG domains as human, which indicates that worms are outliers in their loss of domains. For VPS13B, even though it is missing from C. elegans and Trichoplax, a wider search showed that some invertebrates have VPS13B. These findings indicate that RBG domains are both gained and lost across evolution, but do not address how ancient VPS13B is. To determine if ancestral versions of VPS13B and VPS13D predate the evolution of animals, more divergent genomes were examined. Capsaspora owczarzaki, a free-living single cell organism related to animal precursors (Suga et al., 2013) has four VPS13s, three of which resemble VPS13A/C, VPS13B and VPS13D in flies, indicating that VPS13B and D both originated before animals evolved. Looking deeper into evolution, the slime mold Dictyostelium discoideum, which diverged from the common animal/fungal ancestor, has multiple VPS13s (Leiba et al., 2017), but none are specifically related to either VPS13B or VPS13D, and this is the case in other amoebae (not shown). To determine if the absence of VPS13B and VPS13D in amoebae is because they evolved in opisthokonts only, VPS13 sequences were compared across the whole of eukaryotic evolution. In a cluster map of >1000 proteins, VPS13D was closely linked to VPS13A/C, while VPS13B clustered separately (Supplemental Figure 2). A key finding is that 8% of the VPS13B cluster were from SAR/Harosa protists and algae (Supplemental Figure 2), with species such as Aphanomyces containing full-length homologs of VPS13B (data not shown). By comparison, the VPS13D cluster contained one amoebal protein and one algal protein, and the basis for such clustering was unclear as each showed stronger BLAST hits to VPS13A/C than to VPS13D. Thus, in agreement with the RBG domain results ( Figure 3B), it appears that VPS13D split from VPS13A/C in pre-metazoal evolution, and that VPS13B is an ancient paralog of VPS13A/C/ D so widespread that it is likely to have also been in LECA. This conclusion differs from prior work that related VPS13B to specific homologs in plants or slime mold (Velayos-Baeza et al., 2004;Leiba et al., 2017), possibly because those assignments relied on proteins from a small number of model species rather than from a large range of protein sequences considered together. 
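The cluster-map analysis described here (grouping of >1000 VPS13 sequences, Supplemental Figure 2) can be approximated by an all-vs-all sequence comparison followed by single-linkage grouping on significant hits, as sketched below. The input file name, the E-value threshold and the use of blastp for the comparison are assumptions; this is a rough stand-in for the published cluster map rather than a reproduction of it.

```python
import subprocess
from collections import defaultdict

def all_vs_all_blast(fasta="vps13_homologs.fasta", evalue=1e-10):
    """Run blastp of a sequence set against itself and return significant non-self hit pairs."""
    result = subprocess.run(
        ["blastp", "-query", fasta, "-subject", fasta,
         "-evalue", str(evalue), "-outfmt", "6 qseqid sseqid evalue"],
        capture_output=True, text=True, check=True,
    )
    pairs = set()
    for line in result.stdout.splitlines():
        q, s, _ = line.split("\t")
        if q != s:
            pairs.add(tuple(sorted((q, s))))
    return pairs

def single_linkage_clusters(pairs):
    """Group sequences connected by any chain of significant hits (union-find)."""
    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for a, b in pairs:
        parent[find(a)] = find(b)
    clusters = defaultdict(set)
    for name in list(parent):
        clusters[find(name)].add(name)
    return list(clusters.values())
```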
While the analysis here is based on assigning orthology, VPS13 in protists separated from other extant organisms by long evolutionary branches cannot be assigned to any one group. They still merit study as they are examples of the general principle of plasticity in RBG proteins. Individual VPS13 proteins in Chlamydomonas, Plasmodium and Toxoplasma have 7192, 9307 and 13455 amino acids respectively, the latter containing 188 predicted β-strands (not shown), suggesting expansion to 38 RBG domains with a hydrophobic groove >75 nm. It may be that intracellular parasites have unique inter-membrane contacts that are wider than typical eukaryotic cells. (V) VPS13X is a previously undescribed divergent plant homolog: Until now, plants have been thought to contain three VPS13 homologs (Velayos-Baeza et al., 2004). To describe these briefly, the only plant VPS13 that has been studied experimentally is SHBY (At5g24740), named because mutations cause Arabidopsis to appear shrubby (Koizumi and Gallagher, 2013). The name VPS13S (from SHBY) is proposed here to standardize the format across VPS13 proteins in major clades of organism where possible. The two other identified Arabidopsis VPS13 proteins (At4g17140 and At1g48090 (Velayos-Baeza et al., 2004)) have not been named or studied directly. They are close paralogs and they contain multiple accessory domains (described in Section Biii), so the names VPS13M1/2 (for multiple) are used here. VPS13S and VPS13M1/2 have RBG domains homologous to VPS13A and yVps13 ( Figure 3C), consistent with this being the arrangement of RBG domains in LECA's VPS13A/C/D ancestor. HHpred identified a fourth VPS13 in Arabidopsis (At3g50380), which is annotated in databases as "Vacuolar protein sorting-associated protein", named here VPS13X, and which has close homologs in most land plants (data not shown). VAB repeats near the C-terminus identify it definitively as a VPS13 rather than ATG2, but it is variant being shorter than other VPS13s with nine RBG domains, only six of which are homologous to domains in other VPS13s: RBG1-2-4-7-8-9 follow the same pattern as RBG1-2-4-10-11-12 (VPS13A numbering, Figure 3C). Two domains (RBG3 and RBG6) are unrelated to specific VPS13 domains, though they have distant relationships to the penultimate domains (Supplemental Table 1A). Finally, one domain (RBG5) is homologous only to the orthologous domain in other VPS13X proteins, and has seven β-strands, confirmed by ColabFold (Supplemental Figure 3). This high level of divergence makes it impossible to tell if VPS13X originated from VPS13A/C/D or from VPS13B or from their common ancestor. (VI) ATG2 shares six RBG domains with VPS13: Relationships between the eight RBG domains of ATG2 were traced as had been done for VPS13 (Supplemental Table 1B, Figure 3D). The four domains nearest the ends (RBG-1/2/7/8) are strongly homologous to the four major RBG types in VPS13A, which adopt equivalent positions (RBG-1/2/11/12). The next pair inwards show homology to domains RBG3/10 in VPS13A, though weak for RBG3. Finally, the most central two domains (RBG4 and −5) cannot be traced to any domain outside ATG2 itself. While one (RBG5) is well conserved among ATG2 proteins, the other (RBG4) is highly variable across evolution, for example S. cerevisiae and S. pombe domains are unrelated. These results suggest that ATG2 and VPS13 have a common ancestor with 6 RBG domains, thus possibly preceding the split in VPS13 at the 7 domain stage. 
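As a rough consistency check on the scaling invoked earlier in this section for the Toxoplasma VPS13 homolog (188 predicted β-strands taken to imply ~38 RBG domains and a groove of >75 nm), one can divide the strand count by the five strands of a canonical RBG domain and multiply by a per-domain groove length estimated from the rod dimensions quoted in the Introduction (~15 nm for 8-domain ATG2, ~20 nm for 12-domain VPS13). The per-domain length is an assumption, so this is only an order-of-magnitude estimate and comes out slightly below the figure quoted in the text.

```python
# Order-of-magnitude estimate for the Toxoplasma VPS13 homolog (188 predicted
# beta-strands, see text): convert strands to RBG domains, then to groove length.
strands = 188
domains = strands / 5                  # 5 strands per canonical RBG domain -> ~38

# Per-domain groove length bracketed from rod lengths quoted in the Introduction:
# ATG2 ~15 nm over 8 domains, VPS13 ~20 nm over 12 domains (both approximate).
per_domain = (20 / 12, 15 / 8)         # ~1.7-1.9 nm per domain
low, high = domains * min(per_domain), domains * max(per_domain)
print(f"~{round(domains)} RBG domains, groove of roughly {low:.0f}-{high:.0f} nm")
# -> ~38 domains and a groove of roughly 60-70 nm, the same order as the >75 nm in the text.
```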
(VII) SHIP164 lacks a specialized C-terminus: SHIP164 has homologs from animals to plants with six RBG domains, with homology confined to the their N-termini (RBG1/2/3), partially resembling the N-termini of both VPS13 and ATG2 ( Figure 3D). For RBG3, a common ancestor related to RBG7 of ATG2 is present, but the domains in animals and plants have diverged so far as to share no homology. In the C-terminus, there are some domains related to others outside the family (Supplemental Table 1C), but RBG4 and RBG6 in plants are unique forms, and the latter with 5 strands is considerably different from the same domain in animals, which has 2½ strands. The homology of the final domain of SHIP164 in animals to RBG2 of VPS13 is significant because it groups the domain with those specialized to be in the middle of the multimer (and away from those specialized to be at the end of the multimer), suggesting that the C-terminus is similar to the middle of an RBG multimer (see Section Ciii). (VIII) Hob and Tweek proteins show more distant homologies to VPS13: Before mapping RBG domains in these proteins, a preliminary step was to check previous reports that homologs of Tweek exist only in fungi (Csf1) and animals, such as fly (Tweek) and human (BLTP1) (Neuman et al., 2022b;Toulmay et al., 2022). Using HHpred, full-length homologs were identified in Trypanosoma (4203 aa, XP_011775923) and Trichomonas (2695 aa, XP_001306426), lengths within the range between yeast Csf1 (2958 aa) and fly Tweek (5075 aa) or human BLTP1 (5093 aa). All length variation in this family results from intrinsically disordered loops throughout the proteins. Although the presence of Tweek/ Csf1 homologs in multiple protists might arise from horizontal gene transfer, it suggests that Tweek/Csf1 was present in LECA, which is a more ancient origin than was previously considered. Hobbit and Tweek both start with transmembrane helices (TMHs) that anchor them in the ER ( Figure 1A) (Castro et al., 2022;Hanna et al., 2022;John Peter et al., 2022;Toulmay et al., 2022). A feature at the N-terminus of Hobbit and Tweek is that their RBG2s are dissimilar to the RBG2 shared by VPS13, ATG2 and SHIP164. However, both Hobbit and Tweek have domains that are distantly related to RBG2 in VPS13/ ATG2/SHIP164 (RBG4/6 and RBG6/7/15 respectively), indicating that this domain was present in their ancestral forms ( Figure 3D). The lack of RBG2 in its standard position might reflect reduced evolutionary pressure for protein interactions at the N-terminus compared to RBG domains because of the TMHs. The other three major RBG domain types (RBG1/11/12, VPS13 numbering) also have homologs in Hobbit. The strong C-terminal homology has been noted previously (Rzepnikowska et al., 2017), while the weak N-terminal homology is a new finding. This leaves 6 other domains, which are conserved within the Hobbit family but not beyond ( Figure 3D). In Tweek, only two of the major RBG domains can be found: RBG1 is strongly homologous to RBG1 in VPS13/ATG2/ SHIP164, and three domains are weakly similar to RBG2. The major RBG domains types at the C-terminus (RBG11/12 in VPS13) are not found, as is the case for SHIP164. Three other Tweek domains (RBG5/11/14) show weak homologies to three central VPS13 domains (6/7/8), a different set of domains than occurs in Hobbit. This suggests that Hobbit and Tweek both have common ancestors with the other RBG proteins, but the precise forms cannot be determined. 
Relationships between two groups of Tweek domains (RBG5-10 and RBG11-15) suggest an internal duplication. All domains in Tweek are conserved across the family, and three have variant numbers of β-strands (3, 3 and 7 β-strands in RBG8/10/15) conserved from human to Trichomonas. This indicates that acquisition of domains and their rearrangement all occurred before LECA. To summarize the whole section on conservation: the majority of RBG domains have conserved sequence across all eukaryotes. A minority of RBG domains (VPS13X RBG5, ATG2 RBG4, Hobbit RBG4) have changed so much that the individual RBG domain is not detectable across all orthologs. This minority shows that primary structure can change to lose detectable homology without affecting secondary structure or lipid transfer function. In turn this implies that the conservation present in all the other parts of RBG proteins serves specific functions other than lipid transfer itself. Section D addresses these conserved sequences in detail. B. The Range of Accessory Domains in VPS13 is Wider Than Previously Known, Providing More Ways to Interact with Partners VPS13 has a set of three widely conserved accessory domains near its C-terminus (VAB, ATG_C and PH), ATG_C also being found in ATG2. After delineating the RBG domains, it became possible to identify all other accessory, folded, non-RBG domains using HHpred with confirmatory modelling by ColabFold ( Figure 4A). Being able to benchmark HHpred against AlphaFold predictions simplifies recognition of variant RBG domains, even if they contain multiple and extended inserts of small helices and disordered loops, or part of their secondary sheet structure is mistakenly identified as helix by PSIPRED, the tool used by HHpred (Buchan et al., 2013). Below, additional alpha helical domains are discussed first, and then a range of mainly-beta domain inserts that are far more numerous among VPS13 homologs than previously thought. (I) Helical bundles similar to those in VPS13 and ATG2 are found in Hobbit in Tweek: The best known accessory domain in any VPS13 is the ∼45 aa helical UBA domain in VPS13D (Velayos-Baeza et al., 2004). This is inserted in the RBG12/ 13 loop ( Figure 4A), and has been proposed to be related to VPS13D's role in mitophagy, given its typical role in binding polyubiquitin (Anding et al., 2018;Shen et al., 2021). UBA domains also occur at the extreme C-terminus in a minority of some algal VPS13 homologs, indicating a strong selection pressure for UBA domains in this family. RBG proteins have only a few other helical regions, one of which has solved structure: the VPS13 "handle", a helical bundle with unknown function near the N-terminus consisting of helices from loops in 2 RBG domains (RBG1/2 and RBG2/3) (Li et al., 2020). Secondary and tertiary structures of other helical regions among the RBG proteins were predicted by HHpred and AlphaFold respectively. The best known helical region is the ATG_C region (∼150 aa) in ATG2 that has been reported to target lipid droplets and autophagosomes ( Figure 4A) (Velikkakath et al., 2012;Kotani et al., 2018). VPS13 has a homologous region also reported to target lipid droplets (Kumar et al., 2018), and a homologous region is present in a subgroup of fungal sterol glucosyltransferases (Grille et al., 2010). Alphafold suggests that the whole ATG_C region contains four main helices forming two repeats of an anti-parallel helical pair (∼75 aa). 
Although AlphaFold orients the two bundles at a specific angle to each other, the low probability of local distance difference test (pLDDT) for this region suggests that the bundles are highly mobile, with no support for any particular relative orientation. Alphafold predicts three similar anti-parallel helical pairs in Hobbit (two conserved in plants), but none in SHIP164 or Tweek, the latter having a conserved helical bundle (120 residues) near its middle ( Figure 4A). Each helical region contains at least one segment ≥18 residues that has amphipathic properties, suggested to play a role in membrane targeting (Kumar et al., 2018), however the regions in Hobbit and Tweek are not located near the end of the Hobbit and Tweek bridges, so it is not clear how they might interact with membranes. (II) VPS13B contains a carbohydrate binding accessory domain, matching the function of the lectin domain in VPS13D: Other accessory domains that have been described before, though not yet studied, include a second domain in VPS13D and one domain each in VPS13A and VPS13C. The second domain in VPS13D is a Ricin-type β-trefoil lectin inserted in a loop of the sixth VAB repeat ( Figure 4A) (Velayos-Baeza et al., 2004), which may indicate a role in binding an O-linked GlcNAc group, which can be reversibly added to serines and threonines of cytoplasmic proteins particularly in nutrient and stress sensing pathways, including regulators of autophagy (Hart et al., 2007, Hart, 2019. The domains in VPS13A/C are 75 residue α/β WWE domains inserted in RBG11, although databases annotate only a minority of cases ( Figure 7A) (Cai et al., 2022;Hanna et al., 2022). WWE domains are thought to bind partner proteins involved in either ubiquitination or poly-ADP-ribosylation (Li and Chen, 2014). Together with the UBA domain of VPS13D, this may reflect consistent pressure for VPS13 proteins to interact with the ubiquitination machinery. While VPS13B was not previously thought to have an additional domain, the survey here revealed a 180 residue mostly-beta domain inserted in the RBG10/11 loop ( Figure 4A). This might have been missed previously because it has no sequence homologs, and its mostly-β structure is hard to distinguish from the typical pattern of RBG domain elements. HHpred identified the domain in all VPS13B homologs, including in protists (e.g., the oomycete Phytophthora). ColabFold modelled this as a β-sandwich ( Figure 4B). The same β-sandwich is found in the discoidin domain, which binds carbohydrate, so VPS13B and VPS13D potentially have accessory domains with similar functions, although the structure and position differs from the lectin in VPS13D. (III) Multiple accessory domains link plant VPS13 proteins to ubiquitination: Plant homologs vary considerably in their accessory domains. While VPS13S is the same as yVps13, VPS13X is the only protein studied here that lacks any of the characteristic accessory domains: its C-terminal PH domain is replaced by three helices with amphipathic properties, possibly extending the ATG_C region. VPS13X also contains an additional 145 residue mostly-β domain in the loop between RBG6 and RBG7. ColabFold predicts this to be a Ricin-type lectin, with the same typical β-trefoil structure found in VPS13D, though lacking any sequence homology ( Figure 4C) (Parker et al., 2021). 
The presence of functionally identical domains in VPS13X (plants) and VPS13D (opisthokonts) might indicate an ancient relationship between these proteins, or repeated acquisition of accessory domains from the same family. In contrast to other VPS13 homologs with one extra domain, VPS13M1/2 have many (6 and 5 respectively, Figure 4A). In Pfam VPS13M1/2 are identified as respectively containing a PH domain in the RBG4/5 loop or a C2 domain inserted the first loop of the first VAB repeat. HHpred extends this to discover both PH and C2 domains in both VSP13M1 and -M2. Pfam also documents a tandem pair of "Vps62 domains" in both proteins, while HHpred finds a third such domain in a loop of the C-terminal PH domain of VPS13M1 only. AlphaFold predicts these domains as β-tripods ( Figure 4D), a structure first described in bacterial lysins, where a possible function in protein binding has been identified, and the confusion with Vps62 is explained as a spurious alignment error (Zaitseva et al., 2019). Finally, HHpred identified a four-turn right-handed β-helix located in the RBG8/9 loop of VPS13M1/2 ( Figure 4E). This domain is also found in some protist VPS13s (not shown), and previously was found in various proteins including F-box proteins such as FBXO11 (Yoder et al., 1993;Ciccarelli et al., 2002). Thus, the analysis of domains in plants suggests yet another link for VPS13 to ubiquitination pathways. Overall, all four human VPS13s have non-standard accessory non-RBG domains near the C-terminus and an even more complex situation has evolved in plants ( Figure 4A). Knowing the location of these domains adds to the catalog of sites likely to bind interaction partners, as already known for VAB and PH domains (Bean et al., 2018;Dziurdzik et al., 2020;Park and Neiman, 2020;Guillén-Samander et al., 2022). In yeast just one homolog carries out the multiple functions of VPS13, and correspondingly yVps13 has multiple intracellular locations (De et al., 2017;Dziurdzik and Conibear, 2021). It is possible that the many different accessory domains in complex organisms facilitate the division of VPS13 function into subsets that are regulated independently. C. Extreme Ends of RBG Multimers Have Amphipathic Heliceswith the Exception of the C-Terminus of SHIP164 If RBG proteins bridge across membrane contact sites, each extreme end might interact with a bilayer to allow lipid entry/ exit. This section looks at the properties of structural elements at the ends of RBG multimers, focussing on a common finding that they are capped by amphipathic helices, and an exception at the C-terminus of SHIP164, which has a different kind of helix, a coiled-coil. (I) RBG multimers without TMHs start with an amphipathic helix, except ATG2: At the N-terminus there is uniformity across the entire RBG superfamily, with all RBG1 domains being homologs of the Chorein_N domain, even though for Hobbit this homology is weak (Figure 3). In one study of targeting of ATG2, just 46 residues at the N-terminus were needed for ER localization (Kotani et al., 2018), which includes just the start of RBG1/Chorein_N. RBG1 uniquely has the first strand replaced by a helix that crosses the multimer perpendicularly, in contrast to the helices that occur in the middle of the multimer that align along it (see below). To look for adaptation for interaction with target bilayers, these helices were examined for potential amphipathicity (Gimenez-Andres et al., 2018). 
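The screening for amphipathic character applied in this section and in the Figure 6A legend (a hydrophobic face of at least 5 non-glycine/alanine/proline residues within an 18-residue window) can be approximated with a simple helical-wheel calculation. In the sketch below, the 100° per-residue geometry is standard, but the 120° face width, the exact hydrophobic residue set and the net-charge tally are assumptions rather than the authors' precise procedure, and the test sequence is invented purely for illustration.

```python
HYDROPHOBIC = set("FILMVWY")        # assumed face-forming residues (G/A/P excluded, per the text)
POSITIVE, NEGATIVE = set("KR"), set("DE")

def hydrophobic_face(window, face_half_width_deg=60.0):
    """Largest number of hydrophobic residues clustered on one helical face (100 deg/residue)."""
    angles = [(i * 100.0) % 360.0 for i in range(len(window))]
    best = 0
    for centre in range(0, 360, 5):                      # scan candidate face orientations
        count = 0
        for aa, ang in zip(window, angles):
            diff = min(abs(ang - centre), 360.0 - abs(ang - centre))
            if aa in HYDROPHOBIC and diff <= face_half_width_deg:
                count += 1
        best = max(best, count)
    return best

def classify_helix(seq, window_len=18, min_face=5):
    """Flag a helix as amphipathic if any 18-residue window has >=5 hydrophobic residues on one face."""
    windows = [seq[i:i + window_len] for i in range(max(1, len(seq) - window_len + 1))]
    best_face = max(hydrophobic_face(w) for w in windows)
    net_charge = sum(aa in POSITIVE for aa in seq) - sum(aa in NEGATIVE for aa in seq)
    return {"amphipathic": best_face >= min_face, "face_size": best_face, "net_charge": net_charge}

# Example with a made-up sequence; real input would be a predicted terminal helix.
print(classify_helix("MLSFVKDLLNRVIPQWLEKK"))
```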
VPS13 starts with a helix with amphipathic properties that are well conserved, being on average uncharged and having a broad hydrophobic face (9 residues, Figure 5A and Supplemental Table 2A). This is consistent with insertion in a membrane with many packing defects and low levels of anionic headgroups, such as the ER where the N-terminus targets in almost all examples studied of VPS13 (Kumar et al., 2018, Gimenez-Andres et al., 2018González Montoro et al., 2018). Exceptions include VPS13B with + 3 charge, which might explain its ability to "moonlight" on endosomes (Koike and Jahn, 2019). Even more extreme, the amphipathic helices in SHIP164 and VPS13X have narrow hydrophobic faces and are highly charged ( Figure 5B, Supplemental Table 2A), which is consistent with these N-termini inserting into tightly packed membranes with high levels of anionic headgroups, similar to those that insert in the plasma membrane (Bigay and Antonny, 2012;Hanna et al., 2022). ATG2 differs from VPS13 and SHIP164 by commencing with an unstructured hydrophobic loop that can enhance membrane insertion ( Figure 5C) (Fuglebakk and Reuter, 2018) and distantly resembles the "WPP motif" that targets plant proteins to the nuclear envelope (Patel et al., 2004). Following that loop is a helix with a short amphipathic section (11-13 residues) ( Figure 5D). This helix is important for ER targeting, as it includes three charged residues required for Atg2 function (Kotani et al., 2018). Although it is unknown how these two elements for membrane interaction might function together, they might be able to target multiple organelles, accounting for the ability of ATG2 to transfer lipids from more than one source (Noda, 2021), particularly highly curved tubules (Maeda et al., 2019). The N-termini of both Hobbit and Tweek start with TMHs that integrate into the ER ( Figure 1A) (John Peter et al., 2022;Neuman et al., 2022a;Toulmay et al., 2022). While Hobbit has only a TMH, in Tweek and its homologs the TMH is followed by an amphipathic helix ( Figure 5E and Supplemental Table 2A). Since this is unlikely to dominate over the TMH in ER targeting, its role might be to select regions of curvature and/or to locally disorganize the bilayer within the ER (Gimenez-Andres et al., 2018). (II) The helix at the extreme C-terminal end of RBG multimers is amphipathic, except in SHIP164: At the other end of RBG multimers, most are immediately followed by a helix. In VPS13, ATG2, Tweek and most Hobbit homologs (not human, but in fly yeast and plant) the helices are amphipathic ( Figure 6A, Supplemental Figure 4 and Supplemental Table 2B). These helices have biophysical properties similar to the helices that immediately precede the β-sheet (Figure 5), and they too are predicted by Alphafold to cross the RBG multimer and occlude it (for example, VPS13A see Figure 7), so they might have similar functions to their counterparts at the N-terminus. Future experiments will determine whether these interact directly either with the bilayer, for which feasible models have been created by others (Dall'Armellina et al., 2022), or alternately with protein partners as has been implied by functional relationships with scramblases (Ghanbarpour et al., 2021;Orii et al., 2021;Adlakha et al., 2022;Guillén-Samander et al., 2022). SHIP164 is the one RBG protein that has a different form. In opisthokont SHIP164 the disordered region after the final RBG ends with a coiled-coil. 
This means that the C-terminus of SHIP164 is unique among human RBG proteins in lacking an amphipathic helix.

Figure 5 (legend, continued). ... (1-18), as in A. C. Peptide consensus of the segment prior to the initial helices of ATG2 N-termini from 46 sequences across evolution; median length 8 (6-15) residues with conserved large hydrophobic side-chains. D. Helical wheel projection of the first 11 residues only of the initial helix of human ATG2B (7-17), with the remaining helix indicated by grey letters below. Three positive residues, KKR (residues 10-12) in ATG2B, homologous to QKR (10-12) in yeast Atg2, that mediate ER targeting by an unknown mechanism are underlined in blue (Kotani et al., 2018). E. Helical wheel projection of the amphipathic helix from the yeast homolog of Tweek (Csf1), showing the hydrophobic face as in A.

Figure 6. The C-terminus of the RBG multimer of SHIP164 uniquely has a coiled-coil. A. Helices following the final RBG domain in five families were assessed as amphipathic (blue), coiled-coil (red) or neither (grey). For predicted helices to be defined as amphipathic, 5 or more residues within a segment of 18 residues have to form a hydrophobic face (not glycine/alanine/proline). Recognized domains that follow the RBG multimer without any strongly predicted interaction are shown as filled shapes (not to scale, colors as Figure 1). The penultimate RBG domain is also shown for context. The examples chosen from each family are: VPS13: yeast; ATG2: human ATG2A; Hobbit: yeast Hob1 (aka Fmp27); Tweek: yeast Csf1; SHIP164: human. Similar results were obtained with other family members. B. Top-ranked model made by ColabFold of two copies of the C-terminus of SHIP164 (250 residues, 1215-1464), with RBG6 + terminal helix seen from the top (left, with fog as a depth cue) and from the side (right). Two dimerization interfaces were predicted, both with very high confidence (pLDDT ≥ 90%): the β-sheet (predicted for all 5 top models), and the coiled-coil (predicted in four of the five top models). Chains A and B are colored differently (see Key). C. Diagram of the predicted β-sheet dimeric interaction for SHIP164. The final strand of RBG5 and all of RBG6 are shown (colored as in A) together with a second copy rotated through 180° (colored yellow). The dimerization interface (blue lines) consists of parallel interactions between the final short strand and the N-terminal portion of the penultimate strand on the other monomer.

(III) Speculation about the C-terminus of SHIP164: might it dimerize end to end? In addition to the C-terminus of SHIP164 having a coiled-coil but no amphipathic helix (Figure 6A), the RBG domain itself resembles a central domain, rather than a terminal one (Figure 3C). This suggests the possibility that sequence C-terminal to RBG6 might have a function other than interacting with a membrane. Speculatively, it could interact with another RBG domain at a dimerization interface. ColabFold was therefore used to test whether SHIP164 can homodimerize. Predictions indicated two dimer interfaces: sheet-to-sheet and coiled-coil (Figure 6B). The sheet interface is between the final strand, which is half-length, and the penultimate strand (Figure 6C). Although AlphaFold is imperfect, including in predicting dimeric interfaces (Bryant et al., 2022), these predictions taken as a whole are intriguing as they suggest the possibility that SHIP164 in vivo forms tail-to-tail dimers.
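The homodimer test just described (two copies of the SHIP164 C-terminus given to ColabFold) can be set up by writing both chains into one FASTA record separated by a colon, which is how ColabFold marks multi-chain queries. The residue range follows the Figure 6B legend, but the local sequence source, file names and any extra options are assumptions.

```python
import subprocess
from pathlib import Path

def predict_homodimer(full_seq, start=1215, end=1464,
                      fasta_out="ship164_cterm_dimer.fasta", outdir="ship164_dimer_out"):
    """Model two copies of SHIP164 residues 1215-1464 (Figure 6B) as a complex."""
    fragment = full_seq[start - 1:end]
    # ColabFold treats ':'-separated sequences within one record as separate chains.
    Path(fasta_out).write_text(f">SHIP164_Cterm_dimer\n{fragment}:{fragment}\n")
    subprocess.run(["colabfold_batch", fasta_out, outdir], check=True)

# Load the full-length sequence from a local FASTA of SHIP164/UHRF1BP1L (placeholder file name),
# e.g. with a reader like the one sketched earlier, then call predict_homodimer(seq).
```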
Although such dimers have been reported for purified SHIP164 in vitro (Hanna et al., 2022), this observation is not conclusive because purified ATG2 also forms dimers at high concentrations. This question can only be addressed by seeking experimental evidence for the possibility of SHIP164 dimerization in vivo. In the absence of such evidence, it is worth pointing out a plausible alternate possibility that also explains the bioinformatic findings: in this scenario SHIP164 is a monomer, interacting with a membrane partner, possibly via a hetero-dimeric coiled-coil. By contrast, homo-dimerization in vivo would have two functional implications: firstly, it would extend the reach of SHIP164 from ∼10 nm to ∼20 nm, consistent with the inter-membrane distances between endocytic vesicles enriched for SHIP164 (Hanna et al., 2022). Secondly, the two membrane interaction sites of a tail-to-tail dimer are identical copies of the N-terminus, which would imply symmetrical lipid transfer by SHIP164 bridging between homotypic organelles. Such homotypic activity has been described for cholesteryl ester transfer protein (CETP) acting on biochemically similar HDL particles (Mohammadpour and Akhlaghi, 2013). Here, homotypic activity by SHIP164 might involve bridges between the seemingly uniform endocytic vesicles that are separated from the ER by a zone of matrix (Hanna et al., 2022).

D. Conservation at the Level of Sequence (I) Amphipathic helices of the outermost RBG domains in VPS13 appear to have functions other than packing the terminal helices onto the groove: As described in Section B, there are four major types of RBG domain (RBG1/2/11/12, VPS13A numbering). These are also among the most highly conserved domains (Supplemental Figure 1). What are the unique structural and sequence features of these domains? To start with, the outermost RBG domains (RBG1/12) were examined, focussing mainly on VPS13 as this family has the richest information. In both RBG1 and RBG12 the amphipathic helices cap the groove by crossing it perpendicularly to the sheet (red arrows in Figure 7A). This property of RBG1 has been seen by crystallography in VPS13 and ATG2 (Kumar et al., 2018), but for RBG12 there is no crystallographic data, only AlphaFold prediction. Many of the most highly conserved residues in these domains are located on both the perpendicular helices and the adjacent sheet, indicating that they are involved in packing interactions. However, these are not the same as the residues that specify the amphipathic helices, since the hydrophobic face of the amphipathic helices is not involved in packing. Instead these residues, all moderately or highly conserved, do not point at the subjacent sheet and do not directly contact it (Figure 7B). This indicates that the amphipathicity of these helices is conserved as a feature separate from packing onto the sheet. Speculatively, if the amphipathic helices engage with a partner, either a lipid bilayer or a protein, they might adopt different conformations from the ones found in available structures. One possibility that might be explored is that they could rotate to open up the hydrophobic groove allowing lipid entry/exit (Figure 7B, curved arrows). Some support for helix mobility comes from considering isoleucine 2771 in VPS13A, which causes disease when mutated (Figure 7A, grey arrow) (Rzepnikowska et al., 2017). The analogous mutation I2749R modelled in yVps13 reduces binding of the C-terminus to phosphatidylinositol 3-phosphate (PI3P).
Given its location on the inner surface of strand 1 of VPS13A RBG12, the increased size and charge of the mutant side-chain appears unable to affect PI3P binding directly, but might affect the closed position of the amphipathic helix.

Figure 7. A. and B: Models of RBG1 and RBG12 from VPS13, each domain rotated to highlight the interface between terminal helix and adjacent sheet. A. shows the most conserved residues identified by ConSurf as spheres. Red arrows indicate the helices that run perpendicularly across the outermost ends of the RBG multimer (N-terminal helix in RBG1 and the C-terminal helix in RBG12). RBG12 has a helix inserted between strands 1 and 2 (grey arrowhead in A). Color scheme: secondary structure: sheet, yellow; helix, purple; loop, grey; side chains only of highly conserved residues (spheres) are colored by residue type: A/I/L/P/V: coral pink; FWY: magenta; NQST: cyan; G: white Cα; His: light blue; RK: blue; DE: red (using single-letter code). Structures are based on AlphaFold models. B. highlights two small groups of residues: spheres = hydrophobic face of the amphipathic helices (RBG1: 1/3/7/8/11/14/15/18/19/22; RBG12: 2852/6/9); ball and stick = sheet residues near to these (RBG1: 27/29/31; RBG12: 2821/3/5). There is no direct contact. Grey helices, black curved arrows and question marks indicate speculative movement of helices to open the ends of the groove. Isoleucine 2771, mutated to arginine in some patients, is indicated by the black ball-and-stick residue in A; see main text.

(II) Conserved structural variations differentiating between the two archetypal central RBG domains of VPS13 vary across orthologous domains: The two RBG domain types of greatest significance are RBG2 and RBG11 (VPS13A numbering), because they represent archetypal forms in sequence terms that account for almost all central domains in VPS13, ATG2 and SHIP164 and for ∼50% of domains in Hobbit and Tweek (Section B, Figure 4). To understand these two major types of domain, the predicted structures of RBG2 and RBG11 were compared (Figure 8A). This showed that each has a characteristic structural feature: RBG2 has a loop between strands 3 and 4 (≥15 aa), and RBG11 has a bulge inserted in the middle of strand 4 (≥5 aa) onto the external face of the groove (Figure 8A). The loop in RBG2 is verified by the identical appearance being seen by crystallography (Kumar et al., 2018). The bulge in RBG11 cannot be verified directly (but see below). Similar loops tend to be present in domains related by sequence to RBG2, and bulges in domains related to RBG11, including RBG6 where a bulge is present in the cryo-EM structure (Li et al., 2020). Parallel structural variations (the loop and the bulge) are smaller or absent in RBG domains otherwise homologous to RBG2 or RBG11, both in humans (Figure 8B) and in plants (Figure 3C, circles highlighting domain positions). Indeed, some domains have both features (Supplemental Figure 5A). Variability of the features is illustrated by RBG10, which has neither feature in any orthologous domain except for animal VPS13A/C, indicating that a loop may have been acquired just in that clade. Therefore, loops and bulges show too much variability to provide information about the events that created the domains present in LECA.
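Mapping conservation onto predicted structures, as done with ConSurf for Figure 7 and in the next section, can be approximated by scoring each alignment column and writing the score into the B-factor column of the model for coloring in a structure viewer. ConSurf itself uses a phylogeny-aware Bayesian method, so the simple frequency score below is only a crude stand-in; the file names and reference row are assumptions, and it presumes that the model's residue numbering matches the reference sequence.

```python
def read_fasta_alignment(path):
    """Read an aligned FASTA file into {name: aligned_sequence}."""
    records, name = {}, None
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if line.startswith(">"):
                name = line[1:].split()[0]
                records[name] = ""
            elif name:
                records[name] += line
    return records

def column_conservation(alignment, reference):
    """Score = frequency of the reference residue at each ungapped reference position."""
    ref = alignment[reference]
    scores, pos = {}, 0
    for col, ref_aa in enumerate(ref):
        if ref_aa == "-":
            continue
        pos += 1
        column = [seq[col] for seq in alignment.values() if seq[col] != "-"]
        scores[pos] = column.count(ref_aa) / len(column)
    return scores  # 1-based residue number in the reference -> conservation (0-1)

def write_scores_to_bfactor(pdb_in, pdb_out, scores):
    """Copy a PDB file, replacing B-factors with conservation scores (scaled to 0-100)."""
    with open(pdb_in) as fin, open(pdb_out, "w") as fout:
        for line in fin:
            if line.startswith(("ATOM", "HETATM")):
                res = int(line[22:26])
                b = scores.get(res, 0.0) * 100
                line = line[:60] + f"{b:6.2f}" + line[66:]
            fout.write(line)

# Usage sketch (placeholder file names):
#   aln = read_fasta_alignment("vps13a_rbg_domains.aln.fasta")
#   scores = column_conservation(aln, "VPS13A_HUMAN")
#   write_scores_to_bfactor("vps13a_model.pdb", "vps13a_conservation.pdb", scores)
```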
(III) Conserved residues in RBG domains VPS13 form a strip on the external, hydrophilic surface ideally placed for interacting with partners: As a final step to understanding RBG domain conservation, the most highly conserved residues in the central domains (RBG2-11) of VPS13A were mapped ( Figure 8C and D, and Supplementary Movies 1-3). Looking first at the helices at the ends of domains, half of RBG2-11 have conserved residues packing the interface between the helix and adjacent sheet, while the other half have no or few conserved residues on the helix or sheet (Supplemental Figure 5D). This suggests that the latter group of helices might be mobile, however, countering this, in cryo-EM of VPS13 all the helices of RBG domains 2 to 7 align with the long axis contacting the rim of the groove (Li et al., 2020). A second feature of the central domains is the presence of conserved residues along the entire outside convex surface of the groove, which is simplified here by showing hydrophilic residues only viewed from the convex surface and hydrophobic residues only viewed from the concave surface ( Figure 8C and D), and is also seen when viewing all residues on all surfaces (Supplementary Movies 2 and 3). To check if conserved residues near the C-terminus are involved in intramolecular interactions with accessory domains, RBG10/11/12 were examined for contacts with VAB (repeat 1) ATG_C and PH domains, which AlphaFold places overlying them. Analysis of co-evolution maps indicated that there is no such contact (data not shown). Similar to VPS13, some central domains of other RBG multimers often have conserved hydrophilic residues on their external faces (Supplemental Figure 5A to C), and this applies across almost the whole of Hobbit (Supplemental Figure 5E and F). Overall, these findings indicate that a large majority of the entire length of RBG multimers has conserved residues on the outside (mainly hydrophilic) surface. Conserved hydrophilic residues on the convex surface of RBG domains tend to occur near the rim of the groove from which helices originate ( Figure 8C, open arrows), which suggests that this rim is a characteristic site for interactions with partners. This applies to the most central RBG domains, which are furthest from membrane anchoring points. An example of the importance of such central conservation is found in a catalog of high probability disease-causing missense mutations in VPS13B (Zorn et al., 2022). Four of 12 mutated sites are external facing residues located along various parts of the multimer, and three mutated sites not inside the groove are at some distance from the ends (RBG6 to 10) ( Figure 8E). Thus, disease genetics agrees with sequence conservation to suggest that residues far from membrane anchoring points that do not interact with lipid cargo are critical to function. Conclusions Analyzing the sequences of RBG protein sequences has allowed the examination of several conserved generic properties in RBG proteins: the number of RBG domains; patterns of internal domain duplication; accessory domains; interaction modules at the extreme ends of the multimers; and conserved residues along the entire RBG multimer. Because all of these properties are conserved, they are likely to be related to conserved functions. The number of domains may match the width of contact site, which will surely be a subject of future study, along with multimer flexibility to accommodate changes in length. 
The pattern of domain duplication shows the extent to which the origins of the RBG superfamily can still be detected in sequences >1.2 Bn years after they originated (Mills et al., 2022). One major caveat to these conclusions is that they assume that current sequences indicate a pattern of gradual accumulation of domains from ancestral forms related to RBG1-2-11-12 vertically across time, rather than events such as gene conversion either within or between RBG proteins. However, whatever the events were, conservation across eukaryotes indicates that they took place before the formation of LECA. As domains duplicated, their binding partners might also have duplicated; for example multiple members of protein family X may bind to different parts of VPS13 if the ancestral protein X interacted with the ancestral forms of either RBG2 or RBG11.
Figure 8. Distribution of conserved structural and sequence variations in VPS13. A: Comparison of the predicted structures of RBG2 (light blue) and RBG11 (gold) from VPS13A, indicating the extended loop between strands 3 and 4 of RBG2 and the bulge in strand 4 of RBG11; the left-hand view shows the concave hydrophobic surface, the right-hand view (rotated 180° over a horizontal axis) shows the convex hydrophilic surface, so both views have the N-terminus on the left-hand side. The loop in RBG2 includes a short helix (residues 228-236) that is placed adjacent to the outside surface of RBG1 both by AlphaFold and in the crystal structure 6CBC, however neither the helix nor that part of the sheet has conserved residues. Black lines in RBG11 indicate positions of the start and end of the WWE domain (Figure 4A). B-D: Three different sets of views of the 10 central RBG domains from VPS13A (human, predicted in 2 segments, courtesy of Pietro De Camilli) aligned in the same orientation, untwisting the natural superhelix. The three views are: B. Back-bone only of convex surfaces (oriented as in A, right), showing loops as in RBG2 (light blue circles) and bulges as in RBG11 (gold asterisks); C. The most conserved hydrophilic residues (DEHKNQRST) viewed from the concave surface (as in B); open arrows indicate the dominant location for these residues near the rim of the groove where the helices originate. D. The most conserved hydrophobic residues (ACFGILMPVWY) viewed from the convex surface (view as in A, left). Spheres indicating conserved residues are colored as in Figure 7. Structures are based on AlphaFold models, with the helical bundle ("handle") removed for clarity. E. Locations of 12 VPS13B missense variants that cause human disease without effects on protein expression (Zorn et al., 2022). Location of mutated residue is indicated by shape as in the key: triangle (x2) = inside (hydrophobic) surface of groove; square (x1) = outside (hydrophilic) surface of groove; circle (x3): intrinsically disordered loop; diamond (x6) = accessory domain. Shapes are colored for wild-type (left) and mutant (right) side-chains: ILC: coral pink; Y: magenta; NST: cyan; G: white; RK: blue; DE: red; numbering is for the 4022 aa VPS13B isoform, which has 25 aa inserted at ∼1400 aa compared to the isoform reported by Zorn et al. (2022); domains are colored as in Figure 1A, with added grey lines and thick purple segments to indicate loops and helices. Each RBG strand and each VAB repeat is separated by dashed and dotted lines respectively. Note that the Y2341C mutation is in the β-sandwich inserted in the loop following RBG10 (Figure 4B).
Folded accessory domains are strong candidates to mediate partner interactions, after the identification of key interactions for VAB repeats (Bean et al., 2018;Dziurdzik et al., 2020;Adlakha et al., 2022) and for the C-terminal PH domain of VPS13A (Guillén-Samander et al., 2022). This work defines the full range of such domains: the ones with defined folds are all in VPS13, mostly near its C-terminus. Amphipathic helices are present at the extreme ends of multimers, with one exception: the C-terminus of SHIP164, which instead has a coiled coil, the significance of which is not yet known. Sequence conservation on the external face of the lipid binding hydrophobic groove may participate in a series of binding sites along the entire bridge for protein partners. Some partners of RBG may be proteins that are separately recruited to the contact site, but others may be recruited solely by this interaction, potentially making RBG proteins hubs for contact site function. Shortcomings of the study include that it only focusses on folded domains, leaving out short linear motifs, only some of which have been mapped in VPS13 (Kumar et al., 2018;Guillen-Samander et al., 2021) and ATG2 (Bozic et al., 2020;Ren et al., 2020), which fits with their overall discovery rate of <5% (Davey et al., 2017). Adding these to information on domain structure will allow further hypotheses on RBG protein function to be generated and tested. Remote Homology: HHpred Sequences were obtained from both Uniprot and NCBI Protein databases for VPS13 in 8 organisms: human, fly (D. melanogaster), nematode worm (C. elegans), Trichoplax adherens, S. cerevisiae, S. pombe, Capsaspora owczarzaki and A thaliana. For the one sequence that was incomplete, Trichoplax VPS13A/C (1299 aa), additional sequence was assembled from adjacent genes that encode sequences homologous to the N-and C-termini of VPS13AC. This added the N-terminus, but left a likely gap of 1500−2000 amino acids that is possibly be encoded in a genomic region of 13891 bp. tBLASTn in this region identified 262 aa of VPS13A/C-like sequence in 6 regions (probable exons, data not shown) in 5 statistically significant hits, suggesting that an unannotated complete VPS13A/C is expressed in Trichoplax, of which 1561 aa have so far been identified. The form that RBG domains take in HHpred (Soding, 2005) was initially determined from cross-correlating HHpred searches seeded with portions of entire protein structures predicted by AlphaFold. Many strands are seen in HHpred as two disconnected halves, each 6-10 residues. Inserts with no sheet or helix were ignored. Long inserts in the middle of domains were omitted if they prevented alignment on one side of the insert. This allowed identification of all RBG domains in human VPS13A, VPS13B, extra RBG domains in VPS13C/D, and selected domains from yeast and Arabidopsis. Benchmarking of HHpred against AlphaFold showed that HHpred underestimates the length of β-strands in RBG domains by >10%, and that it mis-identifies a small minority of strands as helices (data not shown). Based on these observations, all HHpred hits longer than 25 residues with predicted shared structure ≥10% that contained any predicted sheet were considered true positives. 
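To make the true-positive criterion above concrete, the following is a minimal sketch (not the authors' pipeline) of how such a filter could be applied to already-parsed hit records; the field names and the example records are hypothetical placeholders for whatever a real HHpred output parser would return.

```python
# Minimal sketch: applying the true-positive filter described above to a list of
# already-parsed HHpred hits. The field names ('length', 'shared_structure',
# 'secondary_structure') are hypothetical placeholders.

def is_true_positive(hit):
    """Keep hits >25 aligned residues, >=10% predicted shared structure,
    containing at least one predicted beta strand ('E' in the SS string)."""
    return (hit["length"] > 25
            and hit["shared_structure"] >= 0.10
            and "E" in hit["secondary_structure"])

hits = [
    {"name": "RBG2-like", "length": 61, "shared_structure": 0.34, "secondary_structure": "EEEECCEEEE"},
    {"name": "short",     "length": 18, "shared_structure": 0.50, "secondary_structure": "EEEE"},
    {"name": "helixonly", "length": 40, "shared_structure": 0.20, "secondary_structure": "HHHHHHHH"},
]

print([h["name"] for h in hits if is_true_positive(h)])  # -> ['RBG2-like']
```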
Each domain, defined as the strands plus 12-20 aa of the 6 th element (i.e., the first few turns of the helix for uniformity), was submitted to HHpred with 5 iterations of HHblits to make multiple sequence alignments (MSAs), then used to find homologs in target HMM libraries of key whole proteomes (human, S. cerevisiae, C. owczarzaki and A thaliana). To identify homology of a query domain to RBG2 and RBG11 in VPS13, four pair-wise alignments were made using the "Align two sequences/MSAs" option, aligning the query with MSAs from the penultimate domains both of VPS13A (RBG2 &11) and of VPS13B (RBG2 & 12). Both VPS13A and VPS13B were included here to capture the breadth of VPS13 sequences, and the four MSAs of penultimate domains were made superficially (one iteration of HHblits) to preserve unique qualities of the domains. All regions with predicted secondary structure that did not align well with the typical 5 strand + loop pattern of RBG domains were submitted to HHpred to identify their fold (Gabler et al., 2020), and also had structure predicted by ColabFold (Mirdita et al., 2021). Cluster Maps For relationships between RBG domains, 229 domains from VPS13 proteins in the 8 eukaryotes listed above were submitted to CLANS, using BLOSUM45 as scoring matrix (Frickey and Lupas, 2004;Gabler et al., 2020). 162 connected domains at p < 0.1 were clustered (>100,000 rounds) in 2 dimensions. For relationships between whole VPS13 proteins, 1266 full length proteins sequences were submitted to CLANS, using BLOSUM62. This list was obtained by generating two separate lists of proteins homologous to the central portion of the RBG multimer (to avoid proteins that only are homologous to the VAB domain) seeding HHblits searches with either VPS13A/C/D (RBG2-9) or VPS13B (RBG2-10) (Remmert et al., 2012). The resulting lists of 886 VPS13A/C/D homologs (1 iteration) and 703 VPS13B homologs (3 iterations) were then reduced by removing (near) identities using MMseq2 with default settings (Steinegger and Soding, 2017), and the four human sequences were added. Full-length sequences were used to cluster, >100,000 rounds, with 1186 proteins connected at p < 10 −30 . Analysis of Helices and Short Sequences Sequences were analyzed for propensity to be amphipathic and coiled coils using Heliquest and Coils tools, respectively (Lupas et al., 1991;Gautier et al., 2008). A consensus from the N-termini of ATG2 homologs was obtained from a MSA after 3 rounds of HHblits seeded with ATG2 from Drosophila (Remmert et al., 2012). From 514 entries, sequences containing intact extreme N-termini (38 aa) were selected, rare inserts were reduced by dropping sequences plus editing by hand, and identical sequences reduced to single entries, leaving 46 sequences. These were aligned with MUSCLE (www.ebi.ac.uk). The first 9 columns of this were visualized with WebLogo (https:// weblogo.berkeley.edu). Identification of Conserved Residues AlphaFold models, either complete or divided into regions defined as RBG domains (see Supplemental File 1) were submitted to Consurf (Ashkenazy et al., 2016), using standard settings except breadth of searches was maximized by setting HMMER to 5 iterations, producing 400-2000 unique sequence homologs. Where insufficient homologs were obtained, HHblits was used to build an MSAeither 2 or 3 iterations, n = 200-300. Highly conserved residues were identified as those scoring 8 or 9 on conservation color scale (from 1 to 9). 
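The amphipathicity screening mentioned above (Heliquest) rests on the helical hydrophobic moment. The sketch below is illustrative only and is not part of the published workflow: it computes an Eisenberg-style mean hydrophobic moment for a peptide, with approximate scale values and an invented test sequence.

```python
# Rough sketch of the kind of amphipathicity screen performed with Heliquest:
# mean hydrophobic moment mu_H = |sum_n H_n * exp(i*n*delta)| / N, delta = 100
# degrees per residue. Hydrophobicity values are approximate Eisenberg consensus
# values; the test sequence is invented, not a real RBG N-terminus.
import cmath, math

EISENBERG = {  # approximate consensus scale
    "A": 0.62, "R": -2.53, "N": -0.78, "D": -0.90, "C": 0.29, "Q": -0.85,
    "E": -0.74, "G": 0.48, "H": -0.40, "I": 1.38, "L": 1.06, "K": -1.50,
    "M": 0.64, "F": 1.19, "P": 0.12, "S": -0.18, "T": -0.05, "W": 0.81,
    "Y": 0.26, "V": 1.08,
}

def hydrophobic_moment(seq, delta_deg=100.0):
    """Mean helical hydrophobic moment of a peptide sequence."""
    delta = math.radians(delta_deg)
    total = sum(EISENBERG[aa] * cmath.exp(1j * n * delta) for n, aa in enumerate(seq))
    return abs(total) / len(seq)

seq = "LLKSLWDSLKEAAHKL"  # invented example
print(f"mu_H = {hydrophobic_moment(seq):.2f}")  # large values suggest amphipathicity
```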
Abbreviations: LECA, last eukaryotic common ancestor; LTP, lipid transfer protein; MSA, multiple sequence alignment; RBG, repeating beta groove; TMH, trans-membrane helix; yVps13, yeast Vps13; pLDDT, probability local distance difference test. Declaration of Conflicting Interests: The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article. Funding: The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work was supported by the Biotechnology and Biological Sciences Research Council (grant number BB/P003818/1). Supplemental Material: Supplemental material for this article is available online.
Ground state of a two dimensional quasiperiodic antiferromagnet We consider the spin 1/2 Heisenberg antiferromagnet on a two dimensional quasiperiodic tiling. The T=0 state is long range ordered, with an inhomogeneous distribution of the staggered moment. An approximate real space renormalization scheme first developed for the square lattice is generalized and used to obtain information on the ground state energy as well as the distribution of local correlations in the quasicrystal. Magnetism in quasicrystals can be very complex, due to the extreme sensitivity, in such systems, of local moment amplitudes as well as of the interactions to structural details. A considerable simplification of the problem is however possible for the recently studied rare-earth based quasiperiodic alloy ZnMgHo [1]. The rare-earth based magnetic alloys represent a conceptually simpler system than the transition metal alloy quasicrystals that were initially the object of experimental studies, since the magnetic moments are associated with f-orbitals, and can be assumed in the first approximation to be independent of the local itinerant-electron density of states. This is to be contrasted with the earliest magnetic quasicrystals of the AlMn family, where the itinerant magnetic moments on the Mn atoms depend sensitively on detailed structural features due to the d-orbital hybridization (see the review by Hippert et al in [2]). To add to the difficulties, the early alloys were metastable quasicrystals of inferior structural quality so that the role of disorder had to be considered in addition to the intrinsic behavior. Experimental results indicated a wide distribution of effective moments on the Mn atoms, as well as of the interactions between these, leading to a large number of unknown parameters in the phenomenological models describing such systems. From a theoretical viewpoint, therefore, the rare earth system is clearly far simpler. ZnMgHo was shown to undergo a magnetic transition into a magnetic state characterized by short range antiferromagnetic correlations with quasiperiodic modulation [1]. The experimental results lead naturally to the question of what properties one expects for the ground state of a quasicrystal with short range antiferromagnetic interactions. An acceptable starting point for models of such systems could be, as for crystalline compounds, a Hamiltonian with short range antiferromagnetic couplings between pairs of identical spins, H = Σ_⟨i,j⟩ J_ij S_i · S_j. Fig.1 shows the results of a recent Monte Carlo study of a two-dimensional model of quantum spins on a quasiperiodic tiling [3]. The circles on the vertices have radii that depend on the value of the local staggered moment, a quantity that we will define further below. The tiling considered is the eight-fold symmetric octagonal (Ammann-Beenker) tiling, in which sites can have six possible values of coordination number z. Sites were occupied by S = 1/2 spins, with uniform interactions J_ij = J > 0 along the edges of the tiling. The system is bipartite, meaning that every spin belongs to one of two sublattices and interactions couple only spins of different sublattices. Analogously to the spin 1/2 square lattice antiferromagnet, which is now believed to have a ground state with long range order, we expect that this quasiperiodic system, too, has a broken symmetry ground state with long range order.
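As an illustration of the model defined by this Hamiltonian, the following sketch (not taken from Ref. [3]) performs a brute-force exact diagonalization of H = J Σ_⟨i,j⟩ S_i · S_j for spin 1/2 on a small toy bipartite cluster, and evaluates the bond correlations ⟨S_i · S_j⟩ that enter the local staggered moments discussed below. The cluster used here, a six-site ring, is chosen only for brevity and is not a patch of the octagonal tiling.

```python
# Illustration only: exact diagonalization of H = J * sum_<i,j> S_i . S_j for
# spin-1/2 on a small toy bipartite cluster, returning the ground-state energy
# and the bond correlations <S_i . S_j>.
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
sy = np.array([[0, -1j], [1j, 0]], dtype=complex) / 2
sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2
I2 = np.eye(2, dtype=complex)

def site_op(op, site, n):
    """Embed a single-site operator at position `site` in an n-spin Hilbert space."""
    mats = [I2] * n
    mats[site] = op
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

def heisenberg(bonds, n, J=1.0):
    dim = 2 ** n
    H = np.zeros((dim, dim), dtype=complex)
    for i, j in bonds:
        for op in (sx, sy, sz):
            H += J * site_op(op, i, n) @ site_op(op, j, n)
    return H

# toy bipartite cluster: a 6-site ring (even length, so two sublattices alternate)
n = 6
bonds = [(i, (i + 1) % n) for i in range(n)]
evals, evecs = np.linalg.eigh(heisenberg(bonds, n))
gs = evecs[:, 0]
print("E0 per site:", evals[0].real / n)
for i, j in bonds:
    corr = sum(gs.conj() @ site_op(op, i, n) @ site_op(op, j, n) @ gs for op in (sx, sy, sz))
    print(f"<S_{i}.S_{j}> = {corr.real:+.3f}")
```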
Classically, the ground state corresponds to having oppositely directed sublattice magnetizations, with no frustration, in the sense that all bonds can be "satisfied" simultaneously. In the quantum case, the ground state will correspond to zero total spin since for the octagonal tiling, the two sublattices are equivalent. The inhomogeneous structure of the ground state seen in Fig.1 is a reflection of the environment-dependence of the quantum fluctuations around the Neel state in the quasicrystal. There is for the moment no spin wave expansion that would allow one, as in a periodic solid, to calculate the distribution of staggered moments and explain the QMC results. In fact, as we will see, a real space approach seems more appropriate for quasiperiodic tilings, and many such calculations exist for the case of one dimension. One-dimensional models to study the behavior of quantum spins on quasiperiodic chains have been considered by several authors. Quantum spin chains have been analyzed using renormalization schemes [4,5] based on the inflation symmetry of these chains. Using a mapping to fermionic models and techniques of bosonization [6], it is possible to obtain interesting results concerning global properties such as the magnetization as a function of external field, and the spectral gaps for a variety of different quasiperiodic sequences. However, real space information such as the distribution of the local quantities m²_loc,i ∼ |⟨S_i · S_{i+1}⟩| has not so far been calculated. For two dimensional structures, real space configurations have been studied for models with classical spins. Here, the ground state is nontrivial only when the model includes frustration. In [7] Godreche et al introduced a renormalization scheme on the Penrose tiling for a Heisenberg exchange model with competing antiferromagnetic interactions, and were thus able to obtain a phase diagram consisting of a variety of ordered phases. The real space spin configurations were calculated numerically in [8] for classical spins interacting via long-ranged dipolar interactions, and a complex magnetization distribution with overlapping decagonal rings reflecting the underlying Penrose tiling was found. A quasiperiodic magnetic state with a hierarchical ringed structure was found, as well, in a different context: that of itinerant magnetism due to interacting electrons [9]. With this background, we return to the problem of quantum Heisenberg spins with nearest neighbor antiferromagnetic interactions. Ref. [3] presented local staggered order parameters similar to the quantities m_loc,i above, calculated for individual sites using the expectation values for local spin-spin correlations. One sees in Fig. 1 that sites of the same z have similar local order parameter amplitudes. An explanation of this behavior was given by considering isolated star shaped clusters called Heisenberg stars in [3]. This provided a qualitative understanding of the decrease of local staggered magnetizations as a function of z, but for a more quantitative fit to the QMC results, it is necessary to go beyond the isolated cluster approximation, and take into account longer range correlations. This can be done in a renormalization group (RG) calculation that uses an important symmetry of the tiling, namely invariance under discrete scale transformations called inflations.
This renormalization group is a generalization of the calculation of Sierra and Martin-Delgado for the square lattice [10], where the authors considered star-shaped block spins formed by a central spin and its four nearest neighbors. In their calculation, block spins formed from these five-spin clusters are shown to interact via an effective Heisenberg antiferromagnetic interaction on a bigger √ 5 × √ 5 square lattice. The effective spin values scale to infinity, i.e. the classical limit, under renormalization. Their model for a translationally invariant system can, as we will see, be adapted to our quasiperiodic case under certain approximations. We thus calculate not only the global ground state energy as was done for the square lattice, but also the distribution of local order parameters. We will discuss the method, which has been briefly reported in [11], in some detail in the present paper. We begin with an introduction to the quasiperiodic tiling and the spin Hamiltonian in the next two sections. The RG scheme is described in the fourth section. Results and discussions are presented in sections five and six. Some general remarks The octagonal tiling [12] shown in Fig.2 can be thought of as the equivalent of the square lattice for quasiperiodic systems. It has therefore been frequently used for analytical and numerical investigations of the effects of quasiperiodic modulations in two dimensions. Spectral properties of electrons [13], transport properties [14], vibrational properties [15] and magnetic properties [3] have thus been studied for discrete models defined on the octagonal tiling. The tiling is built from two kinds of tiles, squares and 45 o rhombuses. These two types of tiles can fill the two-dimensional plane in an aperiodic way, as Penrose first showed for the five-fold tiling named after him [16,17]. Although there is no translational invariance in a quasiperiodic tiling, any given tile arrangement of tiles reoccurs all over the tiling with a certain frequency of re-occurrence -or, alternatively viewed, there exists a mean distance of separation between such identical domains. This is referred to as the repetitivity property of quasiperiodic tilings, and is very different from the situation in a disordered medium (where the expected distance in which one expects to find a second region identical to the first increases exponentially with the size of the region). Similarly, the property of symmetry under rotations for these tilings differs from that in crystals, for which the new and the old structures coincide exactly. For the quasicrystal, the equivalence of the new and old tilings holds in the "weak" sense, namely, any finite region of the new tiling after rotation will be identical to finite regions of the old one. Such aperiodic structures can be built using "matching rules". These are local rules that determine if and how two tiles can be laid side by side (see Ch.1 of [18]). Alternatively, tilings such as the Penrose and octagonal tilings could be generated by a projection method down from a higher dimensional periodic structure [19]. Such an approach can give either a deterministic, perfectly ordered tiling, or a random one where tiles are assembled subject only to the constraint that they should fill space without overlapping [20]. Random tilings are of great theoretical interest, but we are here interested in deterministic tilings, which have the important property of invariance under inflation/deflations, or discrete scale invariance. 
This symmetry is illustrated in Fig. 3 and will be described in more detail in the next section. It is this property that is responsible for the characteristic singular electronic and magnetic properties of such tilings and it was first pointed out in the Penrose tiling, which is invariant under a replacement of tiles by τ-fold bigger tiles, where τ = (√5 + 1)/2 [17]. One can define geometrical inflation rules for, among others, the Fibonacci chain in one dimension, the octagonal tiling in two dimensions, and the icosahedral tiling in three dimensions. The renormalization approach is a natural one for such geometrically self-similar quasiperiodic tilings, and this structural property has been exploited in order to establish recurrence relations for parameters occurring in discrete spin models, electron hopping models, etc, as mentioned before for the one-dimensional case, but also for some two-dimensional models [7,21], where analytical methods remain hard to implement. As noted in the introduction, our approach is inspired by the renormalization calculation of Sierra and Martin-Delgado [10] for the square lattice. Some principal properties of the octagonal tiling that are used in the RG calculation are reviewed in the next section, without demonstration. (For those interested, Appendix A contains some additional details on how to obtain a quasiperiodic structure, and how inflations/deflations are described in the framework of the projection method. Although not strictly necessary to understand the calculations presented below, an understanding of the geometrical properties of the tiling is important for those wishing to improve this approximate RG scheme and extend it to other models. For more details, the reader is referred to reviews in [22,23]). The six nearest neighbor configurations, corresponding to coordination numbers z = 8, 7, ..., 3, are labeled A, B, ..., F as shown in Fig.2. Fig. 4 shows these environments separately. In an infinite tiling, each of these types of site occurs with a well-defined frequency f_i, where f_A = λ^-4, f_B = λ^-5, f_C = 2λ^-4, f_D = 2λ^-3, f_E = 2λ^-2, f_F = λ^-1, (1) with λ = 1 + √2 (see Appendix A). One distinguishes between two kinds of D sites, as explained in the next section. It can be checked using the above frequencies that the average site coordination number on the octagonal tiling is exactly four. The inflation transformation Inflation proceeds as follows for the octagonal tiling: one starts with a tiling composed of tiles of a given initial edge length (we will assume this is equal to 1) and one reconnects a precisely determined subset of vertices so as to obtain a new tiling of the same type as the old, i.e. having the same set of local geometries, except for an overall scale change by a numerical factor λ = 1 + √2 (Fig.4). The sites shown as black dots on the original tiling belong to the α class: A, B, C and half the D (called D_1) sites. These become the sites of the new bigger tiling, while the remaining (β) sites drop out. Note that there are two varieties of five-fold sites, D_1 and D_2, which belong to the α and β classes respectively. On the octagonal tiling, they always occur in pairs. Appendix A shows how the two classes of D sites can be distinguished in terms of their perpendicular space coordinates [24]. Under inflation, the density of sites is reduced to 1/λ² of its initial value. The sites that remain acquire new values of the site coordination numbers z′ ≤ z.
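As a quick consistency check on the frequencies of Eq. (1), the short calculation below (a sketch added here, not part of the original text) verifies numerically that they sum to one and give an average coordination number of exactly four.

```python
# Quick numerical check of the vertex frequencies quoted in Eq. (1):
# they should sum to 1 and give an average coordination number of exactly 4.
from math import sqrt, isclose

lam = 1 + sqrt(2)
freq = {  # z: frequency (the z = 5 entry counts both D1 and D2 sites)
    8: lam ** -4,        # A
    7: lam ** -5,        # B
    6: 2 * lam ** -4,    # C
    5: 2 * lam ** -3,    # D = D1 + D2
    4: 2 * lam ** -2,    # E
    3: lam ** -1,        # F
}

total = sum(freq.values())
mean_z = sum(z * f for z, f in freq.items())
print(total, mean_z)   # 1.0000...  4.0000...
assert isclose(total, 1.0) and isclose(mean_z, 4.0)
```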
The table below lists the initial and final values of the coordination number for each of the α class sites (note that there are four different subcategories for the A sites - see Appendix A for more on the properties of these subcategories):
α site: initial z → final z′
A: 8 → 8, 7, 6 or 5
B: 7 → 5
C: 6 → 4
D_1: 5 → 3
For the four types of α sites, the next table lists the nearest neighbors (nn) in terms of the type of site and the number of sites of that type. This information will be useful in determining the final block spin value at the central site, as we will explain in sec.III.
α site: nn spin type (number)
A: F (eight)
B: F (five), E (two)
C: F (two), E (four)
D_1: D_2 (one), E (four)
The model considered is the nearest neighbor Heisenberg antiferromagnet H = Σ_⟨i,j⟩ J_ij S_i · S_j, (2) where ⟨i,j⟩ denotes a pair of spins S_i and S_j linked by an edge, and J_ij = J > 0 for such a pair. This system is bipartite with two identical (to be understood in the weak sense) subtilings (as on the square lattice). Finite spin clusters A sites are surrounded by eight F sites. If one isolates such a cluster of 8+1 spins, the lowest energy state for classical spins is the one with the eight peripheral spins antiparallel to the central spin. In the quantum case, the ground state of the cluster is rotationally invariant, and corresponds to the total cluster spin value S_tot = 7/2. The other α sites correspond to total cluster spin values in the ground state of S_tot = (z_B − 1)/2 = 3 around a B site, and so on. The four clusters are shown in the left hand side series of Fig.5. Clusters of each type can be defined on larger and larger length scales, by using the inflation rules already outlined to determine the new A, B, C and D_1 sites after inflation. Fig.5 shows the four α clusters on the next largest length scale on the right hand side series. Here, block spin centers are shown with big black dots, while the sites corresponding to the β sites are indicated by smaller dots. On a yet bigger length scale, Fig.6 shows a "second generation A site", namely, a site that remains of A type after two inflations, along with all the sites belonging to the cluster before the two decimations. The α clusters on all length scales are the building blocks for the renormalization scheme that follows. IV. THE RENORMALIZATION TRANSFORMATION The renormalization calculation is a generalization to an aperiodic system of the one used for the square lattice by Sierra and Martin-Delgado [10]. We review briefly the steps of their calculation before showing how they are modified in the quasiperiodic case. We consider the nearest neighbor Heisenberg antiferromagnet described by Eq. 2 with spin 1/2 on the vertices and the initial coupling J along the edges of the squares (of side a = 1). Fig.7 shows the five-spin blocks enclosed by circles. The four couplings inside each block are shown outlined by thick grey lines. As one sees, the block spins form a new rotated square lattice of side √5 (Fig.7). Each of the blocks can be diagonalized exactly. With every step of RG, only the lowest energy states of the blocks are retained to form the basis for the effective Hamiltonian. T_0 and T_0† denote the operators describing the transformations from the original Hamiltonian (acting in the complete Hilbert space) to the effective Hamiltonian (acting in the reduced Hilbert space). For a single block, the lowest energy sector corresponds to spin 3/2, and the ground state energy is e_0 = −JS(4S + 1). The couplings not already taken into account give rise to inter-block interactions, calculated by first order perturbation theory.
It is easy to check that the new block spins will be coupled antiferromagnetically to their nearest neighbors, like the original spins. The effective Hamiltonian H(N, S, J) can thus be written approximately as a sum of single-block contributions (the diagonal terms) and a set of terms involving nearest neighbor blocks (off-diagonal terms), and the formal expression for the transformed problem reads H(N, S, J) → (N/5) e_0(S, J) + H′(N′, S′, J′), where the new Hamiltonian H′ has the same form (bilinear in S′) as H, and N′ = N/5. The effective spin of a block spin is S′ = 3S = 3/2. The spin renormalization factor relating one of the four boundary spins to the new block spin has been shown to be close to the classical value ξ_0 = S_i/S′ ≈ 1/3 (see [10] for the exact value). The interaction between two contiguous blocks is J′ = 3ξ_0² J. Repeating the steps of renormalization, one has ultimately for the ground state energy per site an infinite sum of the form e_∞ = Σ_{n≥0} 5^-(n+1) e_0(S^(n), J^(n)), with e_0(S, J) = −JS(4S + 1) as above, where S^(n+1) = 3S^(n) and J^(n+1) = 3ξ²(S^(n)) J^(n). Under RG, the spins evolve to the classical limit, S → ∞, indicating that in the quantum case as well one has a ground state with broken symmetry. The couplings scale to zero, indicating the model is massless. Qualitatively, thus, the RG gives the now accepted physics of the model; however, quantitatively the value obtained for e_∞ ≈ −0.546 is not as good as that obtained by spin wave expansion and is about 15% higher than that established by numerical calculations [25]. We will return to this point at the end of the paper. On the octagonal tiling, it is clear that several kinds of block spins must be introduced. A natural choice is to designate the α sites as block spin centers. Fig.12 shows the positions of the block spins (black dots) on a portion of the tiling. Upon inflation, the other sites will disappear, leaving only the block variables, and some residual interactions between them. If no new couplings are generated, one will find an effective Hamiltonian similar to the old, except for the renormalized couplings which become site dependent. One can repeat the process, and determine if there is convergence to a fixed point. The simple scheme outlined above cannot be implemented without some modifications and approximations. The first problem arises because the connectivity of the tiling is such that some of the block spins overlap, that is, share two intermediate β sites in common. This is shown by the thick grey lines in Fig.12, which indicate the boundary between overlapping blocks. Overlapping occurs between contiguous C and D_1 blocks, as well as between contiguous D_1 blocks. This overlapping occurs with a finite density. One can calculate this density by noting that the shared sites occur between any two sites that are a distance λ² d_s apart, where d_s is the short diagonal of the rhombus. One finds, using the relative frequencies of occurrence of squares and rhombuses, that the density of pairs is √2/λ³, that is, about 10% of the total number of pairs. To deal with this problem, we therefore considered two possible modifications of the original model: (i) doubling the number of spins on each shared site, and considering each spin as being coupled to one block only, and (ii) decoupling the block spins by annulling one of the bonds to the left or the right so that spins are no longer coupled on both sides. The first modification leads to overestimating the total energy, the second to underestimating it, with respect to the original octagonal tiling.
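The square-lattice recursion quoted above can be iterated directly. The sketch below uses the classical renormalization factor ξ = 1/3 throughout (the exact, S-dependent factor of Ref. [10] differs slightly, so the numbers are illustrative only) and shows the two qualitative features of the flow: the block spins grow towards the classical limit while the couplings scale to zero.

```python
# Sketch of the square-lattice RG flow, with the classical factor xi = 1/(z-1) = 1/3.
S, J = 0.5, 1.0
for n in range(6):
    print(f"step {n}: S = {S:7.2f}   J = {J:.4f}")
    xi = 1.0 / 3.0
    S, J = 3 * S, 3 * xi**2 * J   # S -> 3S (classical limit), J -> J/3 (massless flow)
```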
Spin doubling on selected sites leads to an uninteresting flow under renormalization, where cluster energies basically repeat a scaled Heisenberg star distribution at each step. The bond dilution scheme yields a more complicated behavior of cluster energies under renormalization, and is the option taken up in detail in this paper. We note that the diluted model remains two-dimensional, and is not a scale invariant fractal such as the Sierpinski gasket [26], where bonds are also deleted hierarchically but in a way that leads to an effective fractal dimension less than two. The second problem is the quasiperiodic connectivity between blocks, which leads ultimately to an infinite number of environments. This is dealt with by truncating the number of environments we choose to distinguish between. The α sites always have the same type of nearest neighbors (given in Table 2); the β sites, however, occur in several configurations. We will now truncate the table of connectivities by allowing only one type of D_2, E and F site, with a correspondingly truncated connectivity table for the β sites (Table 3). In this subsection we discuss the blocks that are obtained after dilution and the values of the effective block spin. Fig.9(top) shows in detail a central D_1 site which transforms to an F site under inflation. The three neighboring block spins are shown as well, with the block spin sites shown by black dots. The original links are indicated by thin black lines, while the new effective links on the inflated tiling are shown by thick grey lines. Fig.9(middle) shows a C site transforming to an E site, with the same conventions used to denote block spin sites and new effective couplings. In this figure one sees that two of the block spins, corresponding to neighboring D_1 blocks, overlap. The pair of sites shared between the two blocks is coupled to the left and right by a total of four bonds. In the bond-dilution approach, one has to set two of the bonds equal to zero. This can be done in one of two ways that treat the two blocks equitably, leaving each D_1 block with one less bond. Finally, Fig.9(bottom) shows an A site transformed under inflation to a final A site. In this case, the eight D_1 blocks surrounding the center block form a ring of overlapping blocks. There are two ways to decouple them all by annulling eight of the sixteen links joining them in a way that treats all the D_1 sites equitably. Ultimately, the bond dilution results in an effective reduction of connectivity of C and D_1 sites: the former have the effective value z̃ = 5 and the latter z̃ = 3. B. Spin renormalization factors Consider a block spin composed of a central spin and the z spins surrounding it, with antiferromagnetic interactions. In the simplest case where all spins have the value S, the block has a spin of S′ = (z − 1)S in the ground state. The spin renormalization factors are taken to be equal to the classical value for simplicity, so that for a given block ξ_z = (z − 1)^-1. The new block spin values after one inflation are obtained from the old ones through a matrix C determined by the compositions of the blocks (Eq. 5). Note that the number of values of the block spins after each inflation does not grow - there are just four possible different values of the block spin at any stage of inflation. As in the square lattice example, the spins all tend to the classical limit as n goes to infinity. In addition, the largest eigenvalue of C, 3, is precisely that of the square lattice in section IV.1! This eigenvalue, along with the corresponding eigenvector, gives the flow of effective spin values in the limit of large n. Thus for large n, S^(n) ≈ 3S^(n−1).
This is the same spin renormalization as that on the square lattice where z is everywhere equal to 4. In both cases, the spins tend to infinity, i.e. the classical limit, under renormalization. On the tiling, moreover, the block spins tend to constant relative asymptotic values which are site dependent and given by the eigenvector (1, 1, 1, 1, 1, 3/4, 1/2). C. Ground state energy of an isolated block Consider the configuration of z + 1 spins of Fig.10 in which each of the z links represents the same antiferromagnetic coupling J, termed the Heisenberg Star (HS) in [3]. For spin 1/2 variables on each site and for a given antiferromagnetic coupling J between the central spin and its z neighbors, the ground state energy can be found exactly to be e^(0)(z) = −J(z + 2)/4. (6) On the octagonal tiling, one has the seven different families of star clusters on the tiling, with the corresponding values of z (8, 7, 6, 5, 5, 4 and 3 for A, B, C, D_1, D_2, E and F) entering the right hand side of the equation. The superscript "0" indicates that this corresponds to the energy of unrenormalized clusters. We also require the ground state energy in the case of clusters of spins of unequal lengths. The lowest energy state of a cluster in which z spins of unequal lengths S_i = n_i s_0 are coupled with strength J to a central spin S_0 = n_0 s_0 is taken to be a straightforward generalisation of Eq. 6 (Eq. 7). In the present model, although initially the couplings are all equal, after one RG step the couplings take on different values. Therefore we shall make an approximation later that consists of replacing the set of couplings around each site by a single locally averaged value. The largest eigenvalue of the proliferation matrix P is equal to 7, so that the total number of blocks increases (decreases) with the number m of deflations (inflations) as 7^m for large m. Notice that the proliferation of blocks is described by an integer, and not the irrational number λ² ≈ 6.8, each of these numbers being the answer to a different question. The former describes the rate of growth of a finite system in terms of the number of blocks. The latter is the scale factor of the change of site density under inflation/deflation for the infinite quasicrystal, and this is not restricted to have integer values. E. Renormalization of links There are an infinite number of types of links, since each link couples two sites that are each unique. However, just as we chose to truncate the size of the space of solutions by distinguishing only seven types of sites, we can consider a "minimal" model where it suffices to take into account only five kinds of links. These are represented in an array j = (j_αF, j_αE, j_D1D2, j_D2F, j_EF). Here, j_αF is used to denote the link between (A,F), (B,F), (C,F) and (D_1,F) pairs. Similarly, j_αE denotes the link connecting (B,E), (C,E) and (D_1,E) pairs. This oversimplification of the link classification ignores, in particular, that E and F sites can occur in more than one environment. However, in the first approximation, we have assumed here that one can treat all the sites of a given family as identical out to first neighbors, and this approximation will be found post facto to yield reasonably good numerical results. Note that there are no bonds linking sites that are separated by a distance d_s in the original tiling (recall that this is the shortest distance possible on the octagonal tiling) and the same is true for the sites of the inflated tiling since our bond dilution has the effect of decoupling such blocks. Interblock links are all the links not taken into account in the definition of blocks.
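The Heisenberg-star energy of Eq. (6) can be checked independently by brute-force diagonalization of a star of z spin-1/2 sites coupled to a central spin-1/2; the sketch below is added for illustration and is not code from the paper. It evaluates the ground state for z = 3 to 8 and compares with −J(z + 2)/4.

```python
# Numerical check of the Heisenberg-star ground state energy, Eq. (6), for spin-1/2.
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
sy = np.array([[0, -1j], [1j, 0]], dtype=complex) / 2
sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2

def site_op(op, site, n):
    mats = [np.eye(2, dtype=complex)] * n
    mats[site] = op
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

def star_ground_state_energy(z, J=1.0):
    n = z + 1                     # site 0 = centre, sites 1..z = neighbours
    dim = 2 ** n
    H = np.zeros((dim, dim), dtype=complex)
    for i in range(1, n):
        for op in (sx, sy, sz):
            H += J * site_op(op, 0, n) @ site_op(op, i, n)
    return np.linalg.eigvalsh(H)[0].real

for z in range(3, 9):
    e_num = star_ground_state_energy(z)
    e_formula = -(z + 2) / 4.0
    print(f"z = {z}: ED gives {e_num:.4f}, formula -(z+2)/4 gives {e_formula:.4f}")
```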
To find the new effective links, one also allows for bond moving, as illustrated by the following example: consider a central A site surrounded by eight D 1 -clusters. These transform to an A site with eight F sites around it after an inflation. We wish to obtain the effective link between the central A and one of the F sites. The original A site has sixteen links to the eight D 1 -clusters -i.e. it has two links per D 1 -cluster. These two links between the center and each peripheral block are of the EF type (see Fig.8c). Thus the new effective coupling between the central A → A site and each of the eight D 1 → F sites around it on the inflated tiling is of the αF type. It is antiferromagnetic, like the original couplings. One takes into account the spin renormalization factors of the block spins mentioned before, namely ξ A and ξ D respectively. The new coupling can then be expressed in terms of the previous generation of couplings by the equation For the second type of links, j αE , one sees that there are three EF links joining a A → B site to a C → E site, so that the new αE link is given by The other effective couplings can be written down similarly, although a problem arises due to the fact already mentioned, namely, that E and F sites can occur in more than one local environment. Here we chose just one option among the several, to write down the new effective D 2 F and EF couplings. With this truncation of the link relations, we have a system of equations between the five old and five new couplings, j (1) = M (0) j (0) , where with the initial condition (taking the zero order coupling J = 1) j (0) = (1, 1, 1, 1, 1). F. Averaged values of renormalized couplings After one inflation, the new tiling has the same geometry, with the same relative frequencies of vertices as the old tiling, however, the new onsite spins S (1) and intersite couplings j (1) are no longer uniform from site to site. To proceed, we define averaged quantities -averaged renormalization factors ξ (1) i and averaged couplings, for each of the seven types of site. The average couplings are easily found, using the local environments listed for each of the seven families in Tables 2 and 3. The simplest situation occurs for A sites, which have eight A-F links surrounding them, so that the average coupling is just  αF . For the six remaining sites we can similarly define averaged couplings that are linear combinations of the j (n) . Dropping the superscripts, we thus have seven averaged couplings as follows: Average renormalization factors ξ (1) i are analogously determined for each of the seven sites, and used to obtain the new matrix M (1) . This process is repeated and the result is a set of recurrence relations j (n+1) = M (n) j (n) with M having the same structure as in Eq.8. One can now study the evolution of the matrix M under successive inflations. The maximum eigenvalue of M , γ 5 ≈ 0.15. This results in a power law decay of the couplings for large n, since j (n) ≈ γ 5 j (n−1) . The corresponding eigenvector |v 5 determines the fixed point relative couplings. G. Hamiltonian of inflated system The effective Hamiltonian after a single inflation is now written down much as for the case of the square lattice. After the first renormalization there are block spins at each of the α-class sites, whose ground state zero order energies are ǫ where j can take on the values A,B,C or D 1 . 
The first term is a sum over the energies of Heisenberg stars defined on the four types of blocks α, given by Eq.6 or equivalently by Eq.7 with ǫ^(0)_j ≡ ǫ(1, z, n_0 = 1, n_i = z). V. RESULTS We will discuss the calculation of the local order parameters and then that of the ground state energy. Local staggered magnetic moments The QMC data in [3] give values of local order parameters. These can be defined in terms of the local energies around a site i, e_i = (1/2) Σ_j ⟨S_i · S_j⟩, where the sum is over all the nearest neighbors of a given site i, and the spin correlations were evaluated in the ground state. We have added a factor 1/2 per bond (that is, the bond energy is shared equally between the two sites at each end). The local order parameters are defined by [28] m^num_loc,s = e_i/z. (15) It is the quantity e_i that we now wish to calculate. The inflation symmetry of the quasiperiodic system allows us to define clusters on length scales that increase as powers of λ². We would like a relation between the local energies e_i and the cluster energies, denoted E^(n)(z), evaluated as a function of z for bigger and bigger cluster size as n increases. The energy per site for a cluster of the ith type tends to a certain value in the infinite size limit. We propose that this limiting value coincides with the local energies calculated by the QMC. This is based on the expectation that there is a fixed point distribution for cluster energies, like the one found for the block spins, and for the averaged couplings. The number of terms contributing to the cluster energy is governed by the largest eigenvalue of the block proliferation matrix P, so that E^(n)/7^n tends to a limit as n → ∞. It is this quantity that corresponds to the numerically evaluated local energies. With this assumption, the local order parameters at every stage of RG are found from m^(n)_loc,s = E^(n)(z)/(7^n z). (16) We now describe how to calculate the cluster energies at each stage of RG. Zeroth order calculation The zeroth approximation was obtained in [3], the energies of the clusters at this order being easily calculated using Eq.6 for each of the values of z, e^(0) = ǫ^(0). The values obtained are {e^(0)_A, ..., e^(0)_F} = {−2.50, −2.25, −2.00, −1.75, −1.75, −1.50, −1.25} (17) in units of J. The staggered moments corresponding to these energies are a simple function of z, m^(0)_loc,s(z) = |e^(0)(z)|/z = (z + 2)/(4z). (18) This function is plotted in Fig.11a (dashed line). In accord with the qualitative trend of the QMC data, it shows that m_loc,s decreases with increasing z. With each additional bond, the central spin enters into a resonant state with more and more neighboring spins, with the result that for each individual bond there is less amplitude for formation of a singlet. A. First order calculation The seven averaged couplings at this order have the numerical values {ĵ_A, ..., ĵ_F} = {0.14, 0.13, 0.12, 0.10, 0.16, 0.24, 0.29}. (19) These averaged couplings are used in the calculation of the ground state energy at each of the new clusters. This is done using Eq.7, along with the block spin values for the center and three surrounding blocks deduced from Eq.5. The first order Heisenberg star energies ǫ^(1)_i for each of the seven types of site are thus obtained (Eq. 20). The energy of a cluster at first order, denoted E^(1), includes this Heisenberg star energy and all zero order diagonal terms of the sites belonging to the cluster. These first order energies of the clusters can be expressed as follows: E^(1)_i = ǫ^(1)_i + ǫ^(0)_anc(i) + (1/2) Σ_{j=1..z} ǫ^(0)_anc(j), (21) where j = 1, .., z are the nearest neighbor sites of i, and anc(i) denotes the ancestor of site i.
This definition takes into account the first order star cluster energy for the cluster i plus the zero energy term for the center site, plus one-half the zero energy terms for the surrounding sites. To illustrate with an example: consider an A site on the inflated tiling, with eight nearest neighbor F sites around it. The zero order energy term for an A site is the block spin energy of its ancestor A site, namely, ǫ (1) Consider another example of an F-site which has three neighbors, say an A site and two E sites. The F site arises from a D 1 site. The zero order block energy associated with it is therefore ǫ C , to the total F-cluster energy. The total energy of the F-cluster is found by adding four zeroth order terms plus the HS energy for F sites, which have a first-order coupling  (1) F . Other cluster energies can be similarly obtained, and are listed below. B. Second order calculation and higher orders For n = 2, the energies of the seven clusters for the twice-inflated tiling can be written out in terms of the energies ǫ (k) (z) (k = 0, 1, 2). It is easy to obtain the explicit expressions since it suffices to increase all the superscripts in Eq.22 by one (so for example the ǫ (1) i become ǫ (2) i ). The zero order energy terms are also easily obtained from the preceding order zero energy terms by use of the proliferation matrix P defined in Eq.8. We give the F cluster energy to this order, as an example: (1) At third order, proceeding similarly, there will be a term in ǫ (3) F , four terms in ǫ (2) , and a certain number of terms in ǫ (1) and ǫ (0) . The number of blocks of each type can be found easily using the proliferation matrix to determine the number of ancestors of each type of block. In Fig.11a we have compared the m s obtained after zero (the dashed curve) with the results at one and two RG steps (open circles and squares). After the second step, the values of m s converge quickly as can be seen in Fig.11b which shows the third (circles) and fourth order (squares) results along with the QMC data, m (num) loc,s . C. Predictions for the full octagonal tiling The limiting values of m loc,s are clearly below the QMC data. This is to be expected, due to the bond dilution. One has to correct for the effect of the appreciable bond dilution occurring at C and D sites in order to obtain an estimate of the energy of the undiluted octagonal tiling. On the one hand, the bond dilution leads to having fewer energy terms in the Hamiltonian and consequently underestimating the cluster energies. On the other hand, the loss of bonds is partly offset by the fact that the dilution tends also to suppress frustration and raise the local order parameter. An ad-hoc way to put back the "missing bond-energies" is to add in half of the missing link energies at each of the C and D sites. This is easily done here by adjusting thez values at each of the sites,z C goes up from 5 to 5.5 whilez D1 is increased from 3 to 4. Using this ad-hoc procedure we can get estimates for m s values on the original octagonal tiling. The grey squares of Fig.11c were obtained by adjusting the n = 4 data in this way. As the figure shows, this procedure yields a fairly good agreement with the QMC data. The same procedure is used to obtain the ground state energy estimate of the full octagonal tiling in the next section. Ground state energy The ground state energy E 0 is the sum over all blocks at all orders, of the block energies. At zero order the number of blocks of z-spins N (0) (z) = N f i ( i.e. 
proportional to the original frequencies of occurrence given in Eq.1). The density of vertices decreases with each inflation as λ², so that N^(n)(z) = N f_z/λ^(2n). The block energies ǫ^(n) are the energies of blocks with a spin S^(n)_0 at the center, with effective couplings ĵ^(n) to the S^(n)_i surrounding spins. The series for the energy gives e_0 ≈ −0.51. We can estimate the effect of bond dilution, as was done for the local order parameters. Using the corrected values of z̃ explained in the last section, one finds an adjusted ground state energy of about −0.59. This value of the GS energy is significantly smaller in absolute value than the value deduced from the QMC data in [3]. We recall that this was true of the square lattice calculation as well. In that case, the RG calculation of Sierra and Martin-Delgado was already noted in [10] to underestimate the bonding energies of pairs of spins because of the inadequacy of first order perturbation theory around the Neel state. The same is presumably true of our RG on the octagonal tiling. For the former case the RG calculation was compared with the terms of a 1/S expansion of the ground state energy, and shown to lack the subleading order term, resulting in the observed discrepancy of values. On the square lattice, e_0 has been determined numerically [25] to high precision to be −0.6694, while finite size scaling for the tiling [29] obtains a value of −0.6581. The closeness of the values obtained for these two very different problems is rather surprising. It is probable that this close proximity of values is due to the fact that the octagonal tiling, with its two-sublattice structure and its average coordination number of 4, closely resembles the square lattice. The differences must arise from the next nearest neighbor distributions, which differ for the two systems, although this remains to be verified by explicit calculation. VI. DISCUSSION AND CONCLUSIONS In conclusion, we have presented an approximate RG scheme for ground state properties of a two-dimensional quasiperiodic tiling that can be solved after bond dilution. Other approximations involve the truncation of the number of distinct sites and the number of distinct links, and replacing local couplings around sites by average values in order to simplify the effective Hamiltonian after every inflation. The results obtained for the diluted tiling were used to get estimates for the undiluted tiling. Despite these approximations, we believe the model solved is close to the perfect two dimensional quasiperiodic structure, and it allows for a rather detailed solution of real space properties of these hierarchical structures. The results obtained by RG for local order parameters are close to those calculated for the full undiluted model, after our adjustment procedure. It thus appears that the model takes into account the most relevant aspects of the quasiperiodic geometry of the octagonal tiling. The RG method presented is less good at obtaining the ground state energy, similar to the situation already noted for the square lattice by Sierra and Martin-Delgado, who showed that a better result is obtained by going to second order of perturbation theory to obtain the effective Hamiltonian after renormalization. Concerning the proximity of values of the ground state energy in these two systems, our calculation is not accurate enough to explain this observation.
A calculation to higher order would involve further nearest neighbor sites, improve the energy estimate and perhaps help to explain the small energy difference between the tiling and the square lattice. It would be interesting as well to compare results for other bipartite two dimensional tilings, including the Penrose tiling. The zero temperature magnetic state of this quasiperiodic Heisenberg antiferromagnet has a structure factor with peaks that can be indexed using the four dimensional indexing scheme (see Appendix). The positions of the peaks are very simply related to the positions of the peaks of the paramagnetic state: they are situated halfway in between. In other words, the paramagnet is indexed by four integers, while the antiferromagnet has half-integer entries, corresponding to the antiferromagnetic vector q = {1/2, 1/2, 1/2, 1/2}. This is the quasiperiodic analogue of the square lattice, where just such a shift occurs in reciprocal space and corresponds to the antiferromagnetic vector q = {1/2, 1/2} (see [30] for a discussion along with a simple one dimensional version of a quasiperiodic antiferromagnet). The real life quasiperiodic compound ZnMgHo was studied by neutron scattering and shown to have short range antiferromagnetic correlations below about 20 K. These correlations lead to a magnetic superstructure that is, as for our two dimensional model, shifted with respect to the paramagnetic state. The antiferromagnetic vector that best fits the data has a more complicated value than the simplest form for a 3d quasiperiodic antiferromagnet (q_i = 1/2, i = 1, ..., 6). This is because the magnetic unit cell is much larger for the three-component system, due to the fact that only the Ho sites carry a magnetic moment, resulting in smaller spacings between peaks in reciprocal space. Finally, the RG scheme presented here can be adapted to discuss other discrete quasiperiodic models, such as tight-binding models for electrons hopping between vertices of the tiling. It should provide a useful theoretical framework for describing quasiperiodic tilings in general. APPENDIX. THE CUT-AND-PROJECT METHOD. One dimensional example The cut-and-project method of obtaining quasiperiodic tilings is easiest to illustrate in the case of the celebrated one-dimensional tiling - the Fibonacci chain (see Luck's review in [22]). The Fibonacci chain comprises two basic tiles or line segments of two different lengths, "long" (L) and "short" (S), arranged in a deterministic sequence. The Fibonacci chain can be generated iteratively from a single S segment using the following substitution rules: replace each S by an L, and each L by SL. Two successive segments of the infinite chain are shown to illustrate the substitution rules (dashed lines represent L, thick lines S). The Fibonacci sequence of segments, or tiles, can be generated by projecting selected edges of a two-dimensional square lattice onto the one dimensional "physical space" E_1, as shown in Fig.13. The vertical and horizontal edges project onto the S and the L tiles respectively. The orientation of E_1 is given by tan^-1(1/τ) (where τ = (√5 + 1)/2 is the golden mean, a solution of τ² − τ − 1 = 0), an irrational slope, so the tile sequence never repeats. The edges selected for projection onto E_1 obey the following condition: the projection of the edge onto the perpendicular space E_2 must fall within the "window of selection" W (indicated by the thick line segment representing the projection of the unit square shown in grey).
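The two constructions described in this appendix, the substitution rule and the cut-and-project selection, can be compared directly in a few lines of code. The sketch below is illustrative only: it generates one Fibonacci word by the substitution S → L, L → SL, and a second word from a standard closed form equivalent to the strip selection, in which tile n is S exactly when ⌊(n+1)a⌋ − ⌊na⌋ = 1 with a = 1/τ² (the density of S tiles). Both words contain the twelve-tile stretch quoted in the next paragraph, and the ratio of L to S tiles approaches τ.

```python
# Sketch: two independent constructions of the Fibonacci chain.
# (1) the substitution rule quoted above (S -> L, L -> SL);
# (2) a standard closed (Beatty-sequence) form for the cut-and-project selection.
from math import floor, sqrt

tau = (1 + sqrt(5)) / 2

def fibonacci_by_substitution(generations=12, seed="S"):
    word = seed
    for _ in range(generations):
        word = "".join({"S": "L", "L": "SL"}[c] for c in word)
    return word

def fibonacci_by_projection(n_tiles=250):
    a = 1 / tau**2                       # density of S tiles
    return "".join("S" if floor((n + 1) * a) - floor(n * a) else "L" for n in range(n_tiles))

w_sub, w_proj = fibonacci_by_substitution(), fibonacci_by_projection()
motif = "LSLLSLLSLSLL"                   # the finite stretch quoted in the text
print(motif in w_sub, motif in w_proj)   # True True
print(w_sub.count("L") / w_sub.count("S"))  # ~ tau = 1.618...
```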
A finite sequence of twelve edges satisfying this condition is shown in bold in Fig.13; it results in the projected structure ...LSLLSLLSLSLL... Two dimensional case In analogy with the one dimensional case, the octagonal tiling is obtained from the projection onto E (the physical two-dimensional space) of a subset of vertices of a four-dimensional cubic lattice. The subspaces E1, E2 are now two-dimensional, and are invariant under eightfold rotations in the four dimensional space. The orientation of the physical plane E is given by the number λ = 1 + √2, one of the solutions of λ^2 − 2λ − 1 = 0. The tiles, which are projections in this plane of the 8 faces of the 4d cube, are squares and 45° rhombuses. The vertices (and edges) that are selected for projection must fall, in the two dimensional perpendicular space E2, within the window of selection shown in Fig.14. This octagon-shaped area is delimited by the projection of the sides of a four dimensional unit cube. The octagonal tiling has by construction the eight-fold symmetry in the weak sense already described. There are six different kinds of nearest neighbor configurations for vertices of the tiling, denoted A through F as shown in Fig.2. B. Inflations and deflations For the octagonal tiling, the inflation transformation is given by a 4 × 4 matrix U acting in the four dimensional cubic lattice, satisfying U^2 − 2U − 1 = 0 and having entries of 0 or 1 only. Projecting the subset of points selected by U leads to a bigger tiling of the same type as the original one. Only the highest-z sites remain selected, while the others disappear. The sites that remain are those within the middle octagon, the α family: A, B, C and D1. The sites that disappear correspond to the region outside the middle octagon. The perpendicular space representation also allows one to determine rapidly the new z values of the sites that remain: one simply redraws the acceptance domains of Fig.14 after rescaling, inside the middle octagon. Thus a point that was previously in the D1 domain will find itself in the F domain, and the C domains map into E domains. A sites remain A sites if they are close to the center of the diagram; otherwise they become one of the other α sites after inflation. The four categories of A sites mentioned in section II.3 differ by their distance from the origin in perpendicular space. The perpendicular space projection of a site determines its evolution under inflation: the closer a site is to the center of the octagonal selection window, the longer it remains an A site under successive inflations. Also, one clearly sees that D-sites come in two types, with different perpendicular space domains. The following table summarizes the old and new site types after inflation:
A → A or B or C or D1
B → D2
C → E
D1 → F
The number of sites per unit area is reduced by the scale factor λ^2 after each inflation, and the relative frequencies of occurrence of each of the seven families of sites are invariant. The frequency of occurrence of the ith family is proportional to the area occupied by that family in the perpendicular space projection. It can thus be easily verified using Fig.14 that these frequencies are: f_A = λ^-4; f_B = λ^-5; f_C = 2λ^-4; f_D1 = λ^-3; f_E = 2λ^-2; f_F = λ^-2. The average coordination number is exactly 4, as can be checked using the frequencies given. The interested reader can find these and other important geometric and algebraic properties of the system described for example in [23].
C. Reciprocal space and structure factor The diffraction peaks of the octagonal tiling are found at positions given by projections into 2d of reciprocal lattice vectors a_i of the 4d cubic structure. The intensities of the peaks are not uniform, however, but depend on the Fourier transform (FT) of the finite selection window. The main features of the diffraction pattern are thus (see Belin et al in [2] for more on the topic):
• The peaks have an eight-fold symmetry around the peak at the origin.
• Peaks occur at positions corresponding to the set of integers h, k, m, n representing the projection into the 2d plane of the 4d vector q = h a_1 + k a_2 + m a_3 + n a_4. That is, one can index peaks by a set of four integers.
• Intensities are highly dependent on the value of q, since the FT of the selection window is oscillatory and long ranged.
The set of eight most intense peaks nearest the origin is used to define a quasi Brillouin zone for the tiling. D. Approximants and some of their properties Numerical studies of quasiperiodic systems are performed on finite pieces of the infinite system. In particular, it has been pointed out that periodic boundary conditions are preferable to open or closed boundary conditions in terms of eliminating spurious states and eigenvalues. A periodic approximant is a structure that can be periodically continued and can be augmented in size so as to approach arbitrarily close to the perfect infinite structure. This is, again, most easily illustrated by going back to the Fibonacci chain. In the cut and project technique, it should be clear that if one tilts the irrationally oriented selection strip away from the special angle, one will obtain a periodically repeating chain every time the slope is rational. τ^-1 has a series of approximants given in terms of the Fibonacci numbers as follows: {α_1, α_2, ...} = {1, 1/2, 2/3, 3/5, ..., F_k/F_{k+1}, ...}, where F_k is the kth term of the Fibonacci sequence defined by the recurrence relation F_{k+1} = F_k + F_{k-1} with F_0 = F_1 = 1. By increasing the value of the denominator of the rational number F_n/F_{n+1} (i.e., by choosing increasingly longer approximants of the golden mean), one will get a structure of period F_{n+2}. The finite sequences of L and S within the approximant are the same as those found in an infinitely long chain. For the two dimensional case, Ref. [27] describes how to obtain square approximants to the octagonal tiling by the projection method. These are obtained from the approximants to the silver mean, which depend on ratios of the so-called Octonacci sequence, defined by O_{k+1} = 2O_k + O_{k-1} with O_1 = 1 and O_2 = 2. These are the finite size systems used for a number of numerical studies, including the quantum Monte Carlo calculations. The first few square approximants have the following sizes:
k      2      3      4      5
N_k    239    1393   8119   47321
We will list some features of these approximants that may be important to bear in mind depending on the models studied.
• Reflection symmetry (exact) with respect to the bottom left-top right diagonal.
• 90° rotation symmetry around the center (approximate).
• Odd parity of repetition. By this is meant that one changes sublattice when one goes from a site to its first periodic repetition along either the x or y direction. For a number of numerical calculations it is easiest to restore the bipartite property by taking a system size doubled along both directions (i.e., a quadrupled unit cell with respect to the sizes given in the Table).
• Inflation relation between approximants. Fig.15 shows a small approximant superimposed on the next largest one.
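For readers who want to reproduce the one-dimensional construction described above, the short Python sketch below (our own illustration, not part of the original calculation) generates the Fibonacci chain by the substitution rule S -> L, L -> SL and lists the rational approximants F_k/F_{k+1} of 1/τ; the chain lengths grow as Fibonacci numbers, as stated in the text.

# Fibonacci chain by substitution (S -> L, L -> SL) and approximants of 1/tau.

def inflate(chain):
    """Apply the substitution rule once: S -> L, L -> SL."""
    out = []
    for tile in chain:
        out.append("L" if tile == "S" else "SL")
    return "".join(out)

def fibonacci(n):
    """Return the list [F_0, ..., F_n] with F_0 = F_1 = 1 and F_{k+1} = F_k + F_{k-1}."""
    fib = [1, 1]
    while len(fib) <= n:
        fib.append(fib[-1] + fib[-2])
    return fib

if __name__ == "__main__":
    tau = (5 ** 0.5 + 1) / 2          # golden mean

    chain = "S"
    for _ in range(8):                # successive inflations: lengths 1, 1, 2, 3, 5, 8, ... (Fibonacci)
        chain = inflate(chain)
    print(chain, "  length =", len(chain))

    fib = fibonacci(10)
    for k in range(8):                # approximants 1, 1/2, 2/3, 3/5, ... converge to 1/tau
        alpha = fib[k] / fib[k + 1]
        print(f"F_{k}/F_{k+1} = {fib[k]}/{fib[k+1]} = {alpha:.6f}   (1/tau = {1 / tau:.6f})")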
2019-04-14T02:06:31.287Z
2004-09-28T00:00:00.000
{ "year": 2004, "sha1": "575acd964722704eac4c668eb320380f63d6c48f", "oa_license": null, "oa_url": "http://arxiv.org/pdf/cond-mat/0409711", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "575acd964722704eac4c668eb320380f63d6c48f", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
2906431
pes2o/s2orc
v3-fos-license
Alert: Severe cases and deaths associated with Chikungunya in Brazil Since the detection of the Chikungunya virus in America in 2013, two million cases of the disease have been notified worldwide. Severe cases and deaths related to Chikungunya have been reported in India and on Reunion Island, estimated at 1 death per 1,000 cases. Joint involvement in the acute and chronic phase is the main clinical manifestation associated with Chikungunya. The severity of the infection may be directly attributable to viral action or indirectly attributable, owing to decompensation of preexisting comorbidities. In Brazil, the virus was identified in 2014, and recently there has been a significant increase in the number of deaths caused by Chikungunya virus infection, especially in Pernambuco. However, the numbers of fatalities are probably underreported, since for many cases the diagnosis of Chikungunya infection may not be considered for deaths from indirect causes. An increase in the mortality rate within the months of epidemic occurrence, compared to previous years, has also been reported and may be associated with Chikungunya virus infection. An in-depth investigation of reported mortality in Brazil is necessary to measure the actual impact of the deaths, thereby allowing the identification of possible causes. This will alert professionals about the risks and, hence, enable the creation of protocols that target reducing mortality. INTRODUCTION The Chikungunya virus (CHIKV) is a single-stranded ribonucleic acid (RNA) virus of the family Togaviridae and genus Alphavirus, with three subtypes (two African and one Asian). Aedes aegypti and Aedes albopictus are the main vectors involved in the transmission of CHIKV. Although the first reported outbreak associated with Chikungunya occurred in 1952 in Tanzania, it was not until after 2005 that a large number of cases emerged in the Indian Ocean islands. The most extensive outbreak occurred on Reunion Island in 2005, hitting one third of the population, with 266,076 reports and 237 deaths attributed directly to CHIKV infection. In 2013, the virus struck the Western hemisphere, initially on the Caribbean island of Saint Martin, and spread quickly until June 2014, hitting 20 other countries in the Caribbean and South and Central America, with over 400,000 reports 1,2 . In Brazil, the first cases were reported in the second half of 2014 in the cities of Oiapoque and Amapá, in the North, and in Feira de Santana, Bahia, in the Northeastern region. The epidemic only hit other Northeast states in the second half of 2015. In 2015, 20,598 cases were reported, and in 2016, up to the 32nd epidemiological week, there were 216,102 notifications in the entire country. Of those, 189,814 (87.8%) occurred in the Northeast of the country 3 . In the Northeast, Pernambuco was one of the states with the highest numbers of reports, with 53,601 suspected cases up to the 36th epidemiological week. The disease presents high attack rates, and 75-95% of individuals infected by CHIKV are symptomatic, in contrast to infections with other arboviruses. Attack rates have reached 35-75% of a population in a single epidemic 2,4 . During the outbreak on Reunion Island, surveillance figures from a serosurvey confirmed 16,050 cases and estimated 244,000 reports, which corresponded to a 35% attack rate 5 .
Chikungunya presents with articular pain as the main clinical manifestation at different stages of the disease, and it is an important cause of physical incapacity, significantly impacting the quality of life of those affected [6][7][8] . After the acute phase, which lasts about 10 days, studies show that 40-80% of patients can evolve chronically, with articular manifestations persisting for months or even years. A prospective study by Schilte et al, on Reunion Island, reported that 69% of patients persisted with arthralgia after 36 months 8 . In a recent meta-analysis, the global prevalence of chronification was 40% (27.7-50.5%), and for studies with more than 18 months of follow-up the prevalence was 32% 9 . DEATHS RELATED TO CHIKUNGUNYA Besides the incapacitating articular pain, severe cases and deaths related to Chikungunya have been reported. In the 2005-2006 epidemic on Reunion Island, in a population of approximately 800,000 inhabitants, 244,000 cases of Chikungunya were estimated and 203 deaths were confirmed, a proportion of 1 death for each 1,000 notified cases and a global mortality of 25/100,000 inhabitants 5 . The most commonly affected were the elderly, with an average age of 79 years. Of these deaths, 121 (60%) were caused either directly by the infection or indirectly, mainly through decompensation of previous comorbidities. The remaining severe cases reported the following as main reasons for hospitalization: respiratory failure (19 cases), cardiovascular decompensation (18), meningoencephalitis (16), severe hepatitis (11), major cutaneous lesions (10), and renal insufficiency (7), among others 5 .
A similar study performed in Ahmedabad city compared the average number of deaths from all causes during 2002-2005 with that of 2006, the year in which the city was struck by a Chikungunya outbreak. Accounting for population growth, 3,506 excess deaths occurred above the expected number for that year. Of these, 2,994 occurred between the months of August and November alone, especially in September, in which 41% of the excess deaths (1,448) occurred, corresponding to the peak of the Chikungunya epidemic. Mortality exceeded the expected rate by 57% within those months 12 . The city had a population of around 6 million inhabitants, and officially the government registered only 66,777 cases of Chikungunya and 10 confirmed deaths. These initial data correspond to an attack rate of 1.1% and a proportion of one death for each 6,677 symptomatic infections, reflecting a flaw in the reporting system and in the investigation of deaths 12 . Two other studies, one on Mauritius Island (with a population of 1.2 million) and another in Port Blair City, India (with 136,000 inhabitants), also observed 742 and 72 surplus deaths, respectively, during the peak months of a Chikungunya epidemic 13,14 . In Port Blair, the study also evaluated the two years following the CHIKV epidemic (2007-2008); it showed that the numbers of fatalities had decreased and that no Chikungunya epidemic was identified during these years 13 . An editorial published by Mavalankar D in 2007 cautioned about the severity of the epidemic in India, suggesting an increase in death rates that was noticed by those providing care but not captured in government records, which was attributed to the limited surveillance and death-investigation system in the country. The author notes that 1,391,165 cases were officially notified, among which 1,194 deaths occurred; however, the author considers that the true number of cases is probably about five times higher, around 6.5 million, with approximately 6,389 deaths in an intermediate estimate and 19,168 deaths (3/1,000) in a higher estimate. Considering those numbers, the lethality could have been up to three times that reported on Reunion Island (1/1,000) 15 . A potentially larger number of deaths related to Chikungunya than reported by the official systems can be linked not only to flaws in the surveillance system for case notification and death investigation, but also to Chikungunya infection going unregistered on the death certificates filled in by health professionals. Many deaths related to the infection are due to decompensation of comorbidities, including previous cardiac, renal, or pulmonary diseases, which may be registered on the official death certificate without reference to CHIKV, especially in first-outbreak situations in regions without previous experience with the disease. Besides that, the diagnosis of CHIKV infection may not be considered in deaths from neurological involvement or pneumonitis in young patients or in those without comorbidities.
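As a quick check of the Ahmedabad figures quoted above, the following Python sketch (our own arithmetic, using only the numbers given in the text) reproduces the reported attack rate, the number of notified cases per confirmed death, and the September share of the excess deaths.

# Ahmedabad 2006: population, notified cases, official deaths and excess deaths as quoted above.
population       = 6_000_000
notified_cases   = 66_777
official_deaths  = 10
excess_deaths    = 3_506          # above the 2002-2005 baseline
september_excess = 1_448

attack_rate     = notified_cases / population              # ~1.1% of the population
cases_per_death = notified_cases // official_deaths        # ~1 death per 6,677 notified cases
september_share = september_excess / excess_deaths         # ~41% of the excess deaths

print(f"attack rate               : {attack_rate:.1%}")
print(f"notified cases per death  : {cases_per_death:,}")
print(f"September share of excess : {september_share:.0%}")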
A difficulty in the identification of deaths associated with arboviruses was recently reported in a publication by Cavalcanti L et al. The authors evaluated deaths in a reference death-verification service and detected 90 deaths from dengue that had not been suspected during disease progression. These cases had been referred to the service with other diagnoses, and the pathologists identified dengue as the cause of death. This excess of unsuspected and unidentified deaths for a disease that has existed in the country for 30 years, such as dengue, reinforces the need for caution and for investigating surplus deaths in relation to Chikungunya, which has a pattern associated with a severe form of the disease 16 . CHIKUNGUNYA X DENGUE: COMPARING THE SEVERITY Brazil has recorded over 80% of the cases of dengue in Latin America 17 . Over the last three decades, with outbreaks occurring since 1982, around 10,906,332 cases were notified in Brazil by the Ministry of Health in the last 25 years (1990 to 2015), with 5,224 deaths, a lethality rate of 0.5/1,000 cases. In 2015, the year with the largest number of notifications in history, there were 1,649,008 cases, with 863 deaths, suggesting the same lethality rate and a mortality rate of 0.04/100,000 inhabitants 3 (Table 2). In Pernambuco, the second most populated state of Northeast Brazil, with nearly 9 million inhabitants, there were 556,220 officially notified cases and 260 deaths between 1990 and 2015 (lethality rate 0.5/1,000). The year 2013 recorded the highest number of deaths in the state, totaling 37 (Table 2). Up to the 36th epidemiological week, 53,601 suspected cases of Chikungunya had been notified in Pernambuco. Among the 313 registered deaths due to arboviruses, 88 were confirmed, with the remainder under investigation. Seventy cases (79.5%) were positive for Chikungunya: fifty-three (60.2%) were isolated CHIKV infections and the other 17 (19.3%) were confirmed by laboratory results as co-infections. Dengue alone was identified in 18 (20.5%) cases 18 . These numbers already exceed the number of previously reported deaths due to arboviruses in the state, indicating a more severe pattern in this outbreak. Considering that 53,601 suspected cases of Chikungunya had been notified by that epidemiological week, this corresponds to a lethality rate of 1.3/1,000. The first confirmed arbovirus death in 2016 was due to Chikungunya, recorded in the bulletin of epidemiological week 10, with another 95 deaths notified for investigation 19 . During the epidemic, the number of fatalities increased and the surveillance investigation was expanded, identifying, by epidemiological week 26, a total of 26 (86.6%) deaths owing to Chikungunya and 7 owing to dengue, with another 240 cases under investigation 20 . Based on this tendency, it is possible to estimate that, by the end of the investigation of all 313 cases notified in 2016, more than 70% will be associated with Chikungunya, which would represent 219 deaths, a number close to the total of dengue fatalities in the state over the last 25 years. Considering the 53,601 suspected cases of Chikungunya notified by that epidemiological week, the lethality would rise to 4.1/1,000.
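The Pernambuco projection above can be written out explicitly; the sketch below (our own arithmetic, with all inputs taken from the surveillance figures quoted in the text) reproduces the 79.5% CHIKV share, the 219 projected deaths, and the rise in lethality from about 1.3 to about 4.1 per 1,000 suspected cases.

# Pernambuco, 2016 (epidemiological week 36): figures as quoted above.
confirmed_deaths = 88
chikv_positive   = 70
notified_deaths  = 313            # deaths registered as due to arboviruses
suspected_cases  = 53_601

share_chikv       = chikv_positive / confirmed_deaths               # ~79.5%
current_lethality = 1000 * chikv_positive / suspected_cases         # ~1.3 per 1,000

projected_chikv_deaths = round(0.70 * notified_deaths)               # ~219 deaths
projected_lethality    = 1000 * projected_chikv_deaths / suspected_cases   # ~4.1 per 1,000

print(f"CHIKV share of confirmed deaths : {share_chikv:.1%}")
print(f"current lethality               : {current_lethality:.1f} per 1,000")
print(f"projected CHIKV deaths          : {projected_chikv_deaths}")
print(f"projected lethality             : {projected_lethality:.1f} per 1,000")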
ESTIMATIONS FOR BRAZIL The combination of high attack rates and the lethality described on Reunion Island, when projected onto tropical countries with large populations, has the potential to generate a high absolute number of fatalities, even without considering underreporting or excess deaths potentially related to CHIKV that go unnoticed by care providers. In Brazil, a country of continental dimensions, these numbers can assume alarming proportions. When comparing with dengue, it is pertinent to ask whether the greater number of deaths from Chikungunya is due to a higher virulence of the virus or is a reflection of the high attack rates. Given the inaccuracy of the numbers available in Brazil, the analysis can be made for different scenarios. Considering the most conservative data available in the literature, the lethality of Chikungunya (1/1,000) is higher than that of dengue (0.5/1,000), which in itself reflects higher severity. Beyond that, what really causes an impact is the large absolute number of deaths expected in an epidemic, especially in larger populations, as a reflection of the high attack rates and the high proportion of symptomatic infections in CHIKV epidemics, which generate a higher number of cases of the disease and, consequently, more deaths in absolute numbers. Dengue has estimated attack rates of 3-25%, with only 5-25% of infected patients being symptomatic [21][22][23] . In Chikungunya epidemics, attack rates can reach 35-75% of the population, with 75-95% of the individuals infected by Chikungunya presenting with the symptomatic form 2,4,5 . Therefore, when deaths are estimated from these variables, the absolute numbers of expected deaths are much larger for Chikungunya than for dengue (Table 3). In Brazil, with 200 million inhabitants, the epidemic has reached the various states of the federation in different years, as was the case for dengue; over the years this would amount to 60 million cases (attack rate: 30%) and an estimated 60,000 deaths (Table 3). Thus, even if the relationship between cases and deaths is only that described for Reunion Island, this fact by itself is worrying, especially for developing countries with large populations and the fragilities of an already overloaded health system. The scenario may be even more worrying, with higher numbers of deaths, if we consider the possibility that part of these cases are not notified and therefore not registered by the official system, owing to underreporting. An estimation based on the studies in which surplus deaths were potentially related to Chikungunya would drastically alter these variables and the lethality rate. CONCLUSION Severe cases and deaths attributable to Chikungunya have been reported in different countries, presenting high lethality rates initially described as a proportion of 1 death per 1,000 cases of Chikungunya; however, these numbers may be underestimated, owing to the fact that some deaths are not being identified as Chikungunya-related and are therefore not included in the official records.
Moreover, even before future adjustments to the lethality rates that may result from more accurate analyses, those numbers are already alarming. Given the high attack rates and the large proportion of symptomatic cases in Chikungunya outbreaks, countries like Brazil, with continental dimensions and large populations, may have substantial absolute numbers of deaths, which can even exceed those related to other important arboviruses, such as dengue. An in-depth and urgent investigation of the deaths now being observed in Brazil is necessary to measure their actual impact, to identify their possible causes, to alert professionals about the risks, and to create protocols that target reducing mortality. TABLE 1 Estimation of lethality rates based on excess deaths potentially related to CHIKV infection. TABLE 2 Official lethality rates for dengue epidemics in Brazil and in Pernambuco. TABLE 3 Estimated numbers of deaths in dengue and Chikungunya epidemics, based on historical lethality rates and the percentage of symptomatic cases for each type of infection.
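To make the Table 3-style estimates concrete, the sketch below (our own illustration; parameter values follow the ranges and the Brazilian example quoted in the text, and the dengue line is just one point within the quoted ranges) turns a population, an attack rate, the share of symptomatic infections, and a historical lethality into expected symptomatic cases and deaths.

# Scenario estimate in the spirit of Table 3.
def expected_deaths(population, attack_rate, symptomatic_share, lethality_per_1000):
    symptomatic_cases = population * attack_rate * symptomatic_share
    return symptomatic_cases, symptomatic_cases * lethality_per_1000 / 1000

brazil = 200_000_000

# Chikungunya: the text's round example treats 30% of the population as symptomatic cases,
# with a historical lethality of ~1 death per 1,000 cases.
cases, deaths = expected_deaths(brazil, attack_rate=0.30, symptomatic_share=1.0, lethality_per_1000=1.0)
print(f"CHIKV (text example) : {cases / 1e6:.0f} million cases, {deaths:,.0f} deaths")

# Dengue: attack rate 3-25%, 5-25% symptomatic, lethality ~0.5/1,000 (upper end of the quoted ranges).
cases, deaths = expected_deaths(brazil, attack_rate=0.25, symptomatic_share=0.25, lethality_per_1000=0.5)
print(f"Dengue (upper range) : {cases / 1e6:.1f} million cases, {deaths:,.0f} deaths")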
2018-04-03T03:23:48.181Z
2017-09-01T00:00:00.000
{ "year": 2017, "sha1": "a8fab015e20ca52d626511d62577fcc0ba0976fa", "oa_license": "CCBY", "oa_url": "http://www.scielo.br/pdf/rsbmt/v50n5/1678-9849-rsbmt-50-05-585.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "e2209fe38a38fba76e1681bc179ba4e96fd7b474", "s2fieldsofstudy": [ "Medicine", "Environmental Science" ], "extfieldsofstudy": [ "Medicine" ] }
62880946
pes2o/s2orc
v3-fos-license
MiR-451a suppressing BAP31 can inhibit proliferation and increase apoptosis through inducing ER stress in colorectal cancer The global morbidity and mortality of colorectal cancer (CRC) are ranked the third among gastrointestinal tumors in the world. MiR-451a is associated with several types of cancer, including CRC. However, the roles and mechanisms of miR-451a in CRC have not been elucidated. BAP31 is a predicted target gene of miR-451a in our suppression subtractive hybridization library. Its relationship with miR-451a and function in CRC are unclear. We hypothesized that miR-451a could induce apoptosis through suppressing BAP31 in CRC. Immunohistochemistry and real-time PCR were used to measure BAP31 expressions in CRC tissues and pericarcinous tissues from 57 CRC patients and CRC cell lines. Dual-luciferase reporter assay was used to detect the binding of miR-451a to BAP31. The expression of BAP31 protein in CRC tissues was significantly higher than that in pericarcinous tissues, which was correlated with distant metastasis and advanced clinical stages of CRC patients. The expression of BAP31 was higher in HCT116, HT29, SW620, and DLD cells than that in the normal colonic epithelial cell line NCM460. The expression of BAP31 was absolutely down-regulated when over-expressing miR-451a in HCT116 and SW620 cells compared with control cells. Mir-451a inhibited the expression of BAP31 by binding to its 5’-UTR. Over-expressing miR-451a or silencing BAP31 suppressed the proliferation and apoptosis of CRC cells by increasing the expressions of endoplasmic reticulum stress (ERS)-associated proteins, including GRP78/BIP, BAX, and PERK/elF2α/ATF4/CHOP, which resulted in increased ERS, cytoplasmic calcium ion flowing, and apoptosis of CRC cells. These changes resulting from over-expressing miR-451a were reversed by over-expressing BAP31 with mutated miR-451a-binding sites. Over-expressing miR-451a or silencing BAP31 inhibited tumor growth by inducing ERS. The present study demonstrated that miR-451a can inhibit proliferation and increase apoptosis through inducing ERS by binding to the 5’-UTR of BAP31 in CRC. Introduction Colorectal cancer (CRC) is the third most common malignant gastrointestinal tumor in the world. Its mortality has increased from 694,000 in 2012 to 774,000 in 2015, with the increased death ratio being 11.53% 1 . With the improvement of people's living standards, the incidence and the mortality of CRC in China were both increased to the fifth among all cancers in 2011 2 . The current treatments for CRC include resection, radiotherapy, and chemotherapy. Chemotherapy can be used for patients at different clinical stages, but is not recommended for patients with poor general or organ functions. The recommended initiation of chemotherapy is within 8 weeks after surgery, and the time limit for chemotherapy should be not more than 6 months 3 . Although the response rate to systemic chemotherapy is less than 50%, drug resistance develops in nearly all patients 4,5 . So, there is an urgent need to explore new therapeutic targets for CRC to improve clinical efficacy. Micro RNA (microRNA; miRNA), consisting of about 21-23 nucleotides, is a eukaryotic ubiquitous endogenous small RNA. MiRNA gene is a highly conserved gene family, which is involved in multiple biological processes such as proliferation, apoptosis, and senescence. 
MicroRNA-451a (miR-451a) has been reported to be significantly down-regulated in chronic myeloid leukemia, glioma, non-small cell lung cancer, gastric cancer, and breast cancer. It can inhibit the proliferation, invasion, and metastasis of tumor cells, and increase the apoptosis and improve the therapeutic effects of radiotherapy and chemotherapy [6][7][8][9][10][11][12][13][14] . However, its role and target genes in CRC have not been elucidated, yet. Our previous report has demonstrated that the expression of miR-451a in CRC tissues was significantly down-regulated compared to pericarcinous tissues of 68 CRC patients. The expression of miR-451a was decreased in HCT116, SW620, HT29, SW480, and DLD cells compared with the normal colonic epithelial cell NCM460 15 . Therefore, we believed that miR-451a, as a tumor suppressor, plays an important role in the carcinogenesis of CRC. We also predicted seven potential target genes of miR-451a in CRC by our suppression subtractive hybridization method 15 . BAP31, one of our predicted target genes of miR-451a, located in the endoplasmic reticulum, is an important molecular chaperone protein 16,17 . As a carrier protein, BAP31 plays an important role in apoptosis [18][19][20] . The expression of BAP31 protein was dramatically upregulated in human malignant melanoma tumor tissues and human primary hepatocellular carcinoma when compared with normal tissues 21,22 . However, the roles of BAP31 in CRC remain unclear. Whether or not it is a target of miR-451a remains undetermined. In the present study, we aim to investigate the effects and mechanisms of BAP31 in CRC in vivo and in vitro, and how miR-451a regulates the expression of BAP31. Elevated BAP31 expression in CRC tissues was correlated with advanced clinical stage and distant metastasis To analyze the relative expression of BAP31 in CRC tissues with pericarcinous tissues, we performed real-time polymerase chain reaction (PCR) analysis in paired CRC and pericarcinous tissues in a cohort of 57 CRC patients. Results revealed that BAP31 mRNA expression in CRC tissues was significantly increased compared to matched pericarcinous tissues ( Fig. 1a; n = 57, fold change: 4.87, *P < 0.05). There were 44 cases with up-regulated BAP31, accounting for 77.19% ( Fig. 1a; 44/57). Immunohistochemical (IHC) staining results showed that BAP31 protein expression was significantly up-regulated in CRC tissues compared to matched pericarcinous tissues (Fig. 1b, c). Western blot analysis confirmed the IHC results (Fig. 1d, e; **P < 0.01). As shown in Table 1, the elevated BAP31 expression was significantly correlated with advanced clinical stages and distant metastasis of CRC (*P < 0.05), but not with other pathological parameters. Moreover, the upregulation of BAP31 was particularly obvious in clinical stage II and III CRC patients (Fig. 1f). There were 22 cases with up-regulated BAP31, accounting for 38.60% of the total cases, and 84.62% in 26 cases of stage II. There were 18 cases with up-regulated BAP31, accounting for 31.58% of the total cases and 85.71% in 21 cases of stage III (Table 1). MiRNA usually binds with the 3'-UTR of its target genes, but rarely with the open reading frame (ORF) and 5'-UTR. MiRanda and TargetScan, the widely used microRNA target-prediction programs, were applied to predict BAP31 as miR-451a's target. But we could not find any binding sites with miR-451a in BAP31 mRNA 3'-UTR. 
To elucidate the mechanisms of miR-451a downregulating BAP31, RNAHybrid program was used to predict miR-451a's binding sites of BAP31. Twenty binding sites were predicted by the program, among which three binding sites had the highest scores: BAP31 upstream 177, 148, and ORF 666 sites (Fig. 3a). They were cloned into pSicheck-2.0 vector. We also cloned the common mode of the 3'-UTR region of BAP31 into pSicheck-2.0 (Fig. 3c). These plasmids were co-transfected into 293T cells with pcDNA-miR451a. Firefly luciferase and renilla luciferase activity in each group was measured. Surprisingly, the renilla luciferase relative activity was decreased by 80.3, 30.1, 44.6, and 42.8% in vector BAP31 upstream 177 site, upstream 148 site, ORF 666 site sequence area, and BAP31 3'-UTR sequence area, respectively ( Fig. 3b; **P < 0.01). Four mutations were performed in the miR-451a-binding sequence, including Mut1: T to G (-175 bp), G to A Fig. 1 BAP31 expression was significantly elevated in CRC tissues compared to pericarcinous tissues. a The relative expression of BAP31 mRNA in CRC tissues of 57 cases compared to pericarcinous tissues as detected by real-time PCR (*P < 0.05). b, c The relative expression of BAP31 protein in CRC tissues compared to pericarcinous tissues as detected by immunohistochemistry (**P < 0.01; 1-6: six pairs of resected CRC specimens were randomly selected from stage II and stage III patients; samples 1-3 were in stage II, and samples 4-6 in stage III) and western blotting (**P < 0.01; N, pericarcinous tissues; C, colorectal cancer; 1-4: eight pairs of clinically resected CRC specimens randomly selected from stage II and stage III patients; samples 1-4 were in stage II, and samples 5-8 in stage III). d, e The relative expression of BAP31 protein in CRC tissues compared to pericarcinous tissues as detected by immunohistochemistry (scale bar, 50 μm; **P < 0.01). (f) The expression of BAP31 in CRC tissues of patients with different clinic stages: 1-3, stage I; 4-29, stage II; 30-50, stage III; 51-57, stage IV. CRC, colorectal cancer (-171 bp); Mut2: G to A (-166 bp); Mut3: G to T (-161 bp), T to G (-159 and -158 bp); and Mut4: deletions at -160, -159, and -158 bp (Fig. 3c). The four mutated sequences were cloned into pSicheck-2.0. These plasmids were also co-transfected into 293T cells with pcDNA-miR-451a. The renilla luciferase relative activity was decreased by 80.3, 15.89, 26.54, 7.54, and 8.43% in wild-type vector BAP31 upstream 177 site, mutation 1, mutation 2, mutation 3, and mutation 4, respectively ( Fig. 3d; **P < 0.01). The results demonstrated that miR-451a can bind to the BAP31 mRNA 5'-UTR 177 site and cannot bind to the mutated sequences, which also supported BAP31 mRNA as a putative target of miR-451a. Over-expressing miR-451a or silencing BAP31 inhibited the proliferation of CRC cells To further investigate the potential mechanism of down-regulation of BAP31 by over-expresseing miR-451a, we transfected pcDNA-miR-451a, pcDNA-NC, psilencer-BAP31, and control plasmid, pSilencer-Scr, into HCT116 and SW620 cells. To detect the efficiency of BAP31silencing plasmid, western blotting was used to detect the expression of BAP31 when pSilencer-BAP31 and negative control plasmid, pSilencer-Scr, were transfected into HCT16 and SW620 cells for 48 h. As shown in Fig. 4a, BAP31 protein expression was decreased to 63.9 and 47.9% by over-expressing miR-451a in HCT116 and SW620 cells, respectively (**P < 0.01). 
We further measured the proliferation of HCT116 and SW620 cells over-expressing miR-451a or silencing BAP31. To investigate whether over-expressing BAP31 with mutated miR-451a-binding sites can rescue the effect of overexpressing miR-451a, the full-length sequence of BAP31 was cloned and inserted into pcDNA-3.1 vector to obtain the construct pcDNA-BAP31. The miR-451a-binding sites were mutated (-161bp G to T, -159 bp and -158 bp T to G) to obtain the pcDNA-BAP31-Mut construct. HCT116 and SW620 cells were transfected with pcDNA-miR-451a, pcDNA-NC, psilencer-BAP31, pSilencer-Scr, and pcDNA-miR-451a + pcDNA-BAP31-Mut. As shown in Fig. 4b of proliferation in HCT116 and SW620 cells induced by over-expressing miR-451a can be reversed by the overexpression of BAP31 with a mutated miR-451a-binding site. Over-expressing miR-451a or silencing BAP31 induced the apoptosis of CRC cells To find the inhibition mechanism of over-expressing miR-451a and silencing BAP31 on the proliferation of CRC cells, apoptosis was measured by Hoechst staining, terminal dexynucleotidyl transferase(TdT)-mediated dUTP nick end labeling (TUNEL) staining, and flow cytometry. Hoechst staining and TUNEL staining results showed that nuclear fragmentation and apoptosis were significantly increased in The relative expression of BAP31 protein in DLD, HT29, SW620, HCT116, and NCM460 cells (**P < 0.01). Over-expressing miR-451a reduced the expression of BAP31 mRNA and protein. d After transfection by pcDNA-miR-451a, the expression of miR-451a was increased and that of BAP31 mRNA was decreased in HCT116 cells and SW620 cells. e The expression of BAP31 protein in HCT116 and SW620 cells was decreased on over-expressing miR-451a (**P < 0.01; NC, untreated; P-NC, transferred pcDNA-NC; P-M, transferred pcDNA-miR-451a). CRC, colorectal cancer HCT116 and SW620 cells when over-expressing miR-451a or silencing BAP31, which can be reversed by the overexpression of BAP31 with a mutated miR-451a-binding site when over-expressing miR-451a (Fig. 4d, e). Flow cytometry results showed that the total apoptosis was increased by 12.69% in HCT116 cells over-expressing miR-451a, in which the early and late apoptosis increased by 0.72 and 11.97%, respectively. The total apoptosis in HCT116 cells silencing BAP31 was increased by 13.81%, in which the early and late apoptosis increased by 2.48 and 11.33%, respectively. The early apoptosis in HCT116 cells over-expressing BAP31 with a mutated miR-451a-binding site when overexpressing miR-451a did not have significant changes, and the late apoptosis increased by 1.44% (Fig. 4f). The total apoptosis of SW620 cells over-expressing miR-451a was increased by 10.88%, including a 3.67% increased early apoptosis and 7.21% increased late apoptosis. Apoptosis in SW620 cells silencing BAP31 increased by 13.55%, in which early apoptosis increased by 7.99% and late apoptosis increased by 5.56%, respectively. The total apoptosis in SW620 cells over-expressing BAP31 with a mutated miR-451a-binding site when over-expressing miR-451a was increased by 6.63%, in which the early and late apoptosis increased by 6.16 and 0.17%, respectively (Fig. 4f). Over-expressing miR-451a or silencing BAP31 induced ERS by up-regulating ERS-related proteins and increasing the cytoplasmic calcium concentration in CRC cells Since BAP31 was located in endoplasmic reticulum, we further detected the morphology and protein-expression MiR-451a inhibited the expression of BAP31 by binding to its 5'-UTR. 
a The predicted miR-451a seed region in the upstream and the ORF of BAP31 gene. b When upstream 177 was blocked, the luciferase activity was significantly reduced by miR-451a (**P < 0.01). c The location of the predicted miR-451a seed region in BAP31 gene. The base changes and deletions in four mutations of BAP31 upstream 177 sequence. d The renilla luciferase relative activity in wild-type vector BAP31 upstream 177 site and mutations (Wt, wild-type; Mut1, mutation 1; Mut2, mutation 2; Mut3, mutation 3; Mut4, mutation 4). ORF, open reading frame changes in endoplasmic reticulum after over-expressing miR-451a or silencing BAP31. Immunofluorescence results showed that BAP31 protein expressions in HCT116 and SW620 cells over-expressing miR-451a or silencing BAP31 were both reduced and remained localized in the endoplasmic reticulum. And most of the endoplasmic reticulum lost the normal morphology and structure, and appeared concentrated or acquired a vacuolar structure. When over-expressing BAP31 with a mutated miR-451a-binding site with miR-451a overexpression, the expression of BAP31 on the endoplasmic reticulum and the morphology and structure of endoplasmic reticulum had no significant changes compared to the control (Fig. 5a, b). When ERS occurs, the expression of ERS-related proteins also increases to accelerate the processing of unfolded proteins and thereby reduce ERS; so we detected the relative expression of the ERS-related protein, GRP78/BIP, and the downstream PERK/elF2α/ ATF4/CHOP signaling pathway. The results showed that when over-expressing miR-451a or silencing BAP31 in HCT116 and SW620 cells, the expressions of GRP78/BIP were all increased to different extents (Fig. 5c, d). In the PERK/elf2α/ATF4/CHOP signaling pathway, the relative expressions of phosphorylated PERK, phosphorylated elF2α, ATF4, and CHOP were all increased to different extents. When ERS occurs, the first response is to release the endoplasmic reticulum calcium ion into the cytosol. So we detected BAX, an endoplasmic reticulum calcium Fig. 4 Over-expressing MiR-451a or silencing BAP31 suppressed the proliferation of CRC cells and increased the total apoptosis of CRC cells. a BAP31 protein relative expression was decreased in HCT116 and SW620 cells on silencing BAP31 (**P < 0.01; NC, untreated; P-S, transferred psilencer-Scr; P-B, transferred psilencer-BAP31). b, c Over-expressing miR-451a or silencing BAP31 exhibited significant inhibitory effects on the proliferation of HCT116 (b) and SW620 (c) cells compared to the control and untreated cells (NC). d, e Over-expressing miR-451a or silencing BAP31 induced significant apoptosis in HCT116 and SW620 cells compared to the control cells. The apoptotic cells were stained brightly blue by Hoechst staining (d; scale bar, 50 μm) and green fluorescence by TUNEL staining (e; scale bar, 25 μm). (f) As detected by flow cytometry, total apoptosis was increased by 12.69 and 13.81% when over-expressing miR-451a or silencing BAP31, respectively, in HCT116 cells. The total apoptosis increased by 10.88 and 13.55%, respectively, when over-expressing miR-451a or silencing BAP31 in SW620 cells. CRC, colorectal cancer releasing-associated protein, and the cytoplasmic calcium concentration in HCT116 and SW620 cells when overexpressing miR-451a or silencing BAP31. The results showed that the relative expression of BAX was significantly increased compared to the control cells (Fig. 5c, d). 
And the cytoplasmic calcium concentration in HCT116 and SW620 cells over-expressing miR-451a or silencing BAP31 was also increased (Fig. 6a). Cytosolic calcium concentration was 1.67-fold higher in HCT116 cells over-expressing miR-451a than in the control group. When silencing BAP31 in HCT116 cells, the cytoplasmic calcium concentration was 2.01-fold higher than that in the control group (Fig. 6b). Cytosolic calcium concentration was 2.08-fold higher in SW620 cells overexpressing miR-451a than that in the control group, while silencing BAP31 in SW620 cells increased the cytoplasmic calcium concentration by 2.27 fold (Fig. 6c). But the cytoplasmic calcium concentration did not significantly change when over-expressing BAP31 with a mutated miR-451a-binding site with miR-451a over-expressed both in HCT116 and SW620 (Fig. 6b, c). Over-expressing miR-451a or silencing BAP31 inhibited tumor growth in vivo To further confirm the above findings, we established CRC xenografts in nude mice to determine the effects of miR-451a and BAP31. Tumor growth curve showed that the tumor growth rate was inhibited when overexpressing miR-451a or silencing BAP31 in vivo (Fig. 7a). Tumor weights of pcDNA-miR-451a and pSilencer-BAP31 groups were significantly decreased to 18.17 and 12.51% of the control group, respectively ( Fig. 7b; **P < 0.01). Real-time PCR results showed that the expression of miR-451a in the tumor tissues of pcDNA-miR-451a group was significantly higher than that in the control group ( Fig. 7c; **P < 0.01). Western blot results also showed that BAP31 expression was inhibited in pcDNA-miR-451a and pSilencer-BAP31 groups ( Fig. 7d; **P < 0.01). The relative expressions of GRP78/ BIP, p-PERK, p-elF2α, ATF4, CHOP, and BAX were increased in pcDNA-miR-451a and pSilencer-BAP31 groups compared with the control group, respectively ( Fig. 7d; **P < 0.01). 5 Expression and cellular location of BAP31, and the expression of ERS-associated proteins when over-expressing miR-451a or silencing BAP31. a, b ER-specific fluorescent dye ER-tracker was used to detect ER morphology. BAP31 fluorescent-conjugated antibody was used to detect BAP31 expression and cellular location. In HCT116 (a) and SW620 (b) cells, the normal morphology and structure of endoplasmic reticulum cannot be observed, while a lot of concentrated or vacuole-like structures were obvious (scale bar, 25 μm). c, d The expressions of ERS-associated proteins GRP78/BIP, BAX, p-PERK, p-elF2α, ATF4, and CHOP in HCT116 (c) and SW620 cells (d) were significantly increased on over-expressing miR-451a or silencing BAP31 (**P < 0.01, *P < 0.05; P-NC, transferred pcDNA-NC; P-M, transferred pcDNA-miR-451a; P-S, transferred psilencer-Scramble; P-B, transferred psilencer-BAP31). ERS, endoplasmic reticulum stress Discussion Our results showed that the expression of BAP31 in CRC tissues was significantly increased, while the expression of miR-451a in CRC tissues was obviously down-regulated. We found that BAP31 relative expression levels were correlated with advanced clinical stage and distant metastasis of CRC. We also found that the expression of BAP31 in clinical stage II and III cases was most obviously increased, in which the expression of miR-451a was obviously decreased. We also found that the expressions of BAP31 mRNA and protein were upregulated in the CRC cell lines, HCT116, HT29, SW620, and DLD, compared with normal colonic epithelial cells, NCM460. 
In our previous experiments, HCT116 and SW620 cells had a relatively lower expression of miR-451a among HCT116, SW620, HT29, SW480, and DLD cell lines. We also demonstrated that after the downregulation of miR-451a, the expression of BAP31 was increased in our suppression subtractive hybridization library 15 . And now, the BAP31 relative expression was also higher in these cell lines, which suggested a potential negative correlation between the expression of miR-451a and BAP31. Our in vitro and in vivo results demonstrated that BAP31 plays a very important role in the carcinogenesis of CRC under the direct regulation of miR-451a. The classic pattern of miRNA-regulating target genes is to bind to the 3'-UTR of its target genes. So most of the miRNA target gene-prediction softwares are based on the 7-6 bases in the 3'-UTR region of the target genes, which are strictly paired to miRNA seed sequence 23 . This prediction model does provide convenience for the research of a large number of target genes of miRNA. However, more and more studies have found that miRNA may regulate target genes by binding to the 5'-UTR or ORF region [24][25][26] . The 3'-UTR classical prediction model showed that the binding between miR-451a and BAP31 Fig. 6 Over-expressing miR-451a or silencing BAP31 increased intracellular calcium concentration. a Cytosolic calcium-specific fluorescent dye Fluo-4AM was added into the cell culture medium to a final concentration of 10 μM to detect calcium ions in the cytoplasm 48 h after transfection with pcDNA-miR-451a, pcDNA-NC, psilencer-BAP31, and psilencer-Scr in HCT116 and SW620 cells (scale bar, 50 μm). b Cytosolic calcium concentration was 1.67-and 2.01-fold higher in HCT116 cells over-expressing miR-451a or silencing BAP31, respectively. c Cytoplasmic calcium concentration was 2.08-and 2.27-fold higher in SW620 cells over-expressing miR-451a or silencing BAP31, respectively ( × 200) 3'-UTR regions was not strong. Our dual-luciferase reporter experiments demonstrated that miR-451a could directly regulate BAP31 by binding to its 5'-UTR instead of 3'-UTR. When miR-451a-binding sites were mutated, miR-451a cannot bind to the mutated sequences any longer. Our study does provide new evidence for understanding the mechanisms of miRNA-regulating target genes. BAP31 has three transmembrane domains on the endoplasmic reticulum. Its N terminus is located in the endoplasmic reticulum lumen, and C terminus in the cytoplasm, mediating protein-protein interactions 20,27-29 . The relative expression of BAP31 was dramatically upregulated in malignant human melanoma and primary hepatocellular carcinoma when compared with normal human tissues. However, the significance of its overexpression and its function remains unclear 27,30,31 . We also found that over-expressed miR-451a or silenced BAP31 could significantly inhibit the proliferation of HCT116 and SW620 cells, which resulted from increasing ERS-associated protein expressions and calcium ions releasing from the endoplasmic reticulum into the cytoplasm. These mechanisms induced ERS and increased the apoptosis of CRC cells. The endoplasmic reticulum is widely distributed within the cell and plays a very important role. In the endoplasmic reticulum, when the expression of the built-in chaperone proteins changes, the cytosolic calcium concentration decreases. Then, chemically toxic substances will accumulate and stimulate oxygen stressing 18,[32][33][34][35] . 
When a large number of unfolded or misfolded proteins accumulate in the endoplasmic reticulum, the normal physiological function will be changed. This phenomenon is known as ERS 36,37 . ERS primarily acts through activating IRE-1, which is a significant event in andrographolide-induced CRC cell death 38 . From the above, we have the following speculation for carcinogenesis of CRC (Fig. 8). Decreased miR-451a expression in CRC induces the over-expression of BAP31, which increases expressions of PERK, GRP78/Bip, and BAX. We have validated it in the present study. GRP78/BiP, a major chaperone protein involved in protein folding and transporting, promotes proper protein conformation. PERK binds to GRP78/BiP when the endoplasmic reticulum is under normal circumstances 39,40 . Under conditions of hypoglycemia, hypoxia, and low calcium, ERS will be induced, which will promote the releasing of GRP78 from these ER membrane proteins and binding to misfolded proteins 41 . GRP78/Bip regulates cell proliferation, apoptosis, invasion, metastasis, as well as angiogenesis through activating a series of signal transduction pathways 42 . PERK is an endoplasmic reticulum type I transmembrane protein belonging to Ser/ Thr protein kinase. It is activated and self-phosphorylated on the endoplasmic reticulum when GRP78/Bip disassociates from the GRP78/Bip/PERK complex. Activated PERK will result in the phosphorylation of downstream elF2α to prevent more protein production. Meanwhile, the phosphorylation of elF2α activates the elF2α/ATF4/ CHOP signaling pathway [43][44][45] . In addition, BAX and BAK proteins can regulate the calcium leak from the endoplasmic reticulum 46 . BAX on the endoplasmic reticulum and mitochondria can be an important starting apoptotic signal protein. When ERS occurs, the first response is to release the endoplasmic reticulum calcium ion into the cytosol. The cytosolic calcium concentration will be increased and will then activate mitochondrial apoptosis pathway 35,46 . In conclusion, the present study demonstrated that BAP31 is the target of miR-451a. Over-expressing miR-451a or BAP31 can inhibit proliferation and induce apoptosis in CRC through inducing ERS. Patients and tissue samples The CRC samples and pericarcinous tissues were surgically resected and quickly stored in liquid nitrogen from 57 CRC patients from the Department of Surgery, West China Hospital, Sichuan University, from November 2014 to January 2016. The pericarcinous tissues, which were away from tumors at least by 5 cm, did not contain cancer cells, which usually appeared inflammatory and fibrotic. Pathological diagnosis was made by two pathologists independently according to the World Health Organization (see figure on previous page) Fig. 7 Over-expressing miR-451a or silencing BAP31 inhibited tumor growth in vivo and the relative expression of miR-451a, BAP31, and ERS-associated proteins in CRC tissues of different groups of mice. Twenty 5-week-old BALB/c-nu male mice were assigned into four groups: pcDNA-NC control group, pcDNA-miR-451a group, psilencer-Scr control group, and psilencer-BAP31 group, each group with five mice. As many as 8 × 10 6 cancer cells were subcutaneously injected into these mice. The weight of the mice and tumor volumes were determined every 3 days. The mice were sacrificed when administered for 21 days. The tumors were collected and weighed. a Tumor xenograft volume was smaller in nude mice treated with plasmids over-expressing miR-451a or silencing BAP31 than that in the control group. 
b The body weights of tumor xenografts were also lower than those of the control group (*P < 0.05, **P < 0.01). c The relative expression of miR-451a in xenografts administered with pcDNA-miR-451a was higher than that of the control group (**P < 0.01). d The relative expression of BAP31 was lower in pcDNA-miR-451a and psilencer-BAP31 groups than those in control groups (**P < 0.01). The expressions of ERS-associated proteins were higher in pcDNA-miR-451a and psilencer-BAP31 groups than those in control groups (**P < 0.01,*P < 0.05). CRC, colorectal cancer; ERS, endoplasmic reticulum stress classification 47 . Tumors were staged according to the TNM Classification of Malignant Tumors (TNM) 48 . None of the patients underwent chemotherapy before surgical resection. This study was approved by the Ethics Committee of West China Hospital (No. K2016041). Written informed consent was obtained from patients before sampling. Real-time RT-PCR Total RNA was extracted from tissues with Trizol according to the manufacturer's instructions (Takara, Dalian, China). Its concentration and purity were determined using BioPhotometer (Eppendorf, Hamburg, Germany). Its integrity was checked with 1% denatured agarose gel. First-strand cDNA was synthesized from 2 µg of total RNA using 100U of Moloney murine leukemia virus reverse transcriptase (Promega, Shanghai, China) in a 10 μl reaction mixture containing 10 U of RNase inhibitor and 1 µl of 10 mM reverse transcription primers as follows. The real-time RT-PCR was run with AceQ TM qPCR SYBR Green Master Mix (Vazyme, Nanjing, China) in a final volume of 20 μl, containing 10 μl of the master mix, each with 0.4 μl of 10 µM forward and reverse primers, and 2 μl of cDNA. The thermal cycling program was as follows: an initial heating at 95°C for 5 min, followed by 35 cycles of 10 s at 95°C, and 60°C for 30 s. Expression of miR-451a was normalized to the internal U6 levels. The mRNA of BAP31 expression was normalized to the mRNA of GAPDH. Each sample was run in triplicate, and the threshold cycle numbers (Ct) were averaged. Melting curve analysis was performed by increasing the temperature from 65 to 95°C in 0.1°C/s increments for each fluorescence reading using the CFX96 Touch™ qPCR system (Bio-Rad, California, USA). Cell culture Five CRC cell lines, HCT116, HT29, SW480, SW620, DLD, and a normal colonic epithelial cell line NCM460, and human 293T embryonic kidney cell lines were obtained from American Type Culture Collection (ATCC, Manassas, VA, USA). They were cultured in Dulbecco's modified Eagle's medium containing 10% fetal bovine serum, 100 U/ml penicillin, and 100 µg/ml streptomycin, and maintained in a humidified incubator with 5% CO 2 at 37°C. Dual-luciferase reporter vector construction The sites of miR-451a binding with BAP31 gene were predicted by the online software tool RNA hybrid, in http://bibiserv.techfak.uni-bielefeld.de/rnahybrid/ website. To construct dual-luciferase reporter vector, the primers designed were as follows: Small-interference RNA construction pSilencer 4.1-CMV neo vector was used for the expression of siRNA in HCT116 and SW620 cells. The gene-specific insert specified a 19-nucleotide sequence corresponding to the nucleotides (619 site) downstream of the transcription start site of mBAP31 (sense and antisense), which was separated by a 9-nucleotide noncomplementary spacer from the reverse complement of the same 19-nucleotide sequence. This vector was referred to as a silencing vector, psilencer-BAP31. 
A control vector, pSilencer-Scr, was constructed using a 19nucleotide sequence (Scramble sense and Scramble antisense) with no significant homolog to any mammalian gene sequence and therefore served as a negative control. MTT assays At 0, 24, 48, 72, 96, and 120 h after transfection, 20 μl of 3-(4,5-dimethyl-2-thiazolyl)-2,5-diphenyl-2-H-tetrazolium bromide (MTT) solution (5 mg/ml) was added into each well, three wells for each time point. Then, the culture medium was discarded after 4 h. One hundred and fifty microliters of dimethyl sulfoxide was added to each well and gently shaken in darkness for 10 min until the crystal fully dissolved. Absorbance (OD) at 570 nm was detected in the detection microplate reader. Hoechst staining Cells were fixed with 4% formaldehyde for 20 min and gently washed to remove the fixative solution. At the cellular climbing film, a small amount of the hoechst33342 dye was dropped to cover the sample. Then, the cells were incubated at room temperature for 3-5 min. The hoechst33342 dye solution was removed. Cells were washed twice with PBS and observed under fluorescence microscopy after having mounted. TUNEL staining Cells were washed with PBS and fixed with 4% formaldehyde for 20 min. In order to gently remove the fixative solution, the cells were washed with PBS. Triton X-100 (0.3%) was added to the cells for incubation for 5 min at room temperature and then washed with PBS. The apoptotic cells were analyzed by an in situ one-step TUNEL apoptosis assay kit (Beyotime, Jiangsu, China), following the procedure specified by the manufacturer. DNA fragmentation of the nuclei in the injured areas was stained. Apoptotic cells showed green fluorescence. Immunofluorescence HCT116 and SW620 cells were fixed with 4% formaldehyde for 48 h, washed with PBS containing 2% triton, blocked for 1 h at room temperature with bovine serum, and incubated at 4°C with BAP31 antibody (Proteintech, Rosemont, USA) overnight. Then cells were washed with PBS and incubated at room temperature for 1 h with the conjugated secondary antibody (Zhong-ShanJinQiao, BeiJing, China). After washing with PBS, ERtracker was added and incubated at room temperature for 20 min. Following the washing of ER-tracker by PBS, DAPI was added. Cells were observed with a fluorescence microscope, and a fluorescence image was obtained. Flow cytometry After digesting with ethylenediaminetetraacetic acidfree trypsin, the cells were collected by centrifugation at 300 g for 5 min at 4°C. The cells were washed twice and centrifuged at 300 g with pre-cooled PBS, at 4°C for 5 min. The cells were collected, and 100 μl of 1 × binding buffer was added to resuspend the cells. Five microliters of Annexin V-FITC and 5 μl of PI Staining Solution were gently mixed with cells at room temperature for 10 min in the darkness. Four hundred microliters of 1 × binding buffer was added to cells and mixed within 1 h. The compound was detected by flow cytometer Accuri C6 (Laboratories, Hercules, CA, USA). Cytoplasmic calcium concentration detection Fluo-4AM dye (10 mM) was added to the cell culture medium to a final concentration of 10 μM. Cells were incubated at 37°C, 5% CO 2 , for 1 h with Fluo-4AM dye. Then, it was replaced with fresh medium and incubated for 20 min. The cells were washed with PBS twice within 30 min and observed under a fluorescence microscope. Immunohistochemistry The expressions of BAP31 (Cell Signaling Technology, MA, USA) proteins were studied by IHC in the CRC samples and pericarcinous tissues. 
Paraffin-embedded biopsies were included in tissue microarrays as previously described 49 . The sections were incubated at room temperature for 2 h with the indicated antibodies, and the immunostaining was performed using the ChemMate DAKO EnVision Detection Peroxidase/DAB kit (DAKO Diagnóstics, Barcelona, Spain). In vivo experiments Four-week-old male BALB/c-nu mice were obtained from Chengdu Dashuo Laboratory Animal Co., Ltd. (Chengdu, China) and were housed in a specific pathogen-free environment. Twenty 5-week-old BALB/cnu male mice were randomly assigned into four groups: pcDNA-NC control group, pcDNA-miR-451a group, psilencer-Scr control group, and psilencer-BAP31 group, each group with five mice. This experiment was in accordance with the National Institutes of Health Guide for the Care and Use of Laboratory Animals and was approved by the Ethics Committee of West China Hospital (No. K2016040). CRC cell line, HCT116, was collected with 0.24% trypsin-ethylenediaminetetraacetic acid solution, washed, and suspended in the serum-free medium. HCT116 cells (8 × 10 6 ) were subcutaneously injected into nude mice. Tumor volume was assessed by measuring the length (L) and width (W) with a caliper (tumor volume, V = 0.5 × L × W 2 ). The body weights and tumor volumes were determined every 3 days. PcDNA-NC, pcDNA-miR451a, psilence-Scr, and psilence-BAP31 were intratumorally or peritumorally injected once a week for 3 weeks. The mice were sacrificed when administrated with vectors for 21 days. The tumors were collected and weighed, respectively. Total RNA and protein were extracted from tumors. Real-time PCR and western blot were carried out to detect the relative expression of miR-451a and BAP31 in tumors from each group. Statistical analysis The data were expressed as the mean ± standard deviation. All histograms were carried out using Graph-Pad Prism 5.0 software. The Statistical Package for Social Sciences version 13.0 (SPSS Inc., Chicago, Illinois, USA) was used for standard statistical analysis by one-way analysis of variance. The relative gene expressions were analyzed with Livak method 52 . Pearson's Chi-square test was used to compare the pathological data from the clinic 53 . The statistical significance was set at P < 0.05.
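The relative expression values referred to above were obtained with the Livak (2^-ΔΔCt) method, with miR-451a normalized to U6 and BAP31 mRNA normalized to GAPDH. The following sketch shows the arithmetic on averaged triplicate Ct values; the numbers are invented for illustration and the function names are not from the original analysis scripts.

```python
# Minimal sketch of the Livak 2^(-delta-delta-Ct) calculation on triplicate Ct values.
# Ct numbers below are invented; in the study, miR-451a was normalized to U6
# and BAP31 mRNA to GAPDH.
from statistics import mean

def delta_delta_ct(ct_target_sample, ct_ref_sample, ct_target_calibrator, ct_ref_calibrator):
    """Return relative expression 2^(-ddCt) from triplicate Ct lists."""
    d_sample = mean(ct_target_sample) - mean(ct_ref_sample)               # dCt (sample)
    d_calibrator = mean(ct_target_calibrator) - mean(ct_ref_calibrator)   # dCt (calibrator)
    return 2.0 ** (-(d_sample - d_calibrator))                            # 2^(-ddCt)

if __name__ == "__main__":
    # Hypothetical triplicates: miR-451a vs U6 in a tumor sample and a paired normal tissue.
    fold_change = delta_delta_ct(
        ct_target_sample=[27.1, 27.3, 27.0], ct_ref_sample=[18.2, 18.1, 18.3],
        ct_target_calibrator=[24.9, 25.1, 25.0], ct_ref_calibrator=[18.0, 18.2, 18.1],
    )
    print(f"relative expression: {fold_change:.2f}")
```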
Essential model parameters for nonreciprocal magnons in multisublattice systems We theoretically investigate the microscopic conditions for emergent nonreciprocal magnons toward unified understanding on the basis of a microscopic model analysis. We show that the products of the Bogoliubov Hamiltonian obtained within the linear spin wave approximation is enough to obtain the momentum-space functional form and the key ingredients in the nonreciprocal magnon dispersions in an analytical way even without solving the eigenvalue problems. We find that the odd order of an effective antisymmetric Dzyaloshinskii-Moriya interaction and/or the even order of an effective symmetric anisotropic interaction in the spin rotated frame can be a source of the antisymmetric dispersions. We present possible kinetic paths of magnons contributing to the antisymmetric dispersions in the one- to four-sublattice systems with the general exchange interactions. We also test the formula for both ferromagnetic and antiferromagnetic orderings in the absence of spatial inversion symmetry. Under space-time inversion symmetry, the electronic band structures are categorized into four groups: the k-symmetric band dispersion with the spin degeneracy in the presence of both spatial inversion (P) and timereversal (T ) symmetries, the k-(anti)symmetric spinsplit band dispersion without T (P) while keeping P (T ), and the k-antisymmetric band dispersion without both P and T , where k is the wave vector of electrons. In particular, the k-antisymmetric band dispersion has been extensively studied in recent years, since it becomes a source of nonreciprocal conductive phenomena owing to the inequivalence between k and −k [35]. The nonreciprocal nonlinear optical effect is a typical example [36][37][38][39]. The microscopic origin of the k-antisymmetric band dispersion is accounted for by the active magnetic toroidal moment, which corresponds to a polar tensor with timereversal odd [40][41][42][43][44][45][46][47]. The nonreciprocal phenomena have also been discussed in magnetic insulators [35,. In spite of the absence of carriers, the collective excitaions of magnons lead to directional-dependent dynamical properties, where we refer it to the nonreciprocal (asymmetric) magnons [35,62]. Similar to the electron band dispersion, an appear-ance of nonreciprocal magnons is attributed to the active magnetic toroidal moment [70]. Although they were mainly studied for ferromagnetic slabs [48,49] and for magnetic orderings in the noncentrosymmetric crystals [50,51,59,71,72], where the magnetic dipolar interaction and/or the Dzyaloshinskii-Moriya (DM) interaction are important [73,74], it was shown that they occur even via other mechanisms, such as frustrated exchange interactions [75,76] and bond-dependent symmetric exchange interactions [77,78]. The nonreciprocal magnons have a potential to exhibit further intriguing nonreciprocal phenomena, such as the magneto-optical effect [79][80][81] and spin Seebeck effect [82], which avoid Joule heating. Engineering asymmetric band deformations in the systems without P and T symmetries is important for nonreciprocal conductive phenomena irrespective of electrons and magnons. Meanwhile, the microscopic conditions have not been fully clarified yet, although active magnetic toroidal multipoles are necessary from the symmetry aspect [47,[83][84][85]. 
Recently, a useful framework to extract essential model parameters for the asymmetric band structure in the electron systems has been proposed on the basis of augmented multipoles [86]. Similar approach has also been performed in the magnon systems by introducing the bond-type magnetic toroidal dipole degree of freedom, which is only applied to the mechanism induced by the DM interaction [72]. It is desired to have a simple formula to investigate which model parameters contribute to the asymmetric band deformations in magnon systems with arbitrary spin interactions. In the present study, we investigate the microscopic conditions for emergent nonreciprocal magnons in multisublattice systems in an analytical way. We show that the product of the Bogoliubov Hamiltonian after the linear spin wave approximation provides two important information for nonreciprocal magnons without the cumbersome Bogoliubov transformation. One is the momentumspace functional form and the other is the essential model parameters to cause the antisymmetric band deformations. We demonstrate that our scheme ubiquitously accounts for the microscopic key ingredients irrespective of the mechanisms by analyzing a spin Hamiltonian with arXiv:2112.07071v1 [cond-mat.str-el] 14 Dec 2021 the general exchange interactions in the one-to foursublattice systems. We discuss the important magnonhopping processes that arise from the exchange interactions in real space. We also test our scheme for both ferromagnetic and antiferromagnetic orderings with the DM interaction and the symmetric anisotropic interaction. Our results will be useful to extract the significant model parameters in inducing the nonreciprocal magnons under complicated noncollinear magnetic orderings. The remaining of the paper is organized as follows. In Sec. II, we present a general method of extracting the essential model parameters from the Bogoliubov Hamiltonian. We present a general expression contributing to nonreciprocal magnons on the basis of the spin Hamiltonian with both symmetric and antisymmetric exchange interactions in the one-to four-sublattice systems in Sec. III. We apply the method for the ferromagnetic ordering in the breathing kagome lattice structure and the collinear/noncollinear antiferromagnetic orderings in the honeycomb and breathing kagome lattice structures in Sec. IV. Section V is devoted to a summary of the present paper. Appendix A provides lengthy expressions in terms of momentum-space functions in the three-and four-sublattice cases. II. APPROACH Let us start a general spin Hamiltonian, which is given by where S α l is an α (= x, y, and z) component of classical spin at site l. J ⊥ ll , J z ll , J v ll , J xy ll , J yz ll , and J zx ll are the symmetric exchange interactions, while D x ll , D y ll , and D z ll are the antisymmetric exchange interactions. The latter corresponds to the DM interaction. The nonzero components of J ll are determined by point group symmetry of the bond. For later convenience, the spin is rotated so as to align the local axis along the z direction: where R z (φ l ) and R y (θ l ) are the rotation matrices around the z and y axes, respectively, and T is the transpose of the vector. where H zx ll and H yz ll consist of the product ofS xSz andS ySz , respectively. The interaction tensorJ ll is represented by rotating J ll . We investigate magnon spectra within a linear spin wave approximation. 
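Because the defining equations of the rotated spin frame are not reproduced in this excerpt, the sketch below assumes the common convention S_l = R_z(φ_l) R_y(θ_l) S̃_l, so that the rotated coupling tensor is J̃_ll' = R_l^T J_ll' R_l'. The routine simply builds the two rotation matrices and transforms an arbitrary 3×3 exchange tensor; it is a minimal illustration of the frame change, not the paper's own implementation.

```python
# Sketch of rotating a 3x3 exchange tensor into the local (spin-aligned) frame.
# Assumed convention (the defining equation is not reproduced in this excerpt):
#   S_l = R_z(phi_l) @ R_y(theta_l) @ S_tilde_l,  hence  J_tilde = R_l.T @ J @ R_lp.
import numpy as np

def r_z(phi):
    c, s = np.cos(phi), np.sin(phi)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def r_y(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

def rotated_coupling(J, angles_l, angles_lp):
    """Transform the coupling tensor J between sites l and l' into their local frames."""
    R_l = r_z(angles_l[0]) @ r_y(angles_l[1])
    R_lp = r_z(angles_lp[0]) @ r_y(angles_lp[1])
    return R_l.T @ J @ R_lp

if __name__ == "__main__":
    # Heisenberg coupling plus a z-axis DM term as a test tensor (values are arbitrary).
    J = np.array([[1.0, 0.2, 0.0], [-0.2, 1.0, 0.0], [0.0, 0.0, 0.8]])
    # Two sublattices with moments along +z and -z (staggered order): theta = 0 and pi.
    J_tilde = rotated_coupling(J, angles_l=(0.0, 0.0), angles_lp=(0.0, np.pi))
    print(J_tilde.round(3))
```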
By applying the Holstein-Primakov transformation, which is given byS iη a iη (the subscripts i and η denote the indices for a unit cell and a sublattice, respectively, and a iη is the boson operator for sublattice η), to the spin Hamiltonian in Eq. (4), the Bogoliubov Hamiltonian is derived. By performing the Fourier transformation as a iη → a qη , the resultant Bogoliubov Hamiltonian in the n-sublattice system is given by where Ψ † q = (a † q1 , a † q2 , · · · , a † qn , a −q1 , a −q2 , · · · , a −qn ) and X q and Y q are the n × n matrices. In Eq. (4), H z ll corresponds to the diagonal elements of X q , while H ⊥ ll , H DM ll , H v ll , and H xy ll correspond to the off-diagonal elements X q and Y q . In other words, only the spin components perpendicular toS z l contribute to a magnon hopping process. Meanwhile, H yz/zx ll does not appear in Eq. (10), since it consists of the odd number of boson operators. When H B q is a positive-definite matrix, the Cholesky decomposition is possible as H B q = K † q K q , where K q is the upper triangular matrix. Then, H B q is transformed into the Hermitian matrix H q as where the 2n × 2n matrix g satisfies (g) ηη = [Ψ qη , Ψ † qη ]. The eigenvalues ω qm (m is the band index) in Eq. (11) are obtained by diagonalizing H q . Nonreciprocal magnon excitations mean that the eigenvalues have an antisymmetric component with respect to q, i.e., ω qm = ω −qm . To investigate important model parameters for the nonreciprocal magnons in a systematic way, we introduce a following quantity as which is related to the eigenenergy. A similar quantity has been discussed in the antisymmetric band modulation and spin splittings in the electron system [86][87][88]. The antisymmetric component is extracted by Thus nonzero F (s) q signals the appearance of nonreciprocal magnons. From the expression of Eq. (14), one can deduce the essential model parameters inducing nonreciprocal magnons, as detailed in Sec. III. In Eqs. (5)- (9), there are four types of magnon hoppings and one onsite potential in the real space Bogoliubov Hamiltonian, which are expressed as From theses expressions, one finds that the real (imaginary) part of the standard hopping a † iη a jη is related to H ⊥ ll (H DM ll ), which corresponds to the off-diagonal part of X q , while the real (imaginary) part of the anomalous hopping a † iη a † jη is related to H v ll (H xy ll ), which corresponds to the off-diagonal part of Y q . As only the hopping processes to satisfy the magnon-number conservation are important, one can find that an even order ofJ v ll andJ xy ll can contribute to nonreciprocal magnon excitations. In addition, when taking into account the fact that an odd order of imaginary hopping can also contribute to nonreciprocal magnon excitations, we expect that the antisymmetric magnon band structure is related to the odd order of an effective antisymmetric DM interaction or the even order of an effective symmetric anisotropic interaction. This indicates that the antisymmetric magnon band structure can be reversed regarding q by the sign ofD ll , while that is not by the sign ofJ v ll andJ xy ll . As we will show the general feature of F In this section, we discuss a general behavior of F (s) q independent of the lattice structures and the exchange interactions. We show the microscopic processes contributing to nonreciprocal magnons in the multisublattice systems with n = 1-4: one-sublattice case in Sec. III A, two-sublattice case in Sec. III B, three-sublattice case in Sec. 
III C, and four-sublattice case in Sec. III D. It is noted that the present scheme can be also applied to the systems with the sublattice n > 4 in a straightforward way. A. One-sublattice case We consider the one-sublattice system with η = A, which describes only the ferromagnetic state without the sublattice degree of freedom. In the one-sublattice system, X q and Y q are the 1×1 matrices. By using Eqs. (16)- (20), the expressions of X q and Y q are given by where h has a q dependence, which are different from the multisublattice cases, as will be discussed in Secs. III B-III D. Although the magnon dispersions in the one-sublattice case with the 2×2 matrix H B q are analytically obtained by performing the Bogoliubov transformation, we test the expressions in Eqs. (14) and (15) for later complicated multisublattice systems. The lowest contribution of F (s) q is given by The expression in Eq. (23) indicates that only the effective DM interactionD z contributes to nonreciprocal magnon dispersions. When calculating the higher order of F (s) q , one finds that the (2m + 1)th-order terms of F are proportional toD z h D(as) q , while the 2mth-order ones vanish for an integer m. This means that the nonreciprocal magnon in the one-sublattice system is induced wheñ D z = 0 irrespective of other interactions. This result is consistent with that obtained by the direct diagonalization. The above result is intuitively understood from the magnon-hopping process in the real-space picture, as shown in the case of F (1) q in Fig. 1. The process in Fig. 1 gives rise to effective imaginary magnon hopping that is a source of nonreciprocal magnons along the hopping direction. Furthermore, the functional form of nonreciprocal magnons are obtained in an analytic form from Eq. (23). In the crystal system, the q dependence of F (s) q is derived to satisfy the magnetic point group symmetry in the system, as shown in Sec. IV. B. Two-sublattice case Hereafter, we examine F (s) q in the multisublattice case. In this section, we show F (s) q in the two-sublattice case with η = A and B, where X q and Y q are the 2×2 matrices. By considering the general exchange interactions between A and B sublattices, X q and Y q are represented by where and η = A and B. In contrast to the one-sublattice case, h The lowest contribution of F (s) q is given by s = 3, whose expression is represented as The first term in Eq. (29) represents the contribution from the effective DM interaction proportional toD z , which is similar to the result in the one-sublattice case in Sec. III A. Meanwhile, the second term in Eq. (29) represents the contribution from the effective symmetric anisotropic exchange interaction includingJ v andJ xy , which does not appear in the one-sublattice case. In other words, the symmetric anisotropic exchange interaction can become a source of nonreciprocal magnons in the multisublattice system [see also the results in Eq. (35) in the three-sublattice case (Sec. III C) and in Eq. (38) in the four-sublattice case (Sec. III D)]. The real-space pictures in terms of the magnon-hopping processes for each term are shown in Fig. 2. It is noted that the effective symmetric anisotropic interaction contributes to the nonreciprocal magnons in the form ofJ vJ xy in order to satisfy the magnon-number conservation and the space-time inversion symmetry. We also note that the q dependence of nonreciprocal magnons can be different for different mechanisms, as found in the first and second terms in Eq. (29). 
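The quantity F^(s)_q itself is not written out in this excerpt; a common choice in the cited multipole-based analyses is the trace of the s-th power of g H^B_q, which is basis-independent and does not require the Bogoliubov transformation. The sketch below adopts that assumption, computes F^(s)_q and its antisymmetric part F^(s)(q) − F^(s)(−q) for a user-supplied Bogoliubov matrix, and also shows the Cholesky-based (Colpa-type) diagonalization mentioned in the text; the toy one-sublattice model at the end is an invented example, not one of the paper's lattices.

```python
# Sketch: Cholesky (Colpa-type) diagonalization of a bosonic Bogoliubov Hamiltonian
# and an antisymmetry indicator built from traces of powers of g @ H_B(q).
# Assumption: F^(s)_q is taken as Tr[(g H_B(q))^s]; the paper's exact definition
# (its Eq. (13)) is not reproduced in this excerpt.
import numpy as np

def para_metric(n):
    """g = diag(+1,...,+1, -1,...,-1) for an n-sublattice Bogoliubov problem."""
    return np.diag(np.concatenate([np.ones(n), -np.ones(n)]))

def magnon_energies(H_B):
    """Positive branch of the eigenvalues of g H_B, via Cholesky H_B = K^dag K."""
    n = H_B.shape[0] // 2
    K = np.linalg.cholesky(H_B).conj().T          # upper-triangular factor
    H = K @ para_metric(n) @ K.conj().T           # Hermitian matrix with eigenvalues +/- omega
    w = np.linalg.eigvalsh(H)
    return np.sort(w[w > 0.0])

def F_s(H_B, s, n):
    """Trace of (g H_B)^s (assumed form of the antisymmetry indicator)."""
    return np.trace(np.linalg.matrix_power(para_metric(n) @ H_B, s)).real

def antisymmetric_part(H_of_q, q, s, n):
    """F^(s)(q) - F^(s)(-q); a nonzero value signals nonreciprocal magnons."""
    return F_s(H_of_q(q), s, n) - F_s(H_of_q(-q), s, n)

if __name__ == "__main__":
    # Toy one-sublattice example: X(q) = 1 - 0.5 cos q + d sin q, Y(q) = 0.2.
    def H_of_q(q, d=0.3):
        X = 1.0 - 0.5 * np.cos(q) + d * np.sin(q)
        Xm = 1.0 - 0.5 * np.cos(q) - d * np.sin(q)
        return np.array([[X, 0.2], [0.2, Xm]], dtype=complex)

    q = 1.0
    print(magnon_energies(H_of_q(q)), antisymmetric_part(H_of_q, q, s=1, n=1))
```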
In addition, there are three differences from the onesublattice case in Eq. (23). The one is the appearance of J z in Eq. (29), which means thatJ z is also important to induce the nonreciprocal magnons. The second is the sublattice-dependent factor z A + z B and z A − z B ; the nonreciprocal magnons byD z (J vJ xy ) vanish when z A = −z B (z A = z B ). The third is the q dependence in the first term in Eq. (29) owing to nonzero h We note that the expression in Eq. (29) does not directly reduce to that in Eq. (23) when regarding A and B sublattices as the same sublattice, i.e., z A = z B : The essential model parameter in Eq. (29) isJ zDzJ ⊥ , while that in Eq. (23) isD z . At first glance this result appears to contradict with each other, but it is due to the fact that the factorJ zJ ⊥ is canceled out with the denominators when evaluating the energy spectrum [72]. Hence, from the viewpoint of obtaining the essential model parameters, it is useful to calculate F (s) q in the minimal unit cell. By using the expression in Eq. (29), one obtains the essential model parameters for the emergence of nonreciprocal magnons in the two-sublattice antiferromagnetic orderings and the ferromagnetic ordering in the two-sublattice noncentrosymmetric structures. We show the example of the staggered antiferromagnetic ordering in the honeycomb lattice structure in Sec. IV B. C. Three-sublattice case We consider a behavior of F (s) q in the three-sublattice case with η = A, B, and C. For the general exchange interactions between different sublattices, the 3 × 3 matrices, X q and Y q , are represented by where and η, η = A, B, and C. The lowest contribution of F (s) q corresponds to the s = 3 term similar to the two-sublattice case, which is given by where H µq (µ = 1-7) is the antisymmetric function consisting of odd number of h ζ(as) q and even number of h ζ(s) η ηq for η = η = η . The specific expressions of H µq are shown in Appendix A owing to the lengthy expressions. There are mainly three contributions in the nonreciprocal magnon dispersions in Eq. (35), which are proportional toD z including H 1q -H 4q , (D z ) 3 including H 5q , andJ vJ xy including H 6q and H 7q . We schematically show the magnon-hopping processes corresponding to H µq (µ = 1-7) in Fig. 3. Among H µq , H 2q , H 3q , H 4q , H 5q , and H 7q consist of three magnon hoppings between three sublattices, while H 1q and H 6q consist of two magnon hoppings between two sublattices. Indeed, H 1q and H 6q correspond to the left and right panels of Fig. 2, respectively, while other H µq have no correspondence to the two-sublattice case. In other words, this indicates that contributions from H 2q , H 3q , H 4q , H 5q , and H 7q can appear when the exchange interaction path includes the triangle geometry, such as the triangular and kagome lattices, while those from H 1q and H 6q do not need the triangle geometry. Thus, only the latter processes can contribute to the nonreciprocal magnons in the case of the one-dimensional three-sublattice chain in the absence of F ACq and G ACq . The general expression in Eq. (35) describes the model parameter conditions for the nonreciprocal magnons in the three-sublattice antiferromagnetic orderings, such as the 120 • antiferromagnetic ordering on the triangular and breathing kagome lattices. We show three examples in the breathing kagome system in Secs. IV A, IV C, and IV D. D. 
Four-sublattice case Finally, we consider the four-sublattice case, where X q and Y q are represented by where F ηη q , G ηη q , and Z η are the same as Eqs. (32), (33), and (34), respectively. Similar to the two-and three-sublattice cases, the low- q , which is given by where H µq (µ = 1-7) is similar to H µq in the threesublattice case, and the only difference is found in the number of hopping paths due to the different number of the sublattice, as found in Appendix A. Similar to the three-sublattice case, H 2q , H 3q , H 4q , H 5q , and H 7q can appear when exchange interaction path includes the triangle geometry, while H 1q and H 6q do not depend on such a geometry. For example, in the tetrahedron cluster structure shown in Fig. 4(a), all H µq can contribute to the nonreciprocal magnons, whereas in the square cluster structure with the nearest-neighbor exchange interactions in Fig. 4(b), only H 1q and H 6q can contribute as In this way, the expressions in Eqs. (38) and (39) describe the microscopic process contributing to nonreciprocal magnons under the four-sublattice antiferromagnetic orderings, such as the pyrochlore antiferromagnets and the four-sublattice tetragonal antiferromagnets. IV. APPLICATION TO NONCENTROSYMMETRIC MAGNETS In this section, we apply the expression in Eq. (15) to noncentrosymmetric ferromagnets and antiferromagnets to host nonreciprocal magnons. As the ferromagnets, we consider the ferromagnetic ordering in the breathing kagome lattice structure in Sec. IV A. As the antiferromagnets, we consider three types of antiferromagnetic orderings: the staggered collinear antiferromagnetic state in the honeycomb lattice structure in Sec. IV B, the upup-down ferrimagnetic state in the breathing kagome lattice structure in Sec. IV C, and the noncollinear 120 • antiferromagnetic state in the breathing kagome lattice structure in Sec. IV D. In each section, we first show the Bogoliubov Hamiltonian and then we discuss magnon spectra and essential model parameters. A. Breathing kagome ferromagnets Model We consider a breathing kagome lattice structure as an example of noncentrosymmetric crystal structures [72]. The breathing kagome lattice structure consists of upward and downward triangles with the different sizes, as shown in Fig. 5(a). The interaction matrix corresponding to Eq. (2) is given by where the superscript ( ) denotes the interaction for the upward (downward) triangles where γ is the breathing parameter, and χ AB = 0, χ BC = 2π/3 and χ CA = 4π/3. We here consider four independent interactions from the symmetry analysis: the isotropic inplane interaction J ⊥ , the DM interaction D, the bonddependent anisotropic interaction J a , and the z spin interaction J z . The direction of the DM vector is taken along the +z (−z) direction for the upward (downward) triangle. The anisotropic interactions, D, J a , and J z − J ⊥ originates from the relativistic spin-orbit coupling and/or dipole-diople interactions. Compared to Eq. (2), one finds the correspondence of (J v ηη , J xy ηη ) and (J a cos χ ηη , −J a sin χ ηη ). In the ferromagnetic state with magnetic moments along the z direction, we do not need the rotation of the spin frame, i.e.,J ⊥ ηη = J ⊥ ,D ηη = D,J v ηη = J a cos χ ηη , J xy ηη = −J a sin χ ηη , andJ z ηη = J z in Eqs. (5)- (9). 
By performing the Holstein-Primakov transformation and then the Fourier transformation, the 3 × 3 matrices X q and Y q in the Bogoliubov Hamiltonian matrix H B q are given by [72] where where ρ ηη is the displacement vector between η and η sublattices in the breathing kagome lattice structure. It is noted that the length of a side of both the upward and downward triangles is taken as one for notational simplicity. Result The ferromagnetic spin configuration becomes stable when J z is dominant and ferromagnetic. We show the magnon dispersions along high symmetry lines in the Brillouin zone [ Fig. 5(b)] in the ferromagnetic state after the numerical Bogoliubov transformation. Figure 5(c) shows the magnon spectra ω q for D = 0.2 without J a , while Fig. 5(d) shows ones for J a = 0.5 without D. Both cases clearly exhibit that the magnon bands are modulated antisymmetrically in the functional form of q x (q 2 x − 3q 2 y ) [72]. The angle dependence in the limit of |q| → 0 is given by cos 3φ when setting (q x , q y ) = q(cos φ, sin φ), as shown in Fig. 5; the antisymmetric modulation appears along the K'-Γ-K line, while it does not along the M(Σ)-Γ-M(Σ ) line. The above result means that both D and J a become the origin of the nonreciprocal magnons. Such model parameter conditions are easily obtained by evaluating F (s) q in Eq. (15) without solving the eigenvalue problems. For a general case at D = 0 and J a = 0, the lowestorder contribution from F (s) q is of third order as shown in Sec. III C, which is given by where Thus, one finds that the antisymmetric functional form of is consistent with that in the magnon dispersions in Figs. 5(c) and 5(d). Furthermore, the expression in Eq. (47) clearly presents the essential parameters in nonreciprocal magnons: γ, D, and J a . The condition of γ = 1 represents the importance of the breathing structure, which is reasonable in terms of spatial inversion symmetry; it is recovered for γ = 1. In a similar way, F The result indicates that asymmetric feature vanishes for D = 0 and D = √ 3J ⊥ in addition to γ = 0, 1 and D = − √ 3J ⊥ in Eq. (47). Thus, D is one of the essential parameters, and its odd order contributes to the asymmetric dispersions. On the other hand, for nonzero J a and D = 0, Eq. (47) turns into We find that the even order of J a becomes the essential parameters in the case of D = 0. These results are consistent with those obtained from the general expression in Sec. III C. Model The honeycomb lattice structure consists of two sublattices A and B, as shown in Fig. 6(a). From the presence of threefold rotational symmetry around the z axis and mirror symmetry perpendicular to the xy plane along the bond direction at each local site, the interaction tensor for the nearest-neighbor spins is given by where ν = 0-2 is the bond index for the nearest-neighbor spins and χ ν = 0, 2π/3, 4π/3 for ν = 0-2. The three bond vectors are d 0 = (1, 0), d 1 = (−1/2, √ 3/2), and d 2 = (−1/2, − √ 3/2). The DM interaction vanishes owing to inversion symmetry on the A-B bond center. The contribution of the DM interaction arises in the interaction tensor for the next-nearest-neighbor spins belonging to the same sublattice, which is given by where ν = 0-5 is the bond index for the next-nearestneighbor spins. We ignore the other symmetric exchange interactions in J AA and J BB . The opposite sign of the DM interaction for the A and B sublattices is owing to inversion symmetry in the system. 
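The interaction tensors just listed enter the Bogoliubov matrices through sums of phase factors over the three nearest-neighbor bond vectors d_0, d_1, d_2, with the bond-dependent weights set by χ_ν. Since the explicit expressions for X_q and Y_q are not reproduced in this excerpt, the sketch below only does the generic bookkeeping: it evaluates such bond sums and splits them into their q-symmetric and q-antisymmetric parts, i.e., the h^(s) and h^(as) functions used in the general analysis; the particular weights chosen are an assumption for illustration.

```python
# Sketch of nearest-neighbor bond sums on the honeycomb lattice, split into their
# q-symmetric and q-antisymmetric parts. The bond vectors d_nu and phases chi_nu are
# those quoted in the text; the precise way such sums enter X_q and Y_q is not
# reproduced in this excerpt, so this is generic bookkeeping only.
import numpy as np

D = np.array([[1.0, 0.0], [-0.5, np.sqrt(3) / 2], [-0.5, -np.sqrt(3) / 2]])  # d_0, d_1, d_2
CHI = np.array([0.0, 2 * np.pi / 3, 4 * np.pi / 3])                           # chi_0..chi_2

def bond_sum(q, weights=None):
    """Sum_nu w_nu * exp(i q . d_nu) over the three nearest-neighbor bonds."""
    w = np.ones(3) if weights is None else weights
    return np.sum(w * np.exp(1j * D @ np.asarray(q)))

def split_sym_antisym(func, q):
    """Return the (symmetric, antisymmetric) parts of func with respect to q -> -q."""
    fp, fm = func(np.asarray(q)), func(-np.asarray(q))
    return 0.5 * (fp + fm), 0.5 * (fp - fm)

if __name__ == "__main__":
    q = (0.3, 1.1)
    sym, asym = split_sym_antisym(lambda k: bond_sum(k), q)
    print("plain bond sum:   sym", np.round(sym, 4), " antisym", np.round(asym, 4))
    sym_a, asym_a = split_sym_antisym(lambda k: bond_sum(k, np.exp(-2j * CHI)), q)
    print("chi-weighted sum: sym", np.round(sym_a, 4), " antisym", np.round(asym_a, 4))
```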
We consider the staggered antiferromagnetic state with S z A = 1 and S z B = −1, as schematically shown in Fig. 6(a). In contrast to the ferromagnetic ordering in Sec. IV A, the spin frame is required to be locally rotated according to Eq. (3) in order to use Eq. (15). After rotating the spin frame, the effective interactions corresponding to Eqs. (5)-(9) are given bỹ for the νth bond (J v AB andJ z AB do not depend on ν). Owing to the π rotation of the spin frame for the sublattice B, the bond-dependent interaction J a is transformed intoJ ⊥ AB andD AB in Eqs. (53) and (56), and the sublattice-dependent DM interaction turns into the uniform DM interaction in Eq. (57). By performing the Holstein-Primakov transformation and then the Fourier transformation, the 2 × 2 matrices X q and Y q in Eq. (11) are given by [70,78] where Result The staggered antiferromagnetic spin configuration is stabilized by supposing that J z is the dominant antiferromagnetic interaction. We take J z = 1 and J ⊥ = 0.99, respectively. The magnon dispersions in the antiferromagnetic state are shown in Figs. 6(c) and 6(d), where the Brillouin zone is shown in Fig. 6(b). The magnon spectra ω q in Fig. 6(c) are calculated for D = 0.05 and J a = 0 and those in Fig. 6(d) are for D = 0 and J a = 0.1. Similar to the result in Sec. IV A, the asymmetric modulations occur in both situations. The antisymmetric functional form is given by q y (3q 2 x −q 2 y ), as shown by the color plot in Fig. 6(b), which means that the angle dependence is expressed as sin 3φ in the limit of |q| → 0. From Eq. (15), the essential model parameters are straightforwardly computed. The lowest-order contribu-tion in terms of D is given by Meanwhile, the lowest-order contribution in terms of J a is of third-order, which is given by where we set D = 0. These results are consistent with those in Eqs. (23) and (29) in Sec. III. Similar to the ferromagnetic ordering in Sec. IV A, the result obtained from Eq. (15) gives the same functional form as that in the magnon dispersions in Figs. 6(c) and 6(d). Furthermore, the expressions in Eqs. (63) and (65) indicate the odd order of the effective DM interaction causes the asymmetric magnon dispersions as obtained in Sec. III. Model We discuss the other example of the nonreciprocal magnons in the ferrimagnetic state. We consider the up-up-down magnetic ordering in the breathing kagome lattice structure as a fundamental example. The up-updown spin configuration is shown in Fig. 7(a). The spin Hamiltonian is common to Eqs. (40) and (88) in Sec. IV A. The effective interaction tensors corresponding to Eqs. (5)-(9) are modified from those in Sec. IV A for the antiparallel spin pairs, i.e., A-C and B-C spins. The interactions are given bỹ The π rotation of the spin frame around the y axis for the C sublattice leads to the correspondence between (J ⊥ ηη ,D ηη ↔ J v ηη , J xy ηη ) and (J v ηη ,J xy ηη ↔ J ⊥ ηη , D ηη ). Then, the 3 × 3 matrices X q and Y q in the Bogoliubov Hamiltonian in momentum space are obtained as [72] where F ABq and G ABq are common to Eqs. (44) and (45), respectively. Result The up-up-down spin configuration is not simply stabilized by the spin Hamiltonian owing to the degeneracy arising from the kagome lattice structure. We here introduce the interlayer ferromagnetic exchange coupling with the coupling constant J by supposing the quasitwo-dimensional structure [72]. Then, the diagonal matrix element (X q ) ii = (0, 0, Z) in Eq. 
(75) turns into (X q ) ii = (J , J , Z + J ), which opens the gap in the magnon spectra. In the following, we fix J ⊥ = 0.9, J z = 1, and γ = 0.5. Figures 7(c) and 7(d) show the magnon dispersions under the up-up-down magnetic ordering along high symmetry lines in the Brillouin zone in Fig. 7(b). The data in Fig. 7(c) is obtained at D = 0.2, J a = 0, and J = −2 and that in Fig. 7(d) is D = 0, J a = 0.5, and J = −2.4. In contrast to the magnon dispersions in the ferromagnetic state in Sec. IV A, threefold rotational symmetry in the dispersions does not hold, which is consistent with the symmetry of the magnetic orderings. This result indicates that there is an additional angle dependence of cos φ to cos 3φ, whose behavior is schematically shown as the color plot in Fig. 7(b). We also confirm that the magnon dispersions in Figs. 7(c) and 7(d) are characterized by the above angle dependence. By evaluating F (s) q in Eq. (15), the essential model parameters are extracted. The lowest-order contribution is given as the same form of Eq. (47) except for the sign. In other words, the lowest-order contribution gives the angle dependence of cos 3φ. The other cos φ dependence is obtained by the second lowest-order contribution F (5) q . For J a = 0, F (5) q is given by where where we omit the irrelevant contributions and Thus, the additional antisymmetric modulation in the up-up-down state is given by q 5 cos φ, indicating that the modulation of cos φ affects the large q region in the Brillouin zone. Also in these cases in x − 3q 2 y ), which is the same as that in Fig. 5 Eqs. (82) and (84), the odd order of the effective DM interaction and the even order of the effective symmetric anisotropic interaction can be a source of the antisymmetric dispersions. Such q n dependence in cos φ depends on the model parameters. For example, we consider the situation where the breathing parameter for the DM interaction γ DM is different from γ, γ DM = γ [72]. In this case, the cos φ dependence appears in F where The expression in the form of the effective interaction is omitted due to its length. Owing to nonzero g 2 , i.e., γ DM = γ, F q has the contribution of q cos φ in the limit of |q| → 0, which means the linear band modulation is found in the small q region [72]. Model Finally, we discuss the nonreciprocal magnons in the noncollinear antiferromagnetic state. We consider the 120 • antiferromagnetic ordering in the breathing kagome lattice structure in Fig. 8(a). Here, we consider the situation where the horizontal mirror symmetry in the kagome plane is broken owing to the presence of polar field along the z direction, which means that the point group symmetry is lowered to C 3v . Then, the spin Hamiltonian is given by where D and J a are additional exchange interactions that arise from the horizontal mirror symmetry breaking under the polar field. The effective interactions in the rotated spin frame are given byJ where η, η = A, B, and C, and we neglectJ zx ηη andD x ηη owing to the linear spin wave approximation. The expressions are the same for the different bonds (A-B, B-C, and C-A) owing to the symmetry. The 3 × 3 matrices X q and Y q in the Bogoliubov Hamiltonian in momentum space are the same as those in Eqs. (42) and (43), respectively. Meanwhile, F ηη q , G ηη q , and Z have different forms as Result The 120 • spin configuration is obtained as a metastable state by taking the exchange model parameters as J ⊥ = 1, J z = 0.8, D = −0.2, J a = 0.5, and γ = 0.5. 
Figures 8(c) and 8(d) show the magnon dispersions under the 120 • antiferromagnetic ordering along high symmetry lines in the Brillouin zone in Fig. 8(b). The data in Fig. 8(c) is obtained at D = 0.2 and J a = 0 and that in Fig. 8(d) is at D = 0 and J a = 0.2. Although the interaction tensor under the 120 • antiferromagnetic ordering is different from that in the ferromagnetic ordering in Eq. (40), the functional form of the antisymmetric dispersions is the same with each other, which is characterized by q x (q 2 x − 3q 2 y ) satisfying threefold rotational symmetry in both cases in Figs. 8(c) and 8(d). The lowest-order contribution of F (s) q is of third order. In the case at D = 0 and J a = 0, F (3) q is given by and in the case at D = 0 and J a = 0, F (3) q is given by where we omit the expressions for the effective exchange interactions. The above results indicates that we obtain the different conditions in terms of the essential model parameters from the ferromagnetic state in Eqs. (49) and (50): The former are D and J a , while the latter are D and J a . In this way, our scheme can be applied to noncollinear antiferromagnetic orderings straightforwardly. V. SUMMARY To summarize, we have investigated the microscopic conditions for emergent nonreciprocal magnons on the basis of the model calculations. We presented the useful expression in Eqs. (14) and (15) to provide essential model parameters for nonreciprocal magnon excitations in an analytical way. The method does not require the diagonalization of the bosonic Hamiltonian. After presenting the generic results in the one-to four-sublattice cases, we tested the method to four magnetic systems: the ferromagnetic state on the breathing kagome lattice system, the staggered collinear antiferromagnetic state on the honeycomb lattice system, the up-up-down ferrimagnetic state on the breathing kagome lattice system, and the noncollinear 120 • antiferromagnetic state on the breathing kagome lattice system. We found that our scheme extracts the key model parameters, which are well consistent with the result by the direct diagonalization. The present expression can be applied to any magnetic structures including noncollinear one in the magnetic systems with any symmetric and antisymmetric bilinear exchange interactions. In particular, this method has an advantage of obtaining the analytical expressions for the essential model parameters in multisublattice systems with long-period magnetic structures that are difficult to obtain the analytical expressions of the magnon band dispersions. Moreover, the systematic analysis provides an insight to construct an effective spin model so as to include essential model parameters in real materials, where targeting materials are easily found by using magnetic structure database, MAGNDATA [89], and cluster multipole analyses [85,90], from the symmetry viewpoint. In this way, our result will not only give a deep understanding of nonreciprocal magnon excitations in noncentrosymmetric magnets, such as α-Cu 2 V 2 O 7 [91][92][93][94], but also be a good indicator to examine the microscopic origin under complicated magnetic orderings. In this Appendix, we show the lengthy expressions of H µq (µ = 1-7) in the three-sublattice case in Sec. III C and those of H µq (µ = 1-7) in the four-sublattice case in Sec. III D. For the three-sublattice case, H µq (µ = 1-7) are given by where H 3q and H 4q are obtained by replacing the superscript ⊥ in H 2q with v and xy, respectively, and multiplying −1.
Multi-Decadal to Short-Term Beach and Shoreline Mobility in a Complex River-Mouth Environment Affected by Mud From the Amazon On the 1500 km-long mud-dominated Guianas coast of South America, between the mouths of the two mega-rivers, the Amazon and the Orinoco, debouch numerous small rivers draining the humid tropical/equatorial Guiana Shield. The geomorphic development of the mouths of these rivers reflects interactions among water discharge, fluvial sediment load, and the alongshore migration of Amazon-derived mud banks alternating with inter-bank areas. The mouth of the Maroni River, astride the French Guiana-Suriname border, shows advanced estuarine infill and geomorphic development characterized by a western (downdrift) side comprising numerous recent cheniers and an eastern (updrift) side bound by an old ( > 2000 years B.P.) chenier. A multi-decadal analysis of the beach bounding this chenier shows little net overall mobility notwithstanding significant decadal to sub-decadal variation. The overall stability reflects the diversion of sand supply from the Maroni River toward the downdrift coast and limited sand supply by the smaller Mana River further east, and the south bank of which was contiguous with this beach. The variability in beach multi-decadal mobility reflects the influence, on waves, of alongshore-migrating banks (strong wave dissipation, limited beach mobility) and inter-bank areas (limited wave dissipation, larger beach mobility), highlighted by a comparison, in the current bank phase, of offshore and inshore waves. Erosion of the beach between 2011 and 2017 coincides with the sealing of the mouth of the Mana by muddy progradation in 2011 and mouth relocation several kilometers eastward. The morphodynamics of the beach and shorter-term fluctuations in budget are related to: (1) interaction with estuarine sand dunes mobilized by strong tidal currents on the adjacent shallow shoreface, (2) the influence of the Maroni channel, and (3) rapid encroachment of the leading edge of the shore-attached mud bank on the eastern part of the beach. The beach morphodynamics and evolution highlight, thus, embedded levels of influence: the Maroni at the local scale, and the net westward sediment-transport system and bank and inter-bank alternations that affect the Guianas coast at a regional scale. The recent erosion poses a threat to the local communities by reducing beach space available for recreation and turtle-nesting. INTRODUCTION The Guianas coast of South America is a unique system in the world characterized by large-scale muddy sedimentation in spite of the exposure of the coast to waves from the Atlantic ( Figure 1A). The mud is organized into a series of large banks that migrate along the coast under the influence of waves and currents, separated by "inter-bank" zones (Anthony et al., 2010). In this system, the inner part of a mud bank can weld onto the coast, creating new land that can be rapidly colonized by mangroves, whereas in inter-bank zones, higher incident wave energy can lead to intense shoreline erosion. Inter-bank erosion is mitigated, where beaches (cheniers) composed of sand and/or shells occur (Augustinus, 1978;Augustinus et al., 1989;Prost, 1989;Anthony et al., 2019). Cheniers are wave-reworked coarse-grained deposits resting stratigraphically on a muddy substrate (Otvos, 2018). 
There has been significant progress in unraveling the characteristics of mud banks (Abascal Zorrilla et al., 2018;Abascal Zorrilla, 2019) and their interaction with the Amazon-Orinoco coast (Anthony et al., 2010(Anthony et al., , 2014. Within this overwhelmingly mud-and mangrove-dominated 1500 km-long coast, cheniers are restricted in their development, and the mechanisms and pathways of sand supply responsible for their formation are still not clear because of this pervasive influence of mud. Cheniers are, nevertheless, an important resource. In addition to their role in coastal protection, they serve as zones of rural settlements and urban development, and as pathways for road and communication networks. Active cheniers assure recreational and ecological functions and services, notably by providing nesting sites for marine turtles and habitat for shorebirds and other wildlife. They also provide ready access to the sea for fishermen in this mud-dominated setting. Although sandy deposits, and locally shells, may form secondary sources reworked from inner shoreface deposits or from pre-existing cheniers attacked by erosion, the primary source of sand for the development of the cheniers on the Guianas coastal plain is the local rivers (Anthony et al., 2013(Anthony et al., , 2014. Between the two big Amazon and Orinoco Rivers occur numerous smaller rivers ( Figure 1A) reflecting the rainy tropical/equatorial catchments draining the Guiana Shield. In French Guiana, the heavy-mineral signatures of beach (chenier) sands are typical of the basement rocks drained by these local rivers (Pujos et al., 2000). The capacity of these rivers to supply sand to the coast depends on their ability to limit large-scale sedimentation of Amazon mud at their mouths. The smaller river mouths are commonly diverted by capes built from mud supplied by the Amazon (Augustinus, 1978;Plaziat and Augustinus, 2004;Gardel et al., 2019). Beaches are rare on this muddy coast and are systematically found downdrift (west to northwest) of river mouths on the east-west to southeastnorthwest trending Amazon-Orinoco coast, thus highlighting the importance of the regional longshore sand transport induced by waves from a relatively constant north to east quadrant (Figure 2). These beaches (cheniers) commonly run alongshore for a few kilometers to tens of kilometers, before petering out to give way to the more common muddy mangrove-colonized shorelines. The variability in the lengths of such cheniers reflects sand supply volume from updrift and interruptions by mudbank attachment to the shore that lead to chenier isolation inland . Coasts updrift of many river mouths are generally, thus, characterized by muddy shorelines impinging directly on the east (or southeast) bank, forming a continuous mangrove-colonized fringe that may line the estuarine reaches of the Guiana rivers for several kilometers upstream. Among the larger rivers is the Maroni (68,700 km 2 ; mean discharge: 1700 m 3 /s), between French Guiana and Suriname, characterized by a large funnel-shaped, sand-filled shallow estuary mouth, reflecting significant fluvial bedload supply ( Figure 1B). The Maroni is a good example of a river that has supplied sand for downdrift sandy chenier construction in Suriname . The east shore of the Maroni exhibits, however, a sandy beach fronting the village of Yalimapo, in contrast to the common pattern of pervasive muddy sedimentation. 
This beach was considered as one of the most important turtle-nesting sites on the Guianas coast (Peron, 2014), and still is the most important recreational beach in western French Guiana. This beach was hitherto linked to the south bank of the Mana River ( Figure 1B), a smaller river (catchment size: 12,090 km 2 ; mean discharge: 320 m 3 /s) diverted westward by a large muddy cape, Pointe Isère (Plaziat and Augustinus, 2004) that has been largely eroded over the last five decades . The gradual impingement of a large mud bank has been accompanied, in 2011, by sealing of the ancestral mouth of the Mana by mud and deflection of the present mouth of this river several kilometers east of the village of Awala, which is now completely isolated from the sea by the shore-attached remnant of Pointe Isère ( Figure 1B). In this paper, we analyze, from a combination of remote sensing and field datasets, this updrift sandy beach and show how its morphodynamics and short (seasonal) to mediumterm (multi-decadal) evolution have been modulated by the interplay of fluvial sand supply, river-mouth processes, mudbank sedimentation, updrift muddy erosion of Pointe Isère, and wave dampening. The study highlights the morphodynamic and sedimentary interactions between a sandy beach and estuarine infill. It also brings out complex multi-decadal regional shoreline change in a context associated with bank sedimentation and inter-bank erosion under the overarching influence of mud supply from the Amazon. MATERIALS AND METHODS We combined a mesoscale temporal analysis of shoreline change in the vicinity of the river mouth with a short-term approach based on field experiments conducted in 2017-2018 (Figure 3). Except where specified otherwise, all the data generated by the study were archived and analyzed using MATLAB software. Multi-Decadal Shoreline Change Multi-decadal shoreline change covering the beach ( Figure 3B) was analyzed from ten very-high resolution (0.4-1.5 m) aerial photographs covering 56 years . These included Figure 1A). three ortho-photographs and seven photographs in silverprint and digital format. This dataset was complemented for the most recent period (2011-2017) by four PLEIADE and SPOT6/7 satellite images, both with a very high resolution of 50 cm (Supplementary Table S1). The aerial photographs were georeferenced using a second-order polynomial transformation following generation of >500 ground control points (GCPs) and were then merged in order to obtain orthomosaics. As a reference, we used the Universal Transverse Mercator zone 22 North on the World Geodetic System 1984 ellipsoid. To quantify shoreline change, the limit between land and sea was first identified using beach sand vegetation, which stands out in good contrast with non-vegetated areas on French Guiana beaches (Anthony et al., 2002). This delimitation was facilitated by the high resolution of the images. Statistical analyses of shoreline variations were then conducted using the Digital Shoreline Analysis System DSAS v4.4 tool with Arcgis (Thieler et al., 2017). The analysis consisted of a total of 500 transects spaced 10 m apart and drawn perpendicular to a baseline ( Figure 3B). The accuracy of the derived shoreline changes depends on a combination of errors from image resolution (50 cm), vectoring (50 cm), and geo-referencing. 
Given the large number (10-15) of images for each date, we calculated the average of the root mean square errors derived from geo-referencing for each date, and obtained a global mean of 8 m for all dates. We then combined these three sources of error to obtain a total accuracy rounded to ±10 m. The recent period of observation (post 2011) corresponded to one of gradual impingement of a large mud bank migrating westwards toward Suriname. The leading edge of this shore-welded mud was determined from the satellite images.

Bathymetric Surveys
The estuary and the adjacent beach shoreface were surveyed with an echosounder along transects (Figure 3B). The echosounder was synchronized with RTK-GPS, with an accuracy of ±1 to 2 cm for the X, Y coordinates and ±2 to 3 cm for the Z coordinate. From the corrected data, DEMs with 20 m cells were computed using the ANUDEM interpolation (Hutchinson et al., 2011).

Beach Photogrammetric Surveys
Four photogrammetric surveys, aimed at highlighting beach morphological changes, sediment transport patterns, and short-term fluctuations in the beach sediment budget, were conducted using a motorized ultralight aircraft on May 29, June 28, September 11, and November 21, 2018 (Figure 3B). The frequency was designed to capture seasonal beach changes. In order to obtain as large a spatial coverage as possible (notably including the beach shoreface), all the surveys were conducted during spring low tides. Aerial photographs were obtained with a Sony Alpha7R camera with a 32 megapixel capacity, fixed on the wing of the motorized ultralight. Ground pixel sizes, ground distances, and numbers of photographs per survey are shown in Supplementary Table S2. The images acquired were analyzed using the Structure-from-Motion (SfM) workflow (Westoby et al., 2012), with stereo-pair alignment based on GCPs deployed on the beach. The GCPs were made of 50 × 50 cm strips of linoleum painted with a black and white checker pattern, and their centers were accurately geo-referenced with a Trimble Real Time Kinematic (RTK) DGPS. The SfM-photogrammetry workflow was operated using Agisoft Photoscan Professional software, and the dense point cloud was processed with the free software CloudCompare. The coordinates of the GCPs were used in the dense point cloud construction to obtain X, Y, Z references. We adopted the Universal Transverse Mercator zone 22 North on the World Geodetic System 1984 ellipsoid as our project coordinate system. We obtained 4 DEMs with a 0.50 cm optimal resolution, and compared them for the quantification of temporal changes in morphology and sediment budgets. In order to assess the quality of our DEMs, elevations were obtained from ground truth points (GTPs) and transects using RTK-DGPS (Supplementary Table S2). We used the same linoleum strips for the GTPs as those used for the GCPs, but the GTP and transect data were not used in the SfM workflow for DEM construction. The RTK base station was deployed on an IGN (French ordnance datum) benchmark, with an X, Y error of 5 cm and a Z error of 0.5 cm. The constructor error margin for the RTK-DGPS is ±1 to 2 cm for the X, Y coordinates and ±2 to 3 cm for the Z coordinate. The comparison of our GTPs and control transects with our DEMs highlights errors close to 5 cm for Z (Figure 3). We applied a total error margin, inclusive of the RTK-DGPS error, of 5 cm to our DEM results. Beach profiles and DEM differentials, termed differences of DEMs (DoDs), were established from the 2D and 3D data yielded by these surveys. We did not collect data on beach grain-size characteristics.
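The DoDs mentioned above reduce to a cell-by-cell difference between two DEMs; changes smaller than the combined survey error (±5 cm here) are usually masked before converting elevation change to a volumetric budget. The sketch below assumes co-registered regular grids of identical extent and a nominal cell size, both of which are placeholders rather than the study's actual grids.

```python
# Sketch of a DEM of Difference (DoD) and the associated sediment budget.
# Assumptions: the two DEMs are co-registered NumPy grids of identical shape,
# NaN marks no-data cells, and the cell size is a placeholder value.
import numpy as np

def dod_budget(dem_new, dem_old, cell_size_m, detection_threshold_m=0.05):
    """Return (dod, net_volume_m3, eroded_m3, accreted_m3)."""
    dod = dem_new - dem_old
    valid = ~np.isnan(dod)
    significant = valid & (np.abs(dod) >= detection_threshold_m)   # mask |dz| < error
    dz = np.where(significant, dod, 0.0)
    cell_area = cell_size_m ** 2
    accreted = dz[dz > 0].sum() * cell_area
    eroded = dz[dz < 0].sum() * cell_area          # negative number
    return dod, accreted + eroded, eroded, accreted

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    old = rng.normal(1.0, 0.2, size=(50, 80))       # synthetic beach surfaces
    new = old + rng.normal(-0.03, 0.08, size=old.shape)
    _, net, eroded, accreted = dod_budget(new, old, cell_size_m=0.5)
    print(f"net {net:.1f} m3, eroded {eroded:.1f} m3, accreted {accreted:.1f} m3")
```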
Peron (2014) conducted a detailed analysis of grain sizes on the beach between Awala and Yalimapo, and showed a fine to coarse sand range largely dominated by quartz (>90%). Wave and Current Measurements In order to monitor the wave conditions prevailing over the estuarine beach shoreface, two NKE-SP2T pressure sensors (S1, S2) were deployed ( Figure 3B) from 10/09/2018 to 15/09/2018 in the course of a spring-to-neap tide cycle on a transect running from the intertidal zone to subtidal shoreface-attached mud. The sensors sampled continuously at 2 Hz. Wave characteristics were evaluated using linear wave theory which is commonly employed for wind waves in shallow water. Wave spectra were calculated over bursts of 20 min using Fast Fourier transforms. A correction factor was applied with a cutoff at 0.5 Hz to account for the non-hydrostatic pressure field. For each burst, significant wave heights (Hs) and peak periods (Tp) were calculated in the spectral window [0.02; 0.5] Hz. A limit of 0.05 Hz was set between the gravity and infra-gravity wave domains. The pressure sensor accuracy is 0.02 m, and wave heights under this value were neglected. There are no offshore wave data available in the study area. We resorted to wave records (June 2016 to May 2018, but with gaps) from a buoy deployed off Cayenne, 205 km east of the mouth of the Maroni (Figure 1A), beyond the shelf zone of mud-bank influence, by CANDHIS, the French National In Situ Wave Data Archive (Centre d'Archivage National des Données de Houle In Situ 1 ). In order to gauge the influence of the Maroni estuary and tidal currents on the shoreface of the beach, we measured current velocities from 07/09/2017 to 08/09/2017. Three Nortek Aquadopp current profilers were deployed off the beach ( Figure 3B). C1 was deployed in a side-looking position 60 cm above the bed at the edge of the main Maroni channel cutting 1 http://candhis.cetmef.developpement-durable.gouv.fr/ across the large sand bank that has largely infilled the estuary. C2 and C3 were deployed further east over the estuarine sand bank in an up-looking position and 50 cm from the bed. C3 was moored at the limit between estuarine sand and the mud bank. The frequency of acquisition was 1 MHz with 30 cm bins for C1, and 2 MHz with 10 cm bins for C2 and C3. The instruments were set with a burst measurement of 60 s at 5 min intervals. Mean current directions were averaged for each vertical bin over 7.5 s and interpolated between bins. We discarded data closest to the bed (from 0 to 0.065 m) because of potential contamination by side-lobe interference. Compass precision after calibration was around ±2 • . Multi-Decadal Shoreline Change The data on the multi-decadal shoreline evolution highlight significant mobility over the 62 year period of analysis , but much of this mobility is encompassed in the aerial photographic coverage between 1955 and 1987, strongly declining thereafter ( Figure 4B1). The overall mobility over the entire period of observation exceeds 100 m at transects 105-127, but is within the error margin (±10 m) for 23% of the transects, and barely exceeds this margin (±10-20 m) for another 20% ( Figure 4C). The area of beach represented by the transects (75-230) evincing the largest mobility appears to be related to a change in the orientation of the Maroni river-mouth east bank shoreline and to be linked to a shore-attached shoal ( Figure 3B). This is a multi-decadal feature identifiable on aerial photographs and in the field at low tide. 
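The per-transect statistics reported above (net movement relative to the ±10 m accuracy, change over the 62-year window) reduce to simple arithmetic on dated shoreline-transect intersection distances. The sketch below computes net shoreline movement and an end-point rate per transect and flags transects whose change lies within the stated uncertainty; the input distances are invented, and this is not the DSAS code itself.

```python
# Sketch of per-transect shoreline change statistics (net movement, end-point rate),
# in the spirit of a DSAS-type analysis. Distances are along-transect positions of
# the vegetation-line shoreline for each survey date; the numbers below are invented.
from datetime import date

UNCERTAINTY_M = 10.0   # combined digitizing / geo-referencing accuracy (+/- 10 m)

def transect_stats(dates, distances_m):
    """Return (net_movement_m, end_point_rate_m_per_yr, within_uncertainty)."""
    years = (dates[-1] - dates[0]).days / 365.25
    net = distances_m[-1] - distances_m[0]
    return net, net / years, abs(net) <= UNCERTAINTY_M

if __name__ == "__main__":
    survey_dates = [date(1955, 1, 1), date(1987, 1, 1), date(2011, 1, 1), date(2017, 1, 1)]
    # Two hypothetical transects: one mobile, one effectively stable.
    for name, dists in {"T100": [120.0, 210.0, 190.0, 155.0],
                        "T20": [80.0, 86.0, 83.0, 78.0]}.items():
        net, epr, stable = transect_stats(survey_dates, dists)
        print(f"{name}: net {net:+.0f} m, EPR {epr:+.2f} m/yr, within uncertainty: {stable}")
```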
The latest period (2011-2017) has been dominantly characterized by erosion (83% of transects), peaking at nearly 40 m at transects 94 to 95 (Figure 4B2), and marking a clear turn-around from advance to retreat in this area. This period has also been characterized by the gradual encroachment of the front of the shore-attached mud bank on the beach, by nearly 1.5 km between 2011 and 2017 (Figure 4B2). This encroachment has resulted in increasing inland isolation of the eastern part of the beach and the village of Awala, hitherto situated on the beach, by a dense forest of Avicennia germinans mangroves (Figure 5), and the present length of exposed beach has been reduced by about 50%.

Estuarine Bathymetry and Shoreface Currents
The Maroni estuary is characterized by a wide, relatively shallow platform (Figure 6) with large areas ranging in depth from 2 to 4 m below French hydrographic datum (0 m). The platform is cut by a moderately deep (down to −10 m) and relatively straight single channel running north-northeast. The grain size (sand or mud) of the platform was mapped from the echosounder frequency signal (33 MHz = sand; 210 MHz = mud). Although much of the platform is sandy, a large area east of the channel is composed of mud, highlighting the encroachment of the leading edge of a mud bank, parts of which have welded ashore (Figure 5). The beach shoreface bathymetry highlights several features (Figure 7): (1) the proximity of the Maroni channel to the beach in areas corresponding to transects 0 to 125 (Figure 4A); (2) the shallow (−1 m below 0 datum) shoreface expression of the intertidal beach-attached shoal mentioned above (Figure 2), and shoreface deepening northeast of this feature; and (3) a second zone of more extensive shallowing (with large areas <1 m below 0 datum) corresponding to the shore-attached mud bank (Figures 7A1, A2, B). During the field experiments in 2017 and 2018, the intertidal beach-attached shoal was characterized by abundant dunes composed of medium to coarse quartz sand with broken shells (Figure 8B). The dunes migrated over a hard pavement of packed coarse sand. Based on the terminology of Ashley (1990), these dunes are medium-sized (0.5-2 m), 2D to 3D forms. They evinced net eastward migration under the influence of the strong tidal currents described below.

FIGURE 5 | Encroachment of a shore-attached mud bank on Yalimapo beach (see Figure 1B) since 2014, resulting in increasing inland isolation of the eastern part of the beach and of the hitherto beach-front village of Awala (A), and reworked shoreface mud in front of the beach (B). The mud bank is progressively colonized by an advancing front of Avicennia germinans mangroves.

The comparison of the DEMs shows that mild erosion has prevailed near the estuarine channel, although parts of the shoreface mud bank also exhibit lowering between 10/2017 and 04/2018 (Figure 7B). Overall, the innermost muddy shoreface in contact with the beach in its eastern part exhibits stability or mild accretion. The currents measured during the survey are shown in Figure 8A. The largest current speeds were measured at C1. Directions were clearly bidirectional and controlled by the tide. Speeds were high, attaining a peak of 1.7 m/s in the upper part of the water column but systematically before high water, whereas the lowest speeds (0.2-0.7 m/s) occurred at high and low water. Currents flowed toward the northwest (300–275°) during the flood and to south-southeast (100–150°) during the ebb.
C2 showed similar directions but speeds were lower. C3 was dominated by currents flowing toward the west-northwest (250°) during the flood, and the east-northeast (60°) during the ebb.

Short-Term Beach Morphodynamics and Sediment Budget

There is a marked difference in wave heights and periods between the offshore wave characteristics and the data recorded by the pressure sensors inshore (Figure 9). The largest Hs values (Figure 9A1) in winter (December to March) may correspond to northern hemisphere winter storms with relatively long-period waves (>12 s). Much of the spectrum corresponds to trade-wind waves with periods of 5 to 12 s (Figure 9A2). Inshore wave heights recorded during the field experiments in September 2018 were much lower, generally <0.25 m, and only exceeded 0.45 m (Figure 9B2) in the course of one semi-diurnal tide (Figure 9B1). The 2018 experiment showed wave periods (6-12 s) characteristic of the trade-wind wave regime, in addition to locally generated wind waves. The energy spectrum shows a dominant gravity component (Figure 9B4).

The DoDs constructed from the photogrammetric data are shown in Figure 10A, together with the corresponding sediment budget changes, which highlight net sand loss over the 8-month survey period (a schematic of this volume computation is sketched below). The three maps show overall beach sand loss in the March-June 2018 DoD, almost completely mirrored by accretion in the following June-September DoD, and a more alongshore-variable pattern in the September-November 2018 DoD (Figure 10A). The latter shows clear accretion in the eastern half of the beach where erosion had prevailed over the period 2011-2017 (Figure 4B2), accretion along much of the rest of the beach, but loss on the upper beach. This variability is also evinced by the beach profiles depicted in Figure 10B. They show a relatively steep beach, characterized by an upper beach scarp. Profile sediment loss between March and June 2018 had been partially recovered by November 2018, except for the central profile P5. Profile P2, linking the beach and the shore-attached shoal, showed significant mobility.

DISCUSSION

Beaches flanking river mouths are commonly an outgrowth of accumulation of bedload that leads to river channel infill through both lateral accretion of the channel margins and vertical sedimentation (Anthony, 2009). Lateral accretion commonly forms tidal flats that may constitute a low-tide terrace generally flanked by estuarine beaches (e.g., Jackson et al., 2002). The west bank of the Maroni River (Suriname) shows a much more prograded chenier plain than the east bank (French Guiana) on which the beach studied here is located. This difference in chenier plain growth in the vicinity of the mouth of the Maroni (Figure 1B) highlights the westward transport of sandy sediments from this river (and others), consistent with the regional wave forcing from the north to northeast quadrant (Gardel et al., 2019). It also highlights the limited supply of sand from the former diverted estuarine reach of the neighboring Mana River (Figure 1B).

FIGURE 6 | Bathymetry of the Maroni estuary showing advanced sandy infill and a single estuarine channel. Mud (enclosed area) is associated with encroachment, from the east, of a mud bank over the sandy estuarine infill.
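Returning to the DoD-based sediment budgets reported above (Figure 10A), the sketch below shows one minimal way such budgets can be obtained from two co-registered DEMs: difference the grids, discard changes smaller than a detection threshold, and sum the remaining cells into erosion, accretion, and net volumes. The grids, cell size, and threshold used here are placeholders, not the survey values.

```python
import numpy as np

CELL = 0.5     # grid cell size (m); placeholder
THRESH = 0.10  # minimum detectable elevation change (m); placeholder

# Placeholder DEMs (elevations in m) on a common grid for two survey dates
rng = np.random.default_rng(1)
dem_t1 = rng.normal(2.0, 0.3, size=(200, 400))
dem_t2 = dem_t1 + rng.normal(-0.02, 0.08, size=dem_t1.shape)

dod = dem_t2 - dem_t1                            # DEM of Difference
dod = np.where(np.abs(dod) < THRESH, 0.0, dod)   # ignore sub-threshold change

cell_area = CELL ** 2
erosion = dod[dod < 0].sum() * cell_area         # m³ (negative = loss)
accretion = dod[dod > 0].sum() * cell_area       # m³ (positive = gain)
net = erosion + accretion

print(f"erosion:    {erosion:,.0f} m³")
print(f"accretion:  {accretion:,.0f} m³")
print(f"net budget: {net:,.0f} m³")
```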
The Maroni river-mouth east-bank beach does not seem to be an outgrowth of the advanced estuarine infill of the Maroni estuary (Figure 6) but is part of an old chenier shoreline associated with the Mana River, which now exits several kilometers to the east of the Maroni (Figure 1B). This is attested by relatively old (>2000 years B.P.) optically stimulated luminescence ages obtained about 200 m behind the present beach. The distribution of the chenier ages and the presence of Pre-Columbian settlements in Yalimapo (Gérard Collomb, pers. comm., June 2019) suggest relative stability of the beach over a long timescale. The short-term data (2018 field surveys) also show a fluctuating pattern of accretion and erosion (Figures 7, 10), although the net negative sediment budgets are consistent with the recent prevailing erosional trend (2011-2017). The two points that can be drawn from this are: (1) the extent to which both the meso-scale and short-term patterns of evolution of the beach are controlled by the template set by mud banks impinging on the Maroni estuary, and (2) the mechanisms that regulate beach adjustment to this template. The implications of the recent eastward shift of the Mana outlet will be discussed later.

The multi-decadal mobility of the beach showed relatively large variability between 1955 and 1999, followed by more muted changes since. The large variability between 1955 and 1999 mirrored significant shoreline changes east of Yalimapo associated with the almost total demise of Pointe Isère (Figure 11), the large mud cape that had diverted the Mana River westwards from at least the 19th Century (Plaziat and Augustinus, 2004) up to 2011. This erosion included pulses of reworking of inland cheniers behind Pointe Isère in 1972, 1979, and 2001-2011, as the muddy cape was progressively eroded, with some of the reworked sand from Pointe Isère transported toward the mouth of the Maroni River (Figure 11B).

FIGURE 8 | Currents measured near the beach (see Figure 2) over several tidal cycles (A) and photographs of bedforms (2D to 3D dunes) on the shoreface adjacent to the beach (B).

It is interesting to note that erosion has been prevalent over much of the beach in the last few years (2011-2017, Figure 4B2), and in 2018 (Figure 10), generating anxiety regarding loss of land, threats to beach tourism, and diminishing space for nesting marine turtles in Yalimapo. However, given the large past shoreline mobility, prediction of the time frame over which erosion will prevail is hazardous. The reasons for the currently prevalent erosion likely reside in modifications of alongshore sand transport on the beach generated by impingement of the mud bank on the eastern part of the beach (Figures 4B2, 6, 10, 11C). This is an outgrowth of the multi-decadal demise of Pointe Isère. Jolivet et al. (2019) documented the sealing of the former mouth of the Mana and the marked eastward shift of the present mouth of the river as a result of the shore-attachment of the bank east of Yalimapo, thus depriving the beach of potential sand supply by this river. These mesoscale changes reflect the utility of adopting a more regional, large-scale (big picture) approach in order to better elucidate local shoreline change.
Overall shoreline change has been most important in the zone where the east bank of the Maroni estuary joins the beach, between transects 75 and 220 (Figure 4). A likely explanation for this local variability is mobility of the main Maroni estuarine channel in adjustment to pervasive mud supply and attendant changes in hydrodynamics induced by mud banks. This is reflected in the extremely high turbidity values within the channel resulting from the impingement of the present large mud bank (Sottolichio et al., 2018). Transects 0 to 200 correspond to the most constricted part of the estuary, where the main Maroni estuarine channel is in direct contact with the intertidal beach (Figure 7). In addition to (or as a result of) this channel proximity, much of the shoreline mobility in this sector has been engendered by the migration of the beach-attached shoal identified from both ground and aerial photographs (not shown here). The formation of this shoal was probably initially due to expansion of the fluvial-tidal hydraulic jet of the Maroni with widening of the estuary where the beach changes its orientation eastward, to form part of the south bank of the hitherto diverted Mana River (Figure 1B). Sand shoal formation of this type is akin to that of a flying spit (Zenkovich, 1967), or a banner bank (Dyer and Huntley, 1999), although the latter type of bank is generally associated with headlands.

The short-term data further highlight this mesoscale variability and show the seasonal cut-and-fill signature typical of the trade wind-wave-influenced French Guiana beaches during inter-bank phases (Dolique and Anthony, 2005), when mud banks do not affect the coast, or of many other tropical beaches with a well-defined seasonal trade-wind or monsoon wave regime (e.g., Tamura et al., 2010; Pereira et al., 2016; Anthony et al., 2017; Arrifin et al., 2018). The beach underwent significant sand loss (Figure 10; −14,579 m³) in response to the high waves that prevailed between March and June 2018 (Figure 9A). Filtering of wave energy does occur, as shown by the pronounced difference between, on the one hand, both the modeled offshore wave data provided by the WWIII model (Figure 2) and the CANDHIS buoy data (Figure 9A), and, on the other, the data recorded by the pressure sensors near the beach (Figure 9B). Filtering is caused by the mud banks migrating alongshore but also by the large estuarine sand bank that is infilling the mouth of the Maroni. This filtering effect is proportional to wave energy, and large waves succeed in impacting the beach, as seen from field observations, especially during large spring tides. The period of high waves between March and June 2018 (Figures 2, 9A) also included equinoctial spring tides during which high-tide dissipation was less. The estuarine sheltering effect from exposure to waves, which commonly results in short-fetch waves, is an overarching influence of estuaries on beach morphodynamics (Jackson et al., 2002).

FIGURE 9 | Hydrodynamic conditions during the course of the field experiments. From top to bottom: offshore significant wave heights (A1) and periods (A2) measured by the CANDHIS buoy offshore of Cayenne (see Figure 1A); water levels measured by the pressure sensors (B1); inshore wave heights (B2) and periods (B3) measured by the pressure sensors (see Figure 2); and examples of bursts of wave energy spectral density (B4).
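To make the offshore-inshore wave comparison concrete, the sketch below outlines burst processing of the kind described in the Methods: a 20-min, 2 Hz pressure record is converted to a surface-elevation spectrum using the linear-wave-theory pressure response factor with a 0.5 Hz cutoff, and Hs, Tp, and the infragravity share of the variance are computed in the [0.02, 0.5] Hz window. The synthetic record, water depth, sensor height, and function names are illustrative assumptions, not the authors' processing chain.

```python
import numpy as np
from scipy import signal

G = 9.81
FS = 2.0                 # sampling frequency (Hz)
BURST = 20 * 60          # burst length (s)
H = 1.5                  # water depth at the sensor (m); placeholder value
ZM = 0.1                 # sensor height above the bed (m); placeholder value
FMIN, FMAX = 0.02, 0.5   # spectral analysis window (Hz)
F_IG = 0.05              # gravity / infragravity split (Hz)

def wavenumber(f, h, g=G):
    """Wavenumber from the linear dispersion relation (2*pi*f)**2 = g*k*tanh(k*h),
    using Eckart's explicit estimate refined by Newton iterations."""
    sigma = 2.0 * np.pi * np.asarray(f, dtype=float)
    k = sigma**2 / (g * np.sqrt(np.tanh(sigma**2 * h / g)))
    for _ in range(10):
        th = np.tanh(k * h)
        k -= (g * k * th - sigma**2) / (g * th + g * k * h * (1.0 - th**2))
    return k

# Synthetic pressure-head record (m of water above the sensor) standing in for
# one measured burst: mean depth plus an 8 s swell and instrument noise.
t = np.arange(0.0, BURST, 1.0 / FS)
rng = np.random.default_rng(2)
head = (H - ZM) + 0.10 * np.sin(2.0 * np.pi * t / 8.0) + 0.02 * rng.standard_normal(t.size)

f, s_p = signal.welch(head - head.mean(), fs=FS, nperseg=512)
keep = (f >= FMIN) & (f <= FMAX)              # apply the [0.02, 0.5] Hz window
f, s_p = f[keep], s_p[keep]

k = wavenumber(f, H)
kp = np.cosh(k * ZM) / np.cosh(k * H)         # pressure response factor
s_eta = s_p / kp**2                           # surface-elevation spectrum

df = f[1] - f[0]
m0 = s_eta.sum() * df                         # zeroth spectral moment
hs = 4.0 * np.sqrt(m0)                        # significant wave height (m)
tp = 1.0 / f[np.argmax(s_eta)]                # peak period (s)
ig_share = s_eta[f < F_IG].sum() * df / m0    # infragravity fraction of variance

print(f"Hs = {hs:.2f} m, Tp = {tp:.1f} s, infragravity share = {ig_share:.0%}")
```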
Wave measurements conducted on other beaches in the Guianas, both in the vicinity of river mouths and well away from these river mouths, show that incident wave energy is strongly modulated temporally and alongshore by the presence of banks (low waves) or their absence (high waves associated with inter-bank phases), resulting in large short- to mesoscale (>decade) shoreline mobility (Anthony et al., 2002). It is not clear from the available data whether this meso-scale wave variability has also been the case on the beach at Yalimapo, but the commonality of this pattern on various other beaches in the Guianas suggests this likelihood, which is also reflected in the multi-decadal variability in beach mobility. As shown in Figure 10, short-term beach morphological change has been quite variable notwithstanding the ambient low wave-energy context, possibly reflecting a further influence of: (1) progressive impingement of shore-attached mud on the eastern part of the beach (Figure 10A), (2) sand exchanges between the estuarine bank and the beach, notably in the shore-attached shoal area (Profile P2, Figure 10B), and (3) antecedent beach profile characteristics feeding back on morphological change.

Estuarine beaches are commonly characterized by a relatively wide, low-gradient terrace, or "low-tide terrace," across which sediment mobilization may occur under relatively high-energy (wave) conditions (Jackson et al., 2002). These beaches also commonly exhibit the following features: (1) longshore and transverse bars and biogenic features; (2) swash bars; (3) vegetation and wrack accumulations on the intertidal foreshore; (4) pebbles and/or shells; and (5) small aeolian dunes. On the Maroni mouth beach, the low-tide terrace corresponds to the adjoining estuarine sand bank (transects 175-500), but the features and patterns of sediment mobilization prevailing on this beach do not correspond to those enumerated above. Low wind speeds, high ambient humidity, and rapid growth and spread of creeping grasses on the backshore considerably limit aeolian activity on the beaches of the Guianas (Anthony et al., 2014). Sediment mobility on the beach is regulated by strong alongshore tidal currents (Figure 8A) that generate important tidal bedforms, notably the 2D-3D hydraulic dunes (Figure 8B). The beach, which is also devoid of aeolian activity, clearly interacts with the strong tidal currents, which are typical of a standing tide (notably at C1), and with these bedforms, especially in the shoal-attached area. The high current speeds during the rising tide associated with such a standing tide act not only on the bedforms of the shallow shoreface but also on the adjacent beach face. The estuarine sand bank serves as both a sand source and a sink, resulting in the pulses of accretion and erosion evinced by the beach, which were further modulated, up to 2011, by limited sand supply by the Mana River. This relationship is, thus, mediated by the energetic tidal currents and by dissipation of waves over the terrace. Jackson and Nordstrom (1992) showed that under the extremely dissipative conditions prevailing on the low-tide terrace, beach profile change becomes restricted to the steep foreshore. This appears to be the case in the study area, where the upper beach tends to show a scarped reflective profile (Figure 10B).
Tidal modulation of waves has been shown to be an important component of the hydrodynamics of Guiana beaches, generated by ambient permanent shoreface mud during both bank and inter-bank phases. Field measurements at Yalimapo clearly highlight this tidal modulation and show that high-tide waves comprise a mix of trade-wind waves (6-12 s) and short-fetch waves generated by local sea breezes, which produce choppy conditions with periods that do not exceed 5 s (Figure 9). Field observations further show that waves are only active on the upper beach, resulting in local high-tide scarping (Figure 10B). Spring tides no doubt reinforce the high-tide scarping and low-tide dissipation. Under these erosive conditions, downslope transfer of beach sand to the estuarine sand bank is assured by the prevalence of steep reflective beach profiles, which function in a positive short-term morphodynamic feedback loop that maintains the profile steepness and recession seen in the short-term (2018) data for parts of the beach (Figure 10B).

The timescales and relationships involved in the multi-decadal evolution and short-term morphodynamics of the beach at Yalimapo, French Guiana, are conceptualized in Figure 12. The net westward sediment transport system and the bank and inter-bank alternations that affect the Guianas coast at a regional scale influence both the beach and the dynamics of the mouth of the Maroni River. The latter further influences variations in beach morphology and stability through estuarine channel mobility, dissipation of waves by the large estuarine sand sink that is infilling this river mouth, and tidal currents and bedforms.

FIGURE 12 | Conceptual model of multi-decadal to short-term changes at Yalimapo beach, French Guiana. The morphology and dynamics of the beach are influenced, at embedded spatial (regional to local) and temporal (multi-decadal to daily) timescales, by the relationship and interactions among various sediment bodies and forces. These are the regional mud banks migrating under wave and wind influence at >decadal to multi-decadal timescales westward from the Amazon to the Orinoco, wave dissipation by these banks and by the large estuarine sand bank infilling the mouth of the Maroni River, and, at a timescale of several years, mud intrusion into the mouth of the Maroni and local welding of mud released by the banks onto the beach. Seasonal modulation of wave activity, the local influence of the main Maroni estuarine channel, strong tidal currents on the inner shoreface adjacent to the beach, and sand exchanges, at fortnightly tidal to semi-daily scales, between beach and inner-shoreface tidal bedforms, are reflected in beach profile and alongshore mobility at the local scale.

It is important to note that the erosion that has prevailed along much of the beach in the last few years (2011-2017, Figure 4B2), and in 2018 (Figure 10), is having deleterious ecological and sociological effects in the coastal settlements of Awala and Yalimapo (Figure 1) (population in 2018: 1400). The beach was, up to the 1990s, one of the most important turtle-nesting sites on the South American coast, and as the available beach space has receded due to mud encroachment, the number of turtle landings has diminished drastically (Peron, 2014). The villages are inhabited by indigenous Kali'na populations that became sedentary in the late 1940s.
Risks posed by shoreline mobility in the Guianas prior to this sedentary lifestyle were accommodated by population mobility to safer areas. Present fears in reaction to the on-going coastal changes concern loss of land, large-scale mangrove development, and threats to beach recreation and to revenue from tourism related to nesting marine turtles. Prior to encroachment of the bank that has welded onshore, Awala had access to the sandy beach and benefited, like Yalimapo, from the advantages of beach-related recreation and tourism. This has been the only accessible sandy beach in western coastal French Guiana (total population in 2018: 60,000), an area of very rapid demographic growth. Much of this tourism is related to turtle-watching during the nesting season. Because the beach at Awala is now completely mud-bound, the village is presently deprived of revenue from tourism, and direct access to the sea for fishermen has been shut off by a wide mangrove front (Figure 5A). The decrease in the number of turtle landings is also depriving neighboring Yalimapo (Figure 5A), now the only available beach spot in western French Guiana, of much-needed tourist revenue. Given the large past shoreline mobility, prediction of the time frame over which erosion will continue to prevail is hazardous.

CONCLUSION

The multi-decadal evolution and short-term morphodynamics of the beach at Yalimapo, French Guiana, illustrate the influence of the dynamics of the mouth of the Maroni River at the local scale, and of the net westward sediment transport system and bank and inter-bank alternations that affect the Guianas coast at a regional scale. The influence of the Maroni is reflected in variations in beach morphology and stability induced by channel dynamics, tidal currents and bedforms, and dissipation of waves by the large estuarine sand sink that is infilling this river mouth. The bank and inter-bank fluctuations have determined multi-decadal beach mobility through their influence on the wave regime and by mediating sand supply alongshore. These two scales of analysis show how local changes are embedded in, and overprinted by, the pervasive influence of changing mud-bank and inter-bank phases alongshore, a hallmark of the 1500 km-long Guianas shoreline. Changes resulting from these controls include complete isolation of the village of Awala from the sea by mangrove-colonized mud, with disastrous consequences for tourism, for fishing access to the sea, and for beach recreation. The shortening of the beach as a result of mud encroachment now poses a threat to the village of Yalimapo, the only site in western French Guiana with a still-available sandy beach. Reduction of the available beach space is having negative ecological and socio-economic consequences. This reduction is detrimental to turtle landings on the beach for nesting, the number of which has diminished dramatically over the last decade, and to the income generated by beach tourism geared toward watching turtles nest. It is also harmful to beach recreation and to fishing.

DATA AVAILABILITY

All datasets generated for this study are included in the manuscript/Supplementary Files.