CXCR2 Blockade Influences Anaplasma phagocytophilum Propagation but Not Histopathology in the Mouse Model of Human Granulocytic Anaplasmosis

ABSTRACT

Anaplasma phagocytophilum is an obligate intracellular bacterium that infects neutrophils and causes human granulocytic anaplasmosis. Infection induces neutrophil secretion of interleukin-8 or its murine homologs and perpetuates infection by recruiting susceptible neutrophils. We hypothesized that antibody blockade of CXCR2 would decrease the A. phagocytophilum tissue load by interrupting neutrophil recruitment but would not influence murine hepatic pathology. C3H-scid mice were treated with CXCR2 antiserum or control serum prior to or on day 14 after infection. Quantitative PCR and immunohistochemistry for A. phagocytophilum were performed, and the severity of liver histopathology was ranked. Control mice had more infected cells in tissues than the anti-CXCR2-treated group. The histopathological rank did not differ between treated and control animals. Infected cells of control mice clustered in tissue more than those of treated mice. The results support the hypothesis of bacterial propagation through chemokine induction and confirm that tissue injury is unrelated to the A. phagocytophilum tissue load.

MATERIALS AND METHODS

C3H-scid (SCID) mice were purchased from the Frederick Cancer Research Center (Frederick, Md.) and from Jackson Laboratory (Bar Harbor, Maine). C3H/HeJ mice were obtained from Jackson Laboratory. Procedures involving mice were approved in a protocol by the Institutional Animal Care and Use Committee at The Johns Hopkins University School of Medicine.

The initial experimental design involved two groups of mice under two treatments to test the hypothesis that CXCR2 receptor antibody administration diminishes infected-neutrophil tissue infiltration but not histopathology (1). For the initial experiments, A. phagocytophilum (NCH-1) was cultured in HL-60 cells, and 2 × 10^6 infected cells in 1 ml of RPMI 1640 medium were injected intraperitoneally into SCID mice. The challenge inoculum was 100 µl of A. phagocytophilum-infected SCID mouse blood (5); when pooled, 8% of neutrophils contained morulae, for an approximate inoculum of 2.4 × 10^4 infected neutrophils. For confirmatory experiments, the A. phagocytophilum Webster strain was propagated in HL-60 cells, and when >50% of the cells were infected, SCID and C3H/HeJ mice were inoculated intraperitoneally with 500 µl containing 10^6 infected cells.

For the initial experiments, mice were assigned to one of two groups, prevention or treatment, each with experimental and control subgroups. The experimental design was intended to evaluate (i) the ability to prevent new infection (prevention) or (ii) the potential for inhibiting the propagation of established infection (treatment). Each experimental and control group consisted of three mice. Mice in the prevention group were treated with 500 µl of goat anti-murine CXCR2 (MIP-2/KC receptor) antiserum (a kind gift of R. M. Strieter, School of Medicine, University of California-Los Angeles) by intraperitoneal injection 2 h before challenge with A. phagocytophilum, whereas controls received 500 µl of normal goat serum (NGS); antiserum or NGS was given 2 h before intraperitoneal challenge (day 0) and on days 2 and 4, followed by necropsy on day 5. The treatment group was challenged on day 0, given the CXCR2 antiserum or NGS on day 14 and then four times at 36-h intervals, and necropsied on day 19 (1).
The interval of antibody administration was based upon prior studies that showed blockade of CXCR2 for at least 48 h (11). Confirmatory experiments were similarly designed, except that four mice were used per group, an additional group of C3H/HeJ mice was tested, the treatment group was excluded, and necropsy was conducted on day 7 (10).

After necropsy, tissues from SCID and C3H/HeJ mice were formalin fixed and paraffin embedded before sections (5 µm) were prepared. Slides were stained with hematoxylin and eosin (H&E) or examined by immunohistochemistry (IHC) using polyclonal rabbit anti-A. phagocytophilum antiserum (6, 9). Because of the ambiguity in detecting splenic inflammation, histopathologic changes were assessed in the liver only and included evaluation of the size and density of lesions, the cellular content of inflammatory infiltrates, the degree of necrosis and/or apoptosis, and the number of inflammatory foci. A quantitative assessment of the degree of histopathologic findings was performed by ranking the severity of hepatic lesions, where severe lesions received a higher rank (9). All evaluations were performed with the investigators blinded to mouse treatment status and were conducted separately by two different microscopists (D.G.S. and J.S.D.).

The area and volume of tissue were calculated by image analysis (Scion Image; Scion Corporation, Frederick, Md.). The total number of infected neutrophils was counted and tabulated per volume of tissue. Infected neutrophils were also identified by location in splenic tissues by using paired x and y coordinates. Cluster analysis was performed by using a computer algorithm that evaluated the distance between the x and y coordinates recorded for each infected cell on a single slide. A cluster was defined as three or more cells separated by no more than 200 or 500 µm (200 µm was chosen as the minimum sensitive distance assessed by microscopy, and 500 µm was chosen as the maximum distance assessed per high-power field). The clustering metric is defined as the proportion of clustered cells relative to the total number of cells. The 200- and 500-µm threshold values were chosen to assess the sensitivity of the clustering metric and to account for the natural clustering that would occur with dissemination of bacteria from a single infected focus.
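The clustering metric just defined, together with the randomization comparison used later for significance testing, can be made concrete with a short script. The sketch below is one plausible reading of the algorithm, assuming single-linkage grouping (cells join a cluster when they are within the distance threshold of any member); the function names, example coordinates, and uniform random placement are illustrative, not taken from the original analysis.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components
from scipy.spatial.distance import pdist, squareform
from scipy.stats import chisquare

def clustering_metric(xy, threshold_um):
    """Proportion of cells belonging to clusters of >= 3 cells.

    xy: (n, 2) array of infected-cell coordinates in micrometers.
    threshold_um: maximum separation (200 or 500 um in the study).
    """
    dist = squareform(pdist(xy))
    # Link cells separated by no more than the threshold, then treat
    # connected components as clusters (single-linkage reading).
    adj = csr_matrix((dist <= threshold_um) & (dist > 0))
    _, labels = connected_components(adj, directed=False)
    sizes = np.bincount(labels)
    clustered = np.isin(labels, np.where(sizes >= 3)[0])
    return clustered.mean()

rng = np.random.default_rng(0)
observed_xy = rng.uniform(0, 10_000, size=(60, 2))  # placeholder data

# Cell number-matched random distribution over the same extent, standing
# in for the tissue-matched randomizations used in the study.
random_xy = rng.uniform(0, 10_000, size=(60, 2))

n = len(observed_xy)
obs_k = round(clustering_metric(observed_xy, 200) * n)
exp_k = round(clustering_metric(random_xy, 200) * n)

# Chi-square goodness of fit of observed clustered/unclustered counts
# against the counts expected from the randomized distribution.
if 0 < exp_k < n:  # chisquare needs nonzero expected counts
    stat, p = chisquare(f_obs=[obs_k, n - obs_k], f_exp=[exp_k, n - exp_k])
    print(f"chi2 = {stat:.2f}, p = {p:.3f}")
```

In practice one would average the expected counts over many random draws rather than a single one; a single draw is shown here only to keep the sketch short.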
To demonstrate changes in bacterial tissue and blood load, a quantitative PCR based upon amplification of the multicopy msp2 gene was performed in confirmatory experiments. In brief, EDTA-anticoagulated blood obtained by intracardiac puncture and splenic tissue were acquired. DNA was prepared from approximately 200 µl of blood and 10 mg of splenic tissue by using a DNA minikit (QIAGEN, Valencia, Calif.). DNA concentrations were measured by PicoGreen DNA assay (Molecular Probes, Eugene, Oreg.). Quantitative PCR was conducted in an ABI Biosystems (Foster City, Calif.) TaqMan 7700 instrument, based on a method modified from that of Martin et al. (10), using the msp2 primers msp2con33f (5′-GAAGATGAWGCTGATACAGTA-3′) and msp2con151r (5′-CAACHGCCTTAGCAAACT-3′) and the TaqMan probe msp2con86p (5′-TTATCAGTCTGTCCAGTAACA-3′) labeled with 5′ FAM and 3′ TAMRA. Preliminary experiments demonstrated the ability to detect as few as 1 infected cell/µl (data not shown). Blood samples were tested in duplicate, whereas splenic tissue samples were tested in duplicate twice to confirm repeatability. Results were expressed as the quantity of infected cells per microliter of blood or per milligram of spleen.

Since few infected cells could be detected in the liver in initial immunohistochemical and PCR studies (data not shown), the spleen and blood were assayed for bacterial quantitation of the tissue-marginated and circulating pools of infected neutrophils, respectively. Mice in confirmatory experiments were subjected to a battery of clinical chemistry tests on sera, in which specific attention was given to the assessment of hepatocyte injury (alanine aminotransferase, aspartate aminotransferase, alkaline phosphatase, gamma glutamyltransferase, total bilirubin, and albumin; all from Antech Diagnostics, Lake Success, N.Y.) as correlates of histopathologic findings.

Statistical analyses of hepatic rank, immunohistochemical quantitation of infected cells, and quantitative PCR were performed by using paired Wilcoxon tests for nonparametric data, since questions regarding the normal distribution of these data existed. The P values for the Wilcoxon tests were calculated with the one-way test chi-square approximation. The significance of differences in clinical chemistry tests was determined with Student's t test. Cluster analysis significance was determined by the chi-square goodness-of-fit test comparing experimental results with tissue- and cell number-matched, computer-generated randomized cell distributions. The correlation between infected-cell quantities and histopathologic rank was assessed by using Pearson's coefficient of correlation. P values of less than 0.05 were considered significant.

Initial experiments. Histopathology rank analysis and IHC comparing the two groups of mice (prevention and treatment) and controls are summarized in Fig. 1. Overall, the hepatic histopathology rank was worse for the prevention group, which was examined at day 5, than for the treatment group, which was examined at day 19 postinfection (P < 0.004 [Wilcoxon test]). When the histopathologic ranks of anti-CXCR2- and control antibody-treated mice were compared, no significant differences were noted in either the prevention (P = 0.83) or treatment (P = 0.51) group. When animals in both the prevention and treatment groups were considered together, more infected cells were observed by IHC in the spleen than in the liver and in mice administered NGS compared to antibody-treated mice, although the differences were not significant (P = 0.08 [Wilcoxon test]). However, IHC showed significantly more infected … (Fig. 2).

[FIG. 1 caption: Higher ranks correspond to higher degrees of severity. Histopathologic rank did not differ significantly between anti-CXCR2-treated and NGS control antibody-treated animals in the prevention and treatment groups. P values for differences in splenic organism load between the prevention and treatment groups are shown. Error bars indicate standard deviations. P values (paired Wilcoxon test for nonparametric data) between columns compare infected-cell load (black type) and histopathology ranks (gray type). Experiments were done in duplicate. Shown are the results of the initial experiment; confirmatory experiment data were similar.]

Confirmatory experiments. In confirmatory experiments investigating prevention only (antibody administered before infection), quantitative PCR revealed significantly fewer infected cells in the blood and spleens of antibody-treated mice than in those of NGS-treated mice (P = 0.0209 [Wilcoxon test]); however, this difference was not detected as significant by IHC (P = 0.2482 [Wilcoxon test]).
Antibody-treated C3H/HeJ mice had fewer infected cells than NGS-treated control mice, a difference that approached significance by IHC (P = 0.066 [Wilcoxon test]) but was not evident with quantitative PCR (P = 0.29 for blood and P = 0.48 for spleen [Wilcoxon test]). No differences in histopathologic ranking were detected between groups, regardless of whether the mouse strain was C3H/HeJ or SCID, confirming the initial experiments. When data were pooled with those of the prevention group from the initial experiments, a nearly significant difference was detected by IHC, with more infected cells in splenic tissues of NGS-treated mice than of CXCR2 antibody-treated mice. Figure 3 shows the pooled data of the initial and confirmatory experiments and demonstrates the between-experiment reproducibility and the lack of association between histopathologic rank and immunohistochemical bacterial load.

Cluster analysis. Clustering of infected cells was found to occur at two levels. When a nonstringent cluster definition (500 µm between cells) was used, a nonrandom distribution was obtained in both the treatment and prevention groups and regardless of anti-CXCR2 treatment (chi-square test; P < 0.001). When a stringent 200-µm cluster definition was used, mice examined 5 days after infection with anti-CXCR2 treatment showed no significant difference in the clustering of infected cells compared to clustering based on a random distribution (P = 1.0). In contrast, control antibody-treated mice still demonstrated significant clustering (P < 0.001). Figure 4 illustrates the distribution and clustering of infected cells (Fig. 4A and B) compared with one randomly generated distribution.

Clinical chemistry. In the confirmatory experiments, values for alanine aminotransferase, alkaline phosphatase, gamma glutamyltransferase, and total bilirubin were within normal limits for all animals, and no significant differences were detected between NGS-treated control mice and anti-CXCR2-treated mice in either the C3H or the SCID mouse group (P ≥ 0.45). Levels of aspartate aminotransferase were elevated in six of seven NGS control antibody-treated and seven of eight anti-CXCR2-treated animals, but the values were not statistically different (P = 0.10 for SCID and P = 0.75 for C3H).

DISCUSSION

The neutrophil is the host cell for A. phagocytophilum. Thus, modifications in neutrophil function due to infection could ultimately benefit bacterial survival. Chemokines are key products of human and mouse neutrophils. In particular, interleukin-8 in humans and macrophage inflammatory protein 2 (MIP-2) and KC in mice mediate inflammatory cell recruitment via binding to CXCRs (15) on neutrophils. Other chemokines and cytokines, such as MIP-1α, MIP-1β, monocyte chemoattractant protein 1, and RANTES, have also been suggested to be important in A. phagocytophilum pathogenesis (8). Klein et al. speculated that chemokines produced after infection of HL-60 cells with A. phagocytophilum played a role in the recruitment of other susceptible cells and the enhancement of phagocytosis (8). Analysis of CXCR2 antibody blockade of MIP-2 and KC stimulation in A. phagocytophilum-infected mice further confirms the speculation of Klein and coworkers (8) by demonstrating a decrease in the quantity of infected cells in the blood of CXCR2 knockout mice (1). This survival tactic appears to be an important dissemination mechanism for A. phagocytophilum and for other intracellular and extracellular bacteria such as Salmonella spp. and Borrelia burgdorferi (12, 14).
Of interest, immunohistologic observations suggested clustering of infected cells in the tissues of animals not treated with anti-CXCR2, a finding consistent with the role of interleukin-8 in the propagation of A. phagocytophilum. A low level of clustering (within 500 µm) was observed regardless of whether CXCR2 was blocked, consistent with focal release of bacteria from infected cells. However, with functional CXC chemokine ligand activity in animals that did not receive CXCR2 antibody, a higher degree of clustering (within 200 µm) was observed, confirming the important role of CXC chemokines in the propagation of A. phagocytophilum in vivo.

The key unresolved question is whether the increased quantities of A. phagocytophilum that result from induced chemokine production by infected neutrophils contribute to disease or virulence. Martin et al. have presented data that demonstrate a role for immunopathologic injury, regardless of A. phagocytophilum quantity, as the most important aspect in the development of histopathologic lesions in the murine HGA model (9, 10). Such data suggest that a basal quantity of A. phagocytophilum triggers a poorly regulated host immune and inflammatory response. Furthermore, the data suggest that an increasing quantity of A. phagocytophilum in blood or tissues is irrelevant to subsequent histopathology and disease manifestations. The data presented here strongly support this hypothesis, as CXC chemokine-induced recruitment of host cells resulted in increased A. phagocytophilum loads but did not alter the degree of tissue histopathology, a conclusion further confirmed by the lack of significant increases in serum markers of hepatocyte injury or hepatic inflammation.

These findings could have implications for the course of disease and for treatment options. Therapeutic strategies that target the inhibition of inflammation, not typically considered for infectious diseases, may be an option for the treatment of HGA. One could speculate that the use of anti-inflammatory agents or corticosteroids would inhibit disease, while concurrent use of specific antimicrobial agents would help to control the infection. Quan et al. (13) have suggested that limiting neutrophil chemotaxis and migration in inflammation by the use of chemokine receptor antagonists or monoclonal antibodies may be another treatment alternative for infectious diseases with prominent inflammation. However, before other potential treatment options for HGA are evaluated, further studies will be needed to delineate the mechanisms of inflammation, immune induction, and tissue injury.

FIG. 4. A. phagocytophilum-infected neutrophils cluster in tissues of infected SCID mice compared to randomly generated distributions of infected cells. A CXCR2 antibody-treated SCID mouse (A to C) and a control antibody-treated SCID mouse (D to F) are shown. (A and D) Low-resolution H&E-stained spleens from infected SCID mice; (B and E) the splenic outlines superimposed with the coordinates of all infected cells identified by IHC; (C and F) coordinates of a computer-generated random distribution based upon the number of infected cells detected in each mouse. Note the relative lack of clustering of the random distributions compared with the experimentally generated data, and the more intense clustering obtained in the absence of CXCR2 blockade (D to F). Similar maps were generated for each spleen examined in the initial experiments only.
Association between Proton Pump Inhibitors and Hearing Impairment: A Nested Case-Control Study

This study investigated the association of previous use of proton pump inhibitors (PPIs) with the rate of hearing impairment. The ≥40-year-old population in the Korean National Health Insurance Service-Health Screening Cohort was enrolled. The 6626 registered hearing-impaired patients were matched with 508,240 control participants for age, sex, income, region of residence, and index date (date of hearing impairment diagnosis). Prescription histories of PPIs were collected for the 2 years before the index date. The odds ratios of the duration of PPI use for hearing impairment were analyzed using conditional logistic regression. Subgroups by age/sex and by severity of hearing impairment were additionally analyzed for the relation of PPI use with hearing impairment. PPI use for 30-365 days was associated with 1.65-times higher odds of hearing impairment (95% confidence interval (CI) = 1.47-1.86). PPI use for ≥365 days was also related to 1.52-times higher odds of hearing impairment (95% CI = 1.35-1.72, p < 0.001). All age and sex subgroups demonstrated a positive association between PPI use and hearing impairment. Severe hearing impairment showed consistently higher odds of a relation with PPI use. PPI use was associated with an increased rate of hearing impairment.

Introduction

Proton pump inhibitors (PPIs) have been widely used for the treatment of several acid-related disorders, such as gastroesophageal reflux disease (GERD), gastric ulcer, duodenal ulcer, erosive esophagitis, and laryngeal reflux disease [1,2]. PPIs act as antacids by irreversibly inhibiting the H+/K+ ATPase [3]. Because PPIs have been shown to have superior remedial effects compared with other antacids, such as histamine-2 receptor antagonists (H2RAs) [4], the prevalence of PPI prescriptions has reached approximately 2.9-7.8% in the middle-aged adult population in the US [5]. However, an increasing number of researchers have suggested the possibility of adverse effects of PPIs [6,7]. In addition to mild side effects, such as diarrhea, nausea, vomiting, and headache, several recent studies have reported electrolyte imbalance [6], kidney injury [7], and dementia [8] as possible side effects. PPIs modify pH, which may impede protease and lysosomal activities. These protease and lysosomal dysfunctions could result in beta-amyloid accumulation and neural degeneration [8]. Additionally, a retrospective study of adverse event reports demonstrated a higher rate of neurologic side effects, including cognitive dysfunction, vision loss, and hearing impairment [5].

Hearing impairment is one of the most common sensorineural disorders worldwide, affecting approximately 6.1% of the world's population [9]. The etiologies of hearing impairment are complex, but more than 70% of sensorineural hearing impairment is known to be associated with cochlear dysfunction. Because the cochlea is supplied by the labyrinthine artery without collateral blood supply and is an oxygen-demanding organ, it is vulnerable to ischemic insults, such as those caused by ototoxic drugs, noise exposure, and the aging process [10-12]. In addition, the cochlear endolymphatic potential generates and regulates mechanoelectrical signal transduction from outer hair cells to spiral ganglion cells, and electrical imbalance and perturbation of endolymph vs. perilymph homeostasis could result in hearing impairment [13].
Because PPIs have adverse impacts on neurologic disorders via ischemia and electrical imbalance, adverse effects of PPIs on cochlear function could be predicted [6,14]. Therefore, we hypothesized that the prolonged use of PPIs could increase the occurrence of hearing impairment. To evaluate this hypothesis, patients with hearing impairment were evaluated for previous use of PPIs and were compared with a matched control group. In addition, confounders for hearing impairment, such as smoking and comorbid cardiovascular and neurologic diseases, were considered.

Ethics

The present study was approved by the ethics committee of Hallym University (2019-10-023; approval date: 5 November 2019). The requirement for written informed consent was waived by the ethics committee of Hallym University. All studies were conducted according to the guidelines and regulations of the ethics committee of Hallym University.

Study Population and Participant Selection

This study used Korean National Health Insurance Service-Health Screening Cohort data [15]. Participants with hearing impairment were selected from 514,866 participants with 615,488,428 medical claim codes from 2002 through 2015 (n = 6626). Participants were included in the control group if they were not defined as having hearing impairment from 2002 through 2015 (n = 508,240). Participants who were diagnosed with other disabilities were excluded (n = 79 for hearing impairment participants, n = 43,673 for control participants). To measure PPI history in the previous 2 years, we excluded participants with hearing impairment who were diagnosed with hearing impairment before 2003 (n = 2160). Hearing impairment participants were matched 1:4 with control participants for age, sex, income, and region of residence. The control participants were randomly selected. The date of hearing impairment diagnosis was defined as the index date, and the same index date was used for the matched control participants. The 447,019 control participants whose index dates did not match those of the hearing impairment participants were excluded. A total of 4387 participants with hearing impairment and 17,548 control participants were enrolled (Figure 1).

Exposure (Days of Proton Pump Inhibitor Prescription)

The days of PPI prescription were defined as the total prescription days during the 2 years before the index date. Prescription days of PPIs were categorized as <30 days, ≥30 to <365 days, and ≥365 days. To prevent double counting of duplicate PPI prescriptions, among prescriptions that started on the same day, only the longest prescription duration was included, as illustrated in the sketch below.
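The total-prescription-days computation can be sketched in a few lines of pandas. This is a minimal illustration of the de-duplication rule as we read it, assuming a hypothetical claims table with `patient_id`, `start_date`, and `duration_days` columns; the column names and data are not from the source database.

```python
import pandas as pd

# Hypothetical claims records: one row per PPI prescription.
claims = pd.DataFrame({
    "patient_id": [1, 1, 1, 2],
    "start_date": ["2013-01-01", "2013-01-01", "2013-03-01", "2013-02-01"],
    "duration_days": [14, 30, 60, 400],
})

# Rule from the paper: of prescriptions starting on the same day,
# keep only the longest duration, then sum per patient.
total_days = (
    claims.groupby(["patient_id", "start_date"])["duration_days"].max()
          .groupby(level="patient_id").sum()
)

# Categorize as in the study: <30, 30 to <365, or >=365 days.
category = pd.cut(total_days, bins=[0, 30, 365, float("inf")],
                  right=False, labels=["<30", "30-364", ">=365"])
print(pd.concat([total_days, category], axis=1, keys=["days", "category"]))
```

For patient 1, the two same-day prescriptions (14 and 30 days) collapse to 30 days, giving a total of 90 days and the middle category; patient 2 falls in the ≥365-day category.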
Outcome (Hearing Impairment)

Participants who were registered as having hearing impairment by the Ministry of Health and Welfare were selected. Participants who had other disabilities were excluded. According to the degree of hearing impairment, severe hearing impairment was classified as hearing thresholds of ≥60 dB in both ears or a hearing threshold of ≥80 dB in one ear and ≥40 dB in the other ear. Profound hearing impairment was classified as a hearing threshold of ≥90 dB in both ears [16]. All hearing impairment participants underwent three pure-tone audiometry tests (PTAs) and auditory brainstem response testing [16].

Covariates

Age groups were classified into 5-year intervals. Income groups were divided into 5 classes (class 1 (lowest income) to class 5 (highest income)). The region of residence was classified as urban or rural [17]. Tobacco smoking, alcohol consumption, and obesity according to body mass index (BMI, kg/m²) were categorized as previously described [18]. Records of total cholesterol (mg/dL), systolic blood pressure (SBP, mmHg), diastolic blood pressure (DBP, mmHg), and fasting blood glucose (mg/dL) were used. Missing fasting blood glucose (n = 2 (0.009%)) and total cholesterol (n = 3 (0.013%)) values were substituted with the average values of the study participants. The Charlson Comorbidity Index (CCI) was calculated for 17 comorbidities as a continuous variable (0 (no comorbidities) through 29 (multiple comorbidities)) [19]. Dementia was not included in the CCI score.

Regarding PPIs, the number of patients diagnosed with GERD (ICD-10 code: K21, treated ≥2 times and prescribed a PPI for ≥2 weeks) and the dates of H2 blocker prescription were additionally assessed, both for the 2 years prior to the index date.

Statistical Analyses

The hearing impairment and control groups were compared using the chi-square test for categorical variables and the independent t test for continuous variables. Conditional logistic regression analysis was conducted, and the odds ratios (ORs) and 95% confidence intervals (CIs) of the prescription days of PPIs for hearing impairment were calculated. A crude model (simple), model 1 (adjusted for SBP, DBP, fasting blood glucose, and total cholesterol), model 2 (model 1 plus obesity, smoking, alcohol consumption, and CCI scores), and model 3 (model 2 plus gastroesophageal reflux disease and H2 blocker use) were used. The matched variables were stratified.
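A minimal sketch of the conditional logistic regression step is shown below, assuming a long-format analysis table with one row per participant, a matched-set identifier, dummy-coded PPI duration, and the model-1 covariates. The variable names and file name are hypothetical, and statsmodels' `ConditionalLogit` stands in for whatever routine the authors used in SAS.

```python
import numpy as np
import pandas as pd
from statsmodels.discrete.conditional_models import ConditionalLogit

# Hypothetical analysis table: one row per participant, with the
# matched-set id shared by each case and its four matched controls.
df = pd.read_csv("ppi_hearing_matched.csv")  # placeholder file name

# Dummy-code PPI duration with <30 days as the reference category.
cats = pd.Categorical(df["ppi_days_cat"],
                      categories=["<30", "30-364", ">=365"], ordered=True)
exposure = pd.get_dummies(cats, drop_first=True).astype(float)

# Model 1 covariates from the paper.
covars = df[["sbp", "dbp", "fasting_glucose", "total_cholesterol"]]
X = pd.concat([exposure, covars], axis=1)

# Conditioning on the matched set absorbs the matching variables
# (age, sex, income, region, index date), as in the stratified model.
model = ConditionalLogit(df["hearing_impaired"], X, groups=df["match_set"])
result = model.fit()

# ORs and 95% CIs: exponentiate coefficients and confidence bounds.
ors = np.exp(result.params)
ci = np.exp(result.conf_int())
print(pd.concat([ors, ci], axis=1, keys=["OR", "CI"]))
```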
Age and sex subgroups (<70 years old and ≥70 years old; men and women) were analyzed. We further analyzed the ORs of PPI prescription days for hearing impairment according to the severity of hearing impairment. Two-tailed analyses were performed, and p-values less than 0.05 were defined as statistically significant. Statistical analyses were performed using SAS version 9.4 (SAS Institute Inc., Cary, NC, USA).

Results

Participants with prior histories of PPI prescription demonstrated higher odds of hearing impairment (Table 2). PPI prescription for ≥30 to <365 days was associated with 1.65-fold higher odds of hearing impairment (95% CI = 1.47-1.86, p < 0.001). PPI prescription for ≥365 days was associated with 1.52-fold higher odds of hearing impairment (95% CI = 1.35-1.72, p < 0.001). Previous histories of PPI prescription were associated with higher odds of hearing impairment in all age and sex subgroups. The <70-year-old group with a PPI prescription for ≥30 to <365 days showed 1.70-fold (95% CI = 1.49-1.95) higher odds of hearing impairment than those with a PPI prescription for ≥0 to <30 days. The men's group with a PPI prescription for ≥30 to <365 days showed 1.80-fold (95% CI = 1.57-2.07) higher odds of hearing impairment than those with a PPI prescription for ≥0 to <30 days. According to the degree of hearing impairment, the severe hearing impairment group, but not the profound hearing impairment group, exhibited a relationship between a longer duration of PPI prescription and a higher rate of hearing impairment (Table 3).

[Table notes: Abbreviations: CCI, Charlson comorbidity index. * Conditional logistic regression analysis, significance at p < 0.05. † Stratified model for age, sex, income, and region of residence. ‡ Model 1 was adjusted for systolic blood pressure, diastolic blood pressure, fasting blood glucose, and total cholesterol. § Model 2 was adjusted for model 1 plus obesity, smoking, alcohol consumption, and CCI scores. Model 3 was adjusted for model 2 plus gastroesophageal reflux disease and H2 blocker use.]

Discussion

The long-term use of PPIs was linked with an increased rate of hearing impairment in the adult population. This relation of PPIs with hearing impairment was maintained in all age and sex subgroups. This is a pioneering study of the potential impact of PPI use on hearing impairment in a large population. We comprehensively considered possible confounders, such as the lifestyle factors of smoking and alcohol consumption, and comorbidities.

Two previous studies suggested an association of PPIs with hearing impairment. A National Health and Nutrition Examination Survey study on the adverse effects of PPIs revealed significantly increased risks of hearing impairment, dementia, migraine, and other peripheral neuropathies associated with PPI use [5]. However, this study had limitations because it was based on adverse event reports. Another prospective cohort study in middle-aged women investigated the association of PPI use with self-reported hearing impairment [20]. Although PPI use was not associated with self-reported hearing impairment in that study after adjusting for GERD symptoms, that study did not objectively measure either hearing impairment or GERD. All variables, including PPI use, were based on self-reported survey items, which limited the fidelity of the data. The present study used objectively measured PTAs and collected prescription data for PPIs, thereby improving the fidelity of the data.
In addition, prior studies have reported associations of long-term PPI use with dementia and with sensory disorders such as vision and smell loss [5,21]. Compared with histamine-2 receptor antagonists, PPI use was related to an increased propensity for adverse neurological effects, including migraine, severe peripheral neuropathies, and visual abnormalities [5]. A few plausible pathophysiological mechanisms could link PPI use with hearing impairment.

First, insufficient blood supply could mediate ischemic injury in the inner ear [22]. PPIs inhibit endothelial nitric oxide synthase, which reduces circulating nitric oxide [14]. Vascular endothelium-derived nitric oxide is essential for the regulation of vasodilation and platelet adhesion/aggregation and for antiatherosclerotic and anti-inflammatory effects [23]. Thus, a decreased level of endothelium-derived nitric oxide could increase ischemic injury and oxidative stress. In addition, PPIs are competitive inhibitors of the cytochrome P450 enzyme CYP2C19, which also metabolizes clopidogrel [24]. Inhibition of the efficacy of clopidogrel could increase the risk of thromboembolism and coronary syndrome [25]. Because of the impaired vasodilation and ischemic changes associated with PPIs, a number of previous studies have demonstrated an increased risk of cardiovascular diseases related to PPI use [26,27]. A review described PPI use as related to excess cardiovascular mortality in 15/1000 persons [27]. As the cochlea is susceptible to ischemic injury because of its high oxygen demand and its blood supply from an end artery [10,11], ischemia following PPI use could contribute to cochlear dysfunction and hearing impairment.

Second, metabolic disturbances and the malabsorption of micronutrients may induce neuronal degeneration. A number of prior studies suggested that the metabolic disturbances associated with PPI use were linked with an increased risk of dementia [8,28]. The modulation of protease activities due to pH changes resulting from PPI use could induce the accumulation of beta-amyloid [29]. In addition, inhibition of lysosomal activities due to the inhibition of vacuolar H+-ATPase has been suggested to decrease the clearance of beta-amyloid peptides. Because dementia and neuronal degeneration have been linked with neural presbycusis-type hearing impairment, these metabolic changes could increase the risk of hearing impairment [30]. Moreover, PPIs interfere with the absorption of micronutrients, such as iron, vitamin B12, and nitric oxide [31]. The malabsorption of micronutrients has been reported to be related to cochlear dysfunction [32].

Third, dysfunction of the H+/K+-ATPase in the cochlear lateral wall could induce electrical imbalance and disturb the homeostasis of the endolymphatic fluid of the cochlea. A proton pump identical to that in the stomach has been reported to be expressed in the lateral wall of the cochlea [33]. The cochlear proton pump is suggested to play a crucial role in maintaining the high potassium ion concentration in the endolymph, which sustains the cochlear endolymphatic potential [34]. PPI use could inhibit the cochlear proton pump as well as the gastric proton pump, dysregulating the inner ear potential and mediating hearing impairment.

This study used data from a large, nationally representative cohort. Many control participants could be selected and matched for demographic and socioeconomic factors. The degree of hearing loss was based on three PTA tests and auditory brainstem response test results.
The use of multiple objective hearing measures prevented misdiagnosis of hearing impairment in this study. Because registered hearing-impaired persons receive support for health care costs, including hearing aids, most hearing-impaired persons were included in our hearing-impaired group. However, some limitations should be considered when interpreting the current results. The degree and etiologies of hearing loss could not be detailed in this study. For histories of PPI prescription, the types and doses of PPIs could not be differentiated in the current data. Although past medical histories, including a history of GERD, were adjusted for, potential confounding effects of reflux symptoms remained. Further studies on the associations of PPI dose and type with specific types of hearing loss would resolve these questions. Analyses using machine learning approaches could facilitate handling of such large datasets.

Conclusions

Longer durations of PPI use were related to hearing impairment in adult Koreans. This relation of PPIs with hearing impairment was valid in all age and sex groups. Therefore, patients who need long-term PPI medication should be counseled on hearing preservation. A future randomized controlled trial could delineate the causal relation between PPI use and hearing impairment.

Informed Consent Statement: Patient consent was waived because of the retrospective study design. The requirement for written informed consent was exempted by the ethics committee of Hallym University.

Data Availability Statement: Release of the data by the researchers is not legally allowed. All data are available from the database of the National Health Insurance Sharing Service (NHISS), https://nhiss.nhis.or.kr/. NHISS allows access to these data, at some cost, to any researcher who agrees to follow the research ethics requirements; the data used in this article can be downloaded from the website after making that agreement.

Conflicts of Interest: The authors declare no conflict of interest.
Eleusine indica (L.) Gaertn. (Poaceae) Ethanol Leaf Extract and Its Ethyl Acetate Fraction Display Potential Anti-inflammatory Activities

Objective: Inflammation is the underlying cause of most of the chronic diseases that occur with aging. Although many drugs are available for the management of inflammatory disorders and their symptoms, most of these drugs have serious adverse effects that limit their usefulness. This has encouraged the unending search for potent anti-inflammatory drugs from plant sources as alternatives to conventional drug treatment of inflammation. This study investigated the anti-inflammatory activities of the ethanol leaf extract of E. indica and its ethyl acetate fraction in rodents.

Materials and Methods: The leaves were extracted with ethanol by cold maceration, and the extract was fractionated with n-hexane, ethyl acetate, butanol and water. The oral acute toxicity (LD50) of the extract and the phytochemical constituents of the extract and the fractions were determined. The anti-inflammatory activities of the ethanol extract (EE) and ethyl acetate fraction (ETF) and their possible mechanisms of action were investigated.

Results: The oral LD50 of the extract was above 5000 mg/kg. Both the EE and ETF displayed dose-dependent inhibition of rat paw edema, with the ETF producing 48-54% edema inhibition. Xylene-induced topical edema was significantly (p < 0.05) reduced by both the EE and ETF, with the ETF causing 48-65% inhibition. The EE and ETF preserved the integrity of the gastric mucosa: their average ulcer index (1.37 ± 0.02) was significantly lower than that of indomethacin (5.20 ± 0.23). Pre-treatment with the EE and ETF significantly (p < 0.05) reduced leucocyte migration, especially of neutrophils. Both heat- and hypotonicity-induced hemolysis of RBCs were markedly inhibited.

Conclusion: The mechanisms of the anti-inflammatory activity may involve, among others, inhibition of leucocyte migration and membrane stabilization.

INTRODUCTION

Inflammatory reactions underlie a vast variety of human diseases [1]. Inflammation is a normal protective response to tissue injury. It is the body's effort to inactivate or destroy invading organisms, remove irritants, and set the stage for tissue repair [2]. Common signs and symptoms of inflammation include pain, redness, bruising, fever, chills, stuffy nose and head, joint swelling and stiffness, tender muscles, fluid retention and loss of joint function [3]. It is also associated with flu-like symptoms, fatigue, headache, loss of appetite and muscle weakness [4]. Inflammation may be caused by microbial infection [5], physical agents [6], irritant and corrosive chemicals [7], tissue necrosis [8] and hypersensitivity reactions [9]. The immune system is often involved in inflammatory disorders, as demonstrated in both allergic reactions and some myopathies. Inappropriate activation of the immune system can result in inflammation leading to rheumatoid arthritis [10]. Non-immune diseases with etiological origins in inflammatory processes include cancer, atherosclerosis, and ischaemic heart disease [11]. A large variety of proteins are involved in inflammation, and any one of them is open to genetic mutation that impairs or otherwise deregulates the normal function and expression of that protein [12].
Inflammation occurs in an orderly sequence involving the initiation of the inflammatory process by a foreign substance or physical injury, the recruitment and chemo-attraction of inflammatory cells, and the activation of these cells to release inflammatory mediators capable of damaging or killing invading microbes or tumour cells [13]. The cardinal signs of inflammation include a hot inflamed site due to increased blood flow towards the region, redness, swelling due to vascular permeability, pain caused by the activation and sensitization of primary afferent neurons, and lasting loss of function and mobility [14]. Pain and fever are the most common complaints associated with inflammation. The non-steroidal anti-inflammatory drugs (NSAIDs) are widely used in the management of pain and other inflammatory conditions. They do not, however, cure or remove the underlying causes of disease; they only modify the inflammatory responses to the disease [14]. The plethora of adverse effects associated with these agents has led to a search for effective and safer anti-inflammatory drugs from plant sources as alternatives to conventional drug treatment [15].

In south eastern Nigeria, a good number of plants are used traditionally to treat inflammatory conditions, especially rheumatoid arthritis. Eleusine indica Linn (Poaceae), commonly known as wire grass, goose grass or crowfoot grass, is one of the popular traditional anti-inflammatory folk remedies. It is locally known as "Ese" in south eastern Nigeria. In this region, the leaves of E. indica are applied externally to open wounds to arrest bleeding. A poultice of the leaves is applied to sprains and back pains, while a decoction of the macerated leaves is used to treat skin rashes, painful swellings, fevers and asthma. The few biological studies on the plant have reported anti-diabetic [16], anti-inflammatory [17], anti-plasmodial [16,18], antioxidant and antibacterial [19] and analgesic [20] activities. The anti-inflammatory activity of E. indica leaves and the possible underlying mechanisms of action were investigated in this study.

Animals

Swiss albino rats (150-180 g) and mice (25-30 g) of both sexes were obtained from the Animal Facility Center of the Department of Pharmacology and Toxicology, Faculty of Pharmaceutical Sciences, Nnamdi Azikiwe University, Awka. The animals were maintained under laboratory animal conditions and were allowed free access to food and water.

Extraction and Fractionation

About 1.5 kg of the pulverised leaves was extracted by cold maceration in 70% ethanol for 48 hr with intermittent shaking. The extract was filtered using Whatman filter paper, and the filtrate was concentrated in a rotary evaporator at 40°C. Part of the concentrated extract was subjected to liquid-liquid chromatographic fractionation using solvents of different polarity to obtain n-hexane, ethyl acetate, butanol and aqueous fractions. The resulting fractions were concentrated in a rotary evaporator at 40°C, except the water fraction, which was concentrated with a freeze dryer. Preliminary studies indicated that the ethanol extract (EE) and the ethyl acetate fraction (ETF) were the most effective, and these were thus subjected to further studies.

Acute Toxicity (LD50) and Phytochemical Studies

The oral acute toxicity (LD50) of the ethanol extract (EE) in rats was determined [21], while the phytochemical analysis of both the EE and the fractions was performed using standard methods [22,23].
Anti-inflammatory Evaluations

Evaluation of the anti-inflammatory potential of the extract and fraction was done using an acute systemic inflammatory model (egg albumin-induced edema) and a topical model (xylene-induced edema). This would indicate whether the extract and fraction possess both systemic and topical anti-inflammatory activity.

Effect on egg albumin-induced rat paw edema

Thirty (30) minutes before subplantar injection of 0.1 ml of fresh undiluted egg albumin into the rats' right paws, oral doses of 200, 400, and 600 mg/kg of EE or ETF were given to 3 different groups of rats (n = 6). Aspirin (100 mg/kg) and 10% Tween 80 (10 ml/kg) were given to the positive and negative control groups, respectively. Paw volumes (edema) were measured by the water displacement method [24]. Edema volume was measured before and at 1, 2, 3, 4, 5 and 6 hr after subplantar injection of egg albumin. The anti-inflammatory effect was expressed as percent inhibition of edema in the treated animals in comparison with the vehicle-treated animals [25]. The percent inhibition of edema was calculated using the formula [25]:

% inhibition = [(Vc − Vt)/Vc] × 100

where Vc is the mean edema (paw volume) of the control, Vt is the mean edema of the treated group, and mean edema is the mean difference between the paw volume displaced at time t and the basal paw volume at time zero.

Effect on topical edema induced by xylene in the mouse ear

The effect of the EE and ETF on acute topical inflammation was evaluated [26]. Eight groups of adult Swiss albino mice of either sex (n = 6) were used for the study. The extract or fraction (200, 400 or 600 μg) was applied to the anterior surface of the right ear of 6 groups of mice. Topical inflammation was immediately induced on the posterior surface of the same ear by application of xylene (0.05 ml). Control groups were given either the vehicle (Tween 80) or indomethacin (200 μg/ear). Two hours after induction of inflammation, the mice were sacrificed by overdose of chloroform anesthesia and both ears were removed. Circular sections (7 mm diameter) of both the right (treated) and left (untreated) ears were punched out using a cork borer and weighed. The values obtained from the left ears were subtracted from those of the right ears. Edema was quantified as the weight difference between the two ear plugs. The anti-inflammatory activity was evaluated as percentage edema inhibition in the treated animals relative to the control animals, using the same relation as above with ear-plug weights in place of paw volumes.

Ulcerogenic effects in rats

This study was performed as described by Cashin et al. [28]. Adult Swiss albino rats were fasted for 24 hr, and doses of the EE or ETF (200, 400 and 600 mg/kg) were administered orally to the treatment groups (n = 6). Control animals were given either indomethacin (40 mg/kg) or 10% Tween 80 (10 ml/kg). Three hours after drug administration, the animals were sacrificed with ether. The stomachs were removed and cut along the greater curvature to expose the mucosal surface. The mucosa was washed with normal saline and observed with a magnifying lens (×10). Injury to the mucosa was scored 0-4 on an arbitrary scale: 0 = normally coloured stomach, 0.5 = red coloration, 1 = one or two spot ulcers, 2 = severe lesions, 3 = very severe lesions, 4 = mucosa full of lesions/perforations [28].

In vivo leucocyte migration test

The effect of the EE and ETF on cell migration in vivo was evaluated in albino rats using the method described by Ribeiro et al. [29].
One hour after oral administration of 200, 400 or 600 mg/kg of the EE or ETF, the animals were given an intraperitoneal injection of 1 ml of 3% w/v agar suspension in normal saline. Four hours later, they were sacrificed, and the peritoneal cavities were washed with 5 ml of phosphate-buffered saline containing 0.5 ml of 10% EDTA. The peritoneal fluid was recovered, and total and differential leucocyte counts (TLC and DLC) were performed on the perfusates.

Determination of total leucocyte count

A 1:20 dilution of the peritoneal fluid in the diluting fluid (2% acetic acid tinged with gentian violet) was prepared by adding 0.02 ml of peritoneal fluid to 0.38 ml of the diluting fluid. The Neubauer counting chamber was charged with the well-mixed diluted peritoneal fluid; the first 3-5 drops were discarded before charging the chamber. The cells were allowed to settle in a moist chamber for 3-5 minutes. Using the 10× objective of the microscope, the four large corner squares were located, and the total number of white cells in the four large corner squares was counted using a manual cell counter after staining with Wright's stain.

Differential leucocyte count

The staining procedure was the same as for the total leucocyte count; however, the cells were differentiated based on their morphological variations and slight differences in staining.

Membrane stabilization effects

Fresh whole human blood (5 ml) was collected from healthy human volunteers who had not taken any medications within the previous two weeks. The blood was transferred to EDTA centrifuge tubes. The tubes were centrifuged at 2000 rpm for 5 min and washed three times with an equal volume of normal saline. The volume of the blood was measured, and the cells were reconstituted as a 40% v/v suspension in isotonic buffer solution (pH 7.4) of the following composition per litre of distilled water [30]: NaCl (9 g), NaH2PO4 (0.2 g) and Na2HPO4 (1.15 g).

Heat-induced hemolysis

Isotonic buffer solution (5 ml) containing 200, 400 or 600 μg/ml of the EE or ETF was put in four sets of centrifuge tubes. Control tubes contained 5 ml of Tween 80 or 5 ml of 200 μg/ml prednisolone. Erythrocyte suspension (0.05 ml) was added to each tube and gently mixed. A pair of tubes for each sample was incubated at 54°C for 20 min in a regulated water bath; the other pair was maintained at 0-4°C in a freezer for 20 min. At the end of the incubation, the reaction mixture was centrifuged at 1000 rpm for 3 min, and the absorbance of the supernatant was measured spectrophotometrically at 540 nm using a Spectronic 21D spectrophotometer (Milton Roy) [30]. The percentage inhibition of hemolysis was calculated as follows:

Inhibition of hemolysis (%) = [1 − (OD2 − OD1)/(OD3 − OD1)] × 100

where OD1 is the absorbance of the unheated test sample, OD2 is the absorbance of the heated test sample, and OD3 is the absorbance of the heated control sample.

Hypotonicity-induced hemolysis

Hypotonic solution (distilled water, 5 ml) containing 200, 400 or 600 μg/ml of EE or ETF was arranged in two pairs of tubes. Control tubes contained 5 ml of distilled water or 200 μg/ml indomethacin. Erythrocyte suspension (0.005 ml) was added to each tube and, after gentle mixing, the mixture was incubated for 1 hr at room temperature (30°C). At the end of the incubation, the reaction mixture was centrifuged at 1000 rpm for 3 min, and the absorbance of the supernatant was measured at 540 nm with a spectrophotometer.

Statistical Analysis

The results are presented as mean ± SEM. The statistical analyses were performed using ANOVA (SPSS 11.5), with Dunnett's test for post hoc comparisons.
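For clarity, the two percent-inhibition calculations used above (edema and hemolysis) can be expressed as short functions. This is a minimal sketch; the function and argument names are ours, and the example numbers are purely illustrative.

```python
def edema_inhibition(vc: float, vt: float) -> float:
    """Percent edema inhibition: [(Vc - Vt) / Vc] * 100.

    vc: mean edema of the vehicle control group.
    vt: mean edema of the treated group.
    """
    return (vc - vt) / vc * 100.0

def hemolysis_inhibition(od1: float, od2: float, od3: float) -> float:
    """Percent inhibition of hemolysis: [1 - (OD2 - OD1)/(OD3 - OD1)] * 100.

    od1: absorbance of the unheated test sample.
    od2: absorbance of the heated test sample.
    od3: absorbance of the heated control sample.
    """
    return (1.0 - (od2 - od1) / (od3 - od1)) * 100.0

# Illustrative values only.
print(edema_inhibition(vc=0.57, vt=0.28))                   # ~50.9% inhibition
print(hemolysis_inhibition(od1=0.05, od2=0.30, od3=0.55))   # 50.0% inhibition
```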
Differences between means of groups were considered significant at p < 0.05.

Acute Toxicity and Phytochemical Analysis

Oral administration of the ethanol extract at up to 5000 mg/kg did not produce obvious signs of toxicity or mortality in rats. The LD50 of the extract is thus greater than 5000 mg/kg. The ethanol extract had high amounts of saponins and tannins and moderate amounts of alkaloids, flavonoids, reducing sugars, steroids and cardiac glycosides. The phytoconstituents present in the ethyl acetate fraction were mainly flavonoids and tannins (Table 1).

Effect of EE and ETF on Egg Albumin-Induced Acute Inflammation

Subplantar injection of egg albumin evoked a progressive increase in rat paw edema. Both the extract and the ethyl acetate fraction caused a progressive and significant (p < 0.05) inhibition of edema, which peaked by the 4th hour (Table 2). The percent edema inhibition at the 4th hour by EE was 16.07 and 19.64 at 400 and 600 mg/kg, respectively, while that of ETF at the same time and corresponding doses was 50.88 and 49.1, respectively. The inhibition by ETF exceeded that of aspirin (43.5%) (Figure 1).

Effect on Xylene-Induced Edema

Topical application of the EE and ETF significantly (p < 0.05) reduced xylene-induced inflammation in a dose-related manner. The percent edema inhibition by ETF at all doses was greater than that of EE at equivalent doses and also greater than that of indomethacin (Table 3).

Ulcerogenic Effects in Rats

The extract and ethyl acetate fraction produced minimal deleterious effects on the integrity of the gastric mucosa compared with indomethacin. The ulcer indices of EE- and ETF-treated rats (1.2-1.8) were significantly (p < 0.05) lower than that of the indomethacin-treated group (5.2) (Table 4).

Membrane Stabilizing Effect

Both EE and ETF significantly (p < 0.05) inhibited heat-induced haemolysis of the human RBC membrane, while hypotonicity-induced haemolysis was not significantly affected (Tables 6 and 7).

DISCUSSION

The very high LD50 of the extract (>5000 mg/kg), in addition to its lack of obvious toxicity, indicates that the EE and ETF will be relatively safe in the management of chronic inflammatory disorders such as rheumatoid arthritis. An earlier study on the cytotoxicity of the extract of this plant confirmed that it did not induce cell death [19].

Xylene is a phlogistic agent that acts on target cells in the periphery, such as mast cells, immune cells, and vascular smooth muscle, and promotes neurogenic inflammation [40]. Flavanones have been documented to ameliorate ear edema induced by xylene [40]. The ability of flavonoids to inhibit the NADH oxidase system in mast cells is believed to play a role in their anti-inflammatory properties [41]. The presence of flavonoids in the extract, and their abundance in the ethyl acetate fraction, may have contributed to the topical effect recorded in this study.

Anti-inflammatory agents that inhibit prostaglandin synthesis are prone to cause irritation or ulceration of the gastrointestinal tract. The generation of reactive oxygen species, as occurs with some NSAIDs, can also result in gastrointestinal tract injury. The extract and ethyl acetate fraction produced minimal deleterious effects on the integrity of the gastric mucosa. The plant could thus be beneficial for inflammatory pain and swelling, as seen in rheumatoid arthritis, with a reduced risk of gastric ulcer.

Pre-treatment with the extract and ethyl acetate fraction significantly (p < 0.05) reduced leucocyte migration.
The later phase of the acute inflammatory response involves cellular migration to the site of the inflammatory stimulus [42]. Inhibition of leucocyte migration by the extract and fraction is suggestive of an effect in the later phase of inflammation. Neutrophils, which engulf and eliminate microorganisms, were the most suppressed leucocytes. They are the first line of defence in the immunological response against pathogens. In inflammatory conditions, they present a potential risk of tissue damage [43] through their interaction with local inflammatory mediators, which leads to the production of several other mediators that amplify the inflammatory response and tissue damage [44]. These results suggest that the anti-inflammatory effect of the extract and fraction may result from the alteration of inflammatory responses via modulation of the cellular and molecular mediators involved in inflammatory pathways.

The haemolytic effect of a hypotonic solution is related to excessive accumulation of fluid within the cell, resulting in the rupture and bursting of its membrane, while heat leads to shrinkage and oxidative bursting of the cell [45]. Leakage of serum proteins and fluids into the tissue during a period of inflammation-induced increased vascular permeability is reduced by membrane stabilization [46]. Though the precise mechanism of this membrane stabilization has not been ascertained, it is possible that an increase in the surface area/volume ratio of the cells, as a result of membrane expansion or shrinkage of the cell, and an interaction with cell membrane proteins may have occurred. It is also possible that the membrane-stabilizing effect of the extract and fraction may be traced to their ability to alter the influx of calcium into erythrocytes [30]. In addition to inhibiting leucocyte migration, the extract and the ethyl acetate fraction may also prevent the release of cytoplasmic pro-inflammatory mediators through membrane stabilization.

CONCLUSION

These results revealed that the leaves of Eleusine indica are endowed with potent anti-inflammatory activity, which resides mainly in the ethyl acetate fraction. The extract and ethyl acetate fraction lack the gastrointestinal irritation and ulceration that limit the use of NSAIDs. Inhibition of leucocyte migration and membrane stabilization appear to mediate the anti-inflammatory activity of the leaves of Eleusine indica.

CONSENT

Not applicable.

ETHICAL APPROVAL

All animal experiments were conducted in compliance with the NIH guide for the care and use of laboratory animals (Pub. No. 85-23, revised 1985) and were approved by the Ethical Committee on the Use of Laboratory Animals of Nnamdi Azikiwe University, Awka, Nigeria.

FUNDING

This study did not receive any specific grant from funding agencies in the public, commercial or non-profit sectors.
On the assumptions behind metacognitive measurements: Implications for theory and practice

Theories of visual confidence have largely been grounded in the gaussian signal detection framework. This framework is so dominant that idiosyncratic consequences of its distributional assumption have remained unappreciated. This article reports systematic comparisons of the gaussian signal detection framework to its logistic counterpart in the measurement of metacognitive accuracy. Because of the difference in their distribution kurtosis, these frameworks are found to provide different perspectives regarding the efficiency of confidence rating relative to objective decision (the logistic model intrinsically gives a greater meta-dʹ/dʹ ratio than the gaussian model). These frameworks can also provide opposing conclusions regarding metacognitive inefficiency along the internal evidence continuum (whether meta-dʹ is larger or smaller for higher levels of confidence). Previous theories developed on these lines of analysis may need to be revisited, as the gaussian and logistic metacognitive models received roughly equivalent support in our quantitative model comparisons. Despite these discrepancies, however, we found that across-condition or across-participant comparisons of metacognitive measures are relatively robust against the distributional assumptions, which provides much assurance to conventional research practice. We hope this article promotes awareness of the significance of hidden modeling assumptions, contributing to the cumulative development of the relevant field.

Analyses with different inclusion criteria
We have analyzed 5160 individual cases by using inclusion criteria different from those in the main manuscript. For the gaussian SDTs, we have included the cases for which both gaussian d' and meta-d' were estimated to be greater than 0.5. For the logistic SDTs, we have included the cases of logistic d' and meta-d' > 0.8. This is because a gaussian d' of 0.5 and a logistic d' of 0.8 approximately correspond to a type-1 accuracy of 0.6 under the optimal type-1 criterion setting. Table S1 summarizes the results, where paired t tests showed that mean gaussian meta-d' was significantly smaller than mean gaussian d' (t(3142) = -9.58, p < .001), while mean logistic meta-d' was not significantly different from mean logistic d' (t(3538) = 1.29, p = .198).

Dependency of metacognitive accuracy on raw confidence criteria
As in the main manuscript, we have conducted analyses based on the 26927 binary reformatted data, for which the gaussian and logistic meta-SDTs converged and exhibited above-chance type-1 and type-2 performances. Here, instead of considering the normalized confidence criteria, we have evaluated the dependence of meta-d' on the raw values of the estimated criteria. For each binary reformatted dataset, an estimate of the confidence criterion was obtained separately for S1 and S2 responses. As there is no unique way to integrate these two estimates, we have evaluated the criterion dependency separately for each response class. We conducted linear regression to explain meta-d' values from the raw criterion estimates and tested whether the slope is significantly different from 0 (26927 data points are aggregated). The gaussian meta-d' was estimated to be smaller for higher confidence criteria for the response classes S1 (t = 28.98, p < .001) and S2 (t = 28.99, p < .001). The logistic meta-d' tended to be larger for higher criteria for S1 (t = 19.92, p < .001) and S2 (t = 19.69, p < .001).
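The correspondence invoked above, that a gaussian d' of 0.5 and a logistic d' of 0.8 both yield a type-1 accuracy of about 0.6, can be checked directly under an equal-variance model with the optimal criterion placed midway between the two evidence distributions. A minimal sketch (the midway-criterion formulation is our assumption):

```python
from scipy.stats import norm, logistic

def accuracy_from_dprime(dprime: float, dist) -> float:
    """Type-1 accuracy under an equal-variance SDT model with the
    optimal criterion midway between the S1 and S2 distributions."""
    return dist.cdf(dprime / 2.0)

print(accuracy_from_dprime(0.5, norm))      # ~0.599 (gaussian)
print(accuracy_from_dprime(0.8, logistic))  # ~0.599 (logistic)
```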
These analyses thus replicated the reversed criterion dependency between the models.

Model estimates sorted by dataset features
The present study is grounded in analyses of existing datasets, which differ in task domain and in whether stimulus difficulty was varied from trial to trial. Table S2 reports model estimates sorted by these two factors, where "yes" means that an explicit trial-by-trial manipulation was made on stimulus difficulty (e.g., jittered contrast in visual discrimination), "no" means that stimulus difficulty was constant across trials, and "na" means that variability was rather unintentionally invited by the use of naturalistic stimuli (e.g., variable memorability of naturalistic face or scene stimuli). It seems safe to say that there was little evidence for an effect of the difficulty manipulation, at least in the present datasets (comparison between "perception no" and "perception yes"). Also, metacognition seems rather accurate in the cases that employed naturalistic stimuli, although it is not certain whether this enhancement comes from varying trial difficulty or from other properties of naturalistic stimuli. Note that the results need to be interpreted with caution because the datasets differ in features other than these two dimensions.

Analyses on "perception no" datasets
We have explored the datasets belonging to the "perception no" condition, which is free from the potential confounding effect of varying difficulty. We have identified 989 individual cases of this condition, to which we have fitted the four different SDT models (Table S3). For model comparison, we have examined the number of cases for which each model declared the best AIC/BIC fit (Table S4). As was found in the main manuscript, the logistic models were generally favored over the gaussian counterparts, and the type-1 logistic SDT was revealed to be a clear victor.

Additional subset analysis
We have further narrowed down the analysis and identified three datasets of the "perception no" condition for which all four models converged for every included participant ("Maniscalco, 2017, experiment 1", "Maniscalco, 2017, experiment 2, contrast 3", and "Massoni, unpub, study 1, difficulty 2"). These datasets include 90 individual cases in total, and summed AIC/BIC measures generally favored the logistic models over the gaussian counterparts (Table S5).

Model fits to aggregated data
For each of the 105 targeted datasets, we have fitted the SDT models by aggregating data from individual participants (Table S6). Although not free from aggregation artifacts, fittings to aggregated data can supplement the individual analysis, allowing for better model convergence and fuller use of available data (e.g., Cohen, Sanborn, & Shiffrin, 2008; Kellen, Klauer, & Bröder, 2013). The models converged and showed above-chance performances for all 105 aggregated datasets, which enabled us to use summed information criteria for quantitative model comparisons. Summed AIC and BIC best favored the logistic meta-SDT model, which was closely followed by the gaussian meta-SDT model (Figure S1). Unlike the analyses on the individual data, the meta-SDT models were much favored over the type-1 SDT models. This should be partly because aggregated data offer greater statistical power to justify extra model complexity.
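Since these model comparisons rest on information criteria, the following minimal sketch shows how AIC and BIC follow from a fitted model's maximized log-likelihood; the numbers are hypothetical and only illustrate that BIC penalizes extra parameters more strongly for a given sample size:

```python
import numpy as np

def aic(log_lik: float, k: int) -> float:
    """Akaike information criterion: 2k - 2 ln L."""
    return 2 * k - 2 * log_lik

def bic(log_lik: float, k: int, n: int) -> float:
    """Bayesian information criterion: k ln n - 2 ln L."""
    return k * np.log(n) - 2 * log_lik

# Hypothetical fits on 400 trials: a meta-SDT model with 8 free
# parameters vs. a type-1 model with 5; lower values are preferred.
print(aic(-210.0, 8), bic(-210.0, 8, 400))
print(aic(-215.0, 5), bic(-215.0, 5, 400))
```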
A further possibility is that data aggregation across individuals, each of whose placement of confidence criteria should be somewhat idiosyncratic, artifactually impairs the diagnosticity of confidence ratings for discriminating correct from incorrect type-1 decisions; this makes meta-d' much smaller than d', so the type-1 models had difficulty explaining the data under the constraint meta-d' = d'. In the comparison between the type-1 models, the logistic model is far ahead of the gaussian counterpart, presumably because it can naturally capture the zROC nonlinearity.

Figure S1 caption: Summed AIC and BIC difference relative to the best model. The measures were aggregated across the 105 datasets.

Generalized gaussian SDT fits to aggregated data
The comparisons of the gaussian and logistic SDT frameworks indicate that the kurtosis of the underlying distribution has a major impact on the estimation of metacognitive accuracy (excess kurtosis is 0 for the gaussian distribution, while the logistic distribution has an excess kurtosis of 1.2). To verify this insight, we have employed generalized gaussian distributions, which allowed us to systematically modulate the kurtosis parameter (the calculation was implemented with the gnorm package in R). We have fitted generalized gaussian SDT models to datasets aggregated across participants. We set the β parameter of the distribution at 2, 1.576, and 1.34, which gave excess kurtosis of approximately 0, 0.6, and 1.2; these β values also gave standard deviations of approximately 0.71, 0.83, and 0.96, which are not relevant to the present analysis. As expected, the criterion dependency of metacognitive performance was modulated by distribution kurtosis (Figure S4). Under the kurtosis of 0, negative criterion dependency was found for meta-d' (t = -3.83, p < .001) and m-ratio (t = -4.63, p < .001). The kurtosis of 0.6 gave no significant criterion dependency for meta-d' (t = 1.04, p = .298) or m-ratio (t = 1.38, p = .168).

Analysis on non-2AFC datasets
We have analyzed left/right dot motion discrimination data (Gherman, 2018) and left/right gabor orientation discrimination data (Mazor, 2020), which do not incorporate trial-by-trial varying difficulty. The results are consistent with our main findings on the 2AFC data in that the logistic meta-SDT gave a greater averaged m-ratio with slightly better model convergence than the gaussian meta-SDT. Performance measures were averaged across the cases where each model converged and showed above-chance type-1 and type-2 performances.
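The excess kurtosis quoted for the three β settings can be reproduced with scipy's generalized normal distribution; the paper used the gnorm package in R, so this is an equivalent cross-check, not the original code:

```python
from scipy.stats import gennorm

# beta = 2 recovers the gaussian (excess kurtosis 0); smaller beta
# yields heavier tails, approaching the logistic value of 1.2.
for beta in (2.0, 1.576, 1.34):
    print(beta, float(gennorm(beta).stats(moments="k")))
```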
High dimensional parameter tuning for event generators

Monte Carlo Event Generators are important tools for the understanding of physics at particle colliders like the LHC. In order to best predict a wide variety of observables, the optimization of parameters in the Event Generators based on precision data is crucial. However, the simultaneous optimization of many parameters is computationally challenging. We present an algorithm that allows Monte Carlo Event Generators to be tuned in high dimensional parameter spaces. To achieve this we first split the parameter space algorithmically into subspaces and perform a Professor tuning on the subspaces with binwise weights to enhance the influence of relevant observables. We test the algorithm under ideal conditions and in real life examples, including tuning of the event generators Herwig 7 and Pythia 8 for LEP observables. Further, we tune parts of the Herwig 7 event generator with the Lund string model.

1 Introduction and motivation
The amount of data taken at the LHC allows measuring observables that can be calculated perturbatively to high precision. This is beneficial for the comparison as well as the improvement of phenomenologically motivated nonperturbative models, and also for searches for new physics. With the increasing precision made available in recent years through perturbative higher order calculations, theoretical uncertainties have reduced dramatically. In the comparison of these theory predictions and experimental data, Monte Carlo event generators (MCEG) [1] like Herwig 7 [2-5], Sherpa [6] or Pythia 8 [7,8] play an important role. If possible, a matched calculation that includes the perturbative corrections and the effects described by the MCEG can give an improved picture of the event structure measured by the experiment. Here, event generators typically include additional phenomenological models to describe effects that are not part of specialised fixed order and resummed calculations. Uncertainties of these additional modelled, but factorised, parts of the simulation can be estimated from lower order simulations. The MCEG contain various, usually factorized (e.g. by energy scales), components. In the development, these parts can be improved individually. The aforementioned matching to perturbative calculations is an example of recombining parts usually separated in the event generation, namely the parton shower and the hard matrix element calculation. While it is possible to make such modifications and improvements, it is also necessary to keep other parts of the simulation in mind. Even though the generation is factorized, various parts of the simulation will have an impact on other ingredients of the generator. Any modification can, in general, have an impact on the full events. Calculated, or at least theoretically motivated, improvements will lead to a reduction of freedom that eventually also restricts the parameter ranges of the phenomenological models that could be used to compensate the variations of the perturbative side [9-16]. The capability to describe data needs to be reviewed after such modifications in order to use the event generator for future predictions or concept designs for new experiments. The procedure of adjusting the parameters of the simulation to measured data is called tuning.
Various contributions to the tuning of MCEGs have been made [17-25], and the importance of these studies can be deduced from the recognition they have received. More recently, new techniques have been presented that can improve the performance of tuning [26-30]. To be able to compare simulation and data, the data needs to be collected and it must be possible to analyse the simulations in a way similar to the experimental setup. Here, the hepdata project [31] and analysis programs like Rivet [32] are of great importance to the high energy physics community. Once the data and the possibility to analyse it are given, the 'art' of tuning is to choose the 'right' data, possibly enhance the importance of some data sets over others, and to modify the parameters of the simulation so as to reduce the difference between data and simulation. A prominent tool that allows the experienced physicist to perform the tuning is the Professor [21] package, which performs most of the procedure automatically. The complexity of MCEG tuning depends on the dimension of the parameter space used as an input to the event generation. Further, the measured observables are in general functions of many of the parameters used in the simulation. In this contribution, we address the problems of high dimensional parameter determination. We propose a method to choose subsets of parameters to reduce the complexity. We further aim to automatize the tuning process, to be able to retune with minimal effort once improvements are made to the MCEG in use. We call this automation of the tuning process, and the algorithm that performs it, the Autotunes method. As possible real life scenarios we then tune the Herwig 7 and Pythia 8 models and also a hybrid form, namely the Herwig 7 showers with Pythia 8's Lund string model [33,34]. We structure the paper as follows: in Sect. 2 we define the problem and the questions that we want to solve and answer, and describe Professor with its capabilities and restrictions. In Sect. 3 we explicitly define the algorithm and point out how the methods used act mathematically. In Sect. 4 we show how the algorithm was tested. Results of tuning the event generators Herwig 7 and Pythia 8 are presented in Sect. 5. We conclude in Sect. 6 and specify possible next steps.

2 Current state
Improving the choice of parameters, commonly referred to as tuning, is required to produce the most reliable theory predictions. The Rivet toolkit allows comparing Monte Carlo event generator output to data from a variety of physics analyses. Based on this input, different tuning approaches can be followed. The most elaborate approach is tuning 'by hand'. It requires a thorough understanding of the physical processes involved in the generation of events and the identification of suitable observables to adjust every single parameter. A detailed example of such a manual approach is given by the Monash tune [22], the current default tune of the event generator Pythia 8 [7,8]. However, in order to simplify and systematize tuning efforts, a more automated approach is desirable. The Professor [21] tuning tool was developed for this purpose. It allows multiple parameters to be tuned simultaneously.

2.1 Professor: capabilities and restrictions
The Professor method of systematic generator tuning is described in detail in [21].
The basic idea is to define a goodness of fit function between data generated with a Monte Carlo event generator and reference data that is provided by experimental measurements through Rivet. This function is then minimized. Due to the high computational cost of generating events, a direct evaluation of the generator response in the goodness of fit function should be avoided. This is done by using a parametrization function, usually a polynomial, which is fitted to the generator response to give an interpolation that allows for efficient minimization. The following χ² measure is used as a goodness of fit function between the prediction f_i of the Monte Carlo generator, depending on the chosen parameter vector p, and the reference data R_i for each bin i of the observables O. To simplify the notation, each bin in each histogram is now, without losing generality, called an observable, with prediction f_i and reference data value R_i:

  χ²(p) = Σ_i w_i (f_i(p) − R_i)² / Δ_i² ,

where the uncertainty of the reference observable is denoted by Δ_i. Furthermore, a weight w_i is introduced for every observable. These weights can be chosen arbitrarily to bias the influence of each observable in the tuning process. The approach of the Professor method allows multiple parameters to be tuned simultaneously and drastically reduces the time needed to perform a tune. The polynomial approximation of the generator response limits the efficiency for a high number of parameters or a high polynomial degree. The formula for the required number of sample points is given in [21]; already a set of 15 parameters with a third-order approximation requires at least 816 generator samples to form an interpolation (see the short check after this section). To test the stability of such interpolations, the method of runcombinations can be used to check how well the minimization is performed. Here a higher number than the minimal set of points is needed. However, further effort is needed to overcome some of the restrictions that remain:
• The polynomial approximation of the generator response is well suited for up to about ten parameters. Further simultaneous tuning requires many parameter points as input for the polynomial fit, typically exceeding the available computing resources. This is often circumvented by identifying a subset of correlated parameters that should be tuned simultaneously.
• The assignment of weights requires the identification of relevant observables for the set of parameters. Different choices and methods can possibly bias the tuning result.
• Correlations in the data need to be identified in order to reduce the weight of equivalent data in the tune, and thus avoid bias by over-represented data.
• The polynomial approach is reasonable in sufficiently small intervals in the parameters, but might fail if the initial ranges for the sampled parameters are chosen too large and the parameter variation shows a nonpolynomial behaviour. (For the method it would be beneficial to reformulate the theoretical model such that the parameter response on typical experimental observables is of polynomial type. For general observables and event generators such a reformulation is in general not given; additional work to identify such behaviour could be worth pursuing.)

2.2 Suggested improvements
In the Autotunes approach we aim to address some of the issues mentioned above. For high-dimensional problems, we suggest a generic way to identify correlated high-impact parameters that need to be tuned simultaneously, and divide the problem into suitable subsets. Instead of setting weights for every observable by hand, we propose an automatic method that sets a high weight on highly influential observables for every sub-tune, reducing the bias by observables that are better optimized by parameters in another sub-tune. This procedure makes the tuning process more easily reproducible.
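As a quick check of the sample-count scaling quoted above, the number of coefficients of a general polynomial of a given total order in a given number of parameters can be computed directly; this short sketch is our own illustration, not part of Professor:

```python
from math import comb

def n_coefficients(dim: int, order: int) -> int:
    """Number of coefficients of a general polynomial of total
    order `order` in `dim` parameters: C(dim + order, order)."""
    return comb(dim + order, order)

print(n_coefficients(15, 3))  # 816: minimal generator samples for 15 params
print(n_coefficients(10, 3))  # 286: why ~10 parameters remain tractable
```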
As a further improvement, we implement an automated iteration of the tuning process, which takes refined ranges from the preceding tune as a starting point. By a stepwise reduction of the parameter ranges, we improve the stability and reliability of our first-order approximation of the parameter impact and of the polynomial interpolation implemented in Professor.

3 The algorithm
In this section we formulate the algorithm proposed to improve the tuning of a high dimensional parameter space. We propose to organize the algorithm as:
(A) Reduce the dimensionality of the problem by splitting the parameters into subsets, defining sub-spaces and sub-tunes. Here the algorithm should cluster parameters that are correlated.
(B) Assign weights to observables, such that the current sub-tune predominantly acts to reduce the weighted χ² calculation for the corresponding sub-space.
(C) Run Professor on the sub-tunes.
(D) Automatically find new parameter ranges for an iterative tuning.

3.1 Reduce the dimensionality (chunking)
The goal of this step is to split up a high dimensional space (N-dimensional) into subspaces (n-dimensional), such that the clustered parameters are correlated on the observable level. To achieve this we have to define a quantity M that can be maximized or minimized to allow the algorithmic treatment. The parameter space we work with is a hyper-rectangle. The observable definitions usually allow access to one dimensional projections. Here, the 'projection' is the model (implemented in an event generator) at hand. Two issues directly come to mind. First, we explicitly describe the parameter space p ∈ [p_min, p_max] as a hyper-rectangle rather than a hyper-cube. Some of the parameters could have been measured externally, others are purely model specific. A measure which allows comparisons between the parameters needs to be corrected for the initial ranges ([p_min, p_max]) defined by the input. To overcome this first problem, we define p̃_α ∈ [0, 1] as the parameter vector normalized to the input range, and will describe below how a rescaling is performed to regain the information lost by this normalisation and relate it to the variations on the observables. The second issue is the generic observable definition. Some of the observable bins are parts of normalized distributions, or even related to other histograms (as is the case for, e.g., centrality definitions in heavy ion collisions [35]). In other words, the height f_i of an observable bin alone does not define a good generic quantity to minimize. In order to overcome the second problem, we test the observable space with N_search random points in the parameter space, projected with the model onto the observables. The spread for each observable is used to normalize the values to f̃_i ∈ [0, 1]. Note that an influential parameter can be shadowed by a less important parameter if the latter has a too large initial range. After the normalizations to p̃_α and f̃_i are performed, we use the N_search projections to perform linear regression fits for each parameter and each observable bin. Here, the linear dependence defines the slope of each parameter in each bin, df̃_i/dp̃_α; a sketch of this slope extraction is given below.
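A minimal sketch of this normalization and slope extraction, implemented here as one multilinear least-squares fit per bin (for independently sampled points this yields essentially the same per-parameter slopes as separate univariate regressions); all names are hypothetical:

```python
import numpy as np

def normalized_slopes(points: np.ndarray, responses: np.ndarray) -> np.ndarray:
    """Estimate |d f~_i / d p~_alpha| for every observable bin.
    `points` (N_search, N_params) is assumed pre-scaled to [0, 1];
    `responses` (N_search, N_bins) is min-max normalized per bin here.
    Returns the slope matrix S with shape (N_bins, N_params)."""
    spread = responses.max(axis=0) - responses.min(axis=0)
    f = (responses - responses.min(axis=0)) / np.where(spread > 0, spread, 1.0)
    # Least-squares linear fit per bin: f_i ~ intercept + slopes . p
    design = np.hstack([np.ones((points.shape[0], 1)), points])
    coeffs, *_ = np.linalg.lstsq(design, f, rcond=None)
    return np.abs(coeffs[1:]).T  # drop the intercept row
```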
Due to the normalization of the f̃_i range, this slope is influenced not only by the parameter itself, but also by the spread produced by the other parameters. The reduction of the slope thereby encodes a correlation of parameters to other parameters on the observable level. We use the absolute value of the slope to define an averaged gradient or slope-vector S_i. The sum S_Sum = Σ_i S_i has in general unequal entries, one for each parameter in the tune. This indicates that the input ranges [p_min, p_max] are of unequal influence on the observables. To correct for this choice and to improve the clustering of parameters with higher correlation, we normalize each S_i element-wise with S_Sum to create N_i,

  N_i,α = S_i,α / S_Sum,α .

To illustrate the effect of the element-wise normalisation, we show in Fig. 1 how the sum of all normalised vectors of the observable bins (here 3) reaches the parameter-space point (1, ..., 1) (here N = 2). In bin i the component of the new vector N_i along a parameter α is reduced if other observables are sensitive to the same parameter. The direction of N_i indicates the correlation of parameters. We can now use N_i to chunk the dimensionality of the problem. Therefore, we calculate the projection of each of the N_i on all possible n-dimensional sub-spaces. This is done by multiplication with combination vectors J. Here, J is defined as one of all possible N-dimensional vectors with N − n zero entries and n unit entries, where n is again the dimension of the desired subspace, e.g. J = (1, 0, 0, ..., 1, 0, 1). The sub-space then defines a sub-tune. The sum over all projections,

  M_k(J) = Σ_i (N_i · J)^k ,

can serve as a good measure to be maximized. However, due to the normalization of N_i, the sum is equal for all J when k = 1. For the quantity M mentioned at the start of the section we therefore use k = 2, giving

  M(J) = Σ_i (N_i · J)² ,

in order to define the sub-tunes. This choice of k > 1 selects a few strongly correlated parameters over many less correlated ones. The maximal M(J_Step1) defines the first of the sub-tunes (Step 1). For the other steps, we require no overlap between the sub-spaces. This we enforce by requiring a vanishing scalar product J_StepN · J_StepM. It is now possible to perform the tuning in the same order in which the maximal measures M(J) are found. This would, however, first fix parameters that can modify the description of fewer observables. In order to first constrain globally important parameters, and then fix specialized parameters, we invert the order of the found sub-tunes. We have thus split the dimensionality of the problem, and will ensure, in the following, that the observables used in the various sub-tunes are described by the set of influential parameters.

3.2 Assign weights (improved importance)
In the last paragraph, we described how we split up the dimensionality of the full parameter set to allow us to tune subsets, such that parameters with higher correlation on the observable level are tuned simultaneously. To increase the importance of observables that are relevant for the sub-tune, we now try to enhance their relative weight with respect to other observables. Here, we use the same vectors N_i defined in the last paragraph. These vectors, obtained by linear regression and normalized to the overall range of the observable vectors, point in the parameter space and, due to the normalization, relate the importance of other measured observables to the current bin. We define the weight of the observable bins later used to minimize the χ² as

  w_i = ( Π_{α ∈ Step} N_i,α ) / ( Σ_α N_i,α ) ,

where the product runs over the parameters selected by J_Step, the combinatorial vector defined in Sect. 3.1 corresponding to the sub-tune.
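A minimal sketch of the subset selection and weight assignment just described, assuming the element-wise normalized slope matrix N from the previous step; the weight expression encodes our reading of the prose and the helper names are hypothetical:

```python
import numpy as np
from itertools import combinations

def select_subtune(N: np.ndarray, n: int, used: set) -> tuple:
    """Pick the n-parameter subspace maximizing M(J) = sum_i (N_i . J)^2,
    skipping parameters already assigned to earlier sub-tunes.
    N has shape (n_bins, n_params)."""
    d = N.shape[1]
    free = [a for a in range(d) if a not in used]
    best, best_m = None, -np.inf
    for subset in combinations(free, n):
        J = np.zeros(d)
        J[list(subset)] = 1.0
        m = float(np.sum((N @ J) ** 2))
        if m > best_m:
            best, best_m = subset, m
    return best, best_m

def bin_weights(N: np.ndarray, subset: tuple) -> np.ndarray:
    """Per-bin sub-tune weights: product of the selected components of
    N_i over the sum of all its components (our reading of the text)."""
    numerator = np.prod(N[:, list(subset)], axis=1)
    return numerator / np.maximum(N.sum(axis=1), 1e-12)
```

The exhaustive loop over combinations is exponential in the number of free parameters and is only meant to make the measure explicit; a production implementation would prune or search greedily.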
This weight has the properties that the multiplication in the numerator increases the weight of the important bins for the sub-tune, while the sum over the components of N_i in the denominator reduces the importance of bins that are equally or more important to other parameters. Note that the N_i themselves are not normalised; only the sum over i is normalised in each component. This weighting enhances the effect of bins that have been identified as influential with respect to the parameters tuned in the given sub-tune. It also reduces the effect of bins that are expected to be relevant in other sub-tunes. Thereby, the algorithm reduces bias from not yet optimized data bins and less relevant distributions. This weighting is applied for each tune step individually and does not take into account the physicist's knowledge of relevant or unsuitable data from a physics perspective. An additional global weighting based on physical motivations can be performed on top of this algorithmic treatment, but is not considered in this work. We want to highlight that the splitting of the parameter space described in Sect. 3.1, as well as the assignment of weights described here, are blind to the data measured in the experiment. Only the generator response on the defined observables is needed.

3.3 Run Professor (tune-steps)
Before we start the first iteration and step, we perform a second order Professor tune as starting condition, referred to as BestGuessTune. Here, we make use of the N_search sampled points used to determine the splitting of the parameter space and the weight setting described in the previous sections. Instead of giving ranges and a starting parameter point, only ranges are required as starting conditions. As the starting point has an impact on the sub-tunes that are performed in the beginning, the BestGuessTune aims to reduce user interference. After splitting the parameter space and enhancing the weights of the important observables for the sub-tunes, we use the capability of Professor to tune the parameter space of each step. When a step is performed, we use the Professor results of this and all previous steps to fix the parameters for the following step. For the individual sub-tunes, we make use of the runcombination method of Professor to build subsets of the randomly sampled parameter points. This produces modified polynomial interpolations and gives a spread in the χ² values of the best fit values. We choose the result associated with the best χ² as the best tune value. To give a measure for the stability of the tune, we choose the runcombinations that give the best 80% of the χ² values. For those we extract the corresponding parameter range and add a 20% margin on both sides (a sketch of this range refinement is given below). To elucidate the effect, an example for the tuning of the strong coupling constant α_S is given in Fig. 2. Here, the blue points correspond to the 80% best combinations and the green dashed lines give the measure of stability. Diagrams like Fig. 2 are automatically produced by the program for each parameter and tune step. In Fig. 2, three iterations are shown, as described in the next section.

3.4 Find new ranges and iterate the procedure (Iteration)
The measure of stability defined in Sect. 3.3 also serves as input for the next iterations. Here we make use of the redefined ranges. An iterative tuning is important, since the first set of parameters has been influenced by the user's choices, and a next iteration can have a significant impact on the parameter values.
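The range refinement just described can be sketched as a small helper that takes the per-runcombination best-fit values and χ² of a single parameter; this is a hypothetical illustration, not the Professor API:

```python
import numpy as np

def refine_range(best_fits: np.ndarray, chi2: np.ndarray,
                 keep: float = 0.8, margin: float = 0.2) -> tuple:
    """Keep the best `keep` fraction of runcombinations by chi^2,
    take the parameter range they span, and enlarge it by `margin`
    on both sides as the starting range for the next iteration."""
    order = np.argsort(chi2)
    kept = best_fits[order[: max(1, int(keep * len(chi2)))]]
    lo, hi = kept.min(), kept.max()
    pad = margin * (hi - lo)
    return lo - pad, hi + pad
```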
For very expensive simulations, at least a retuning of the first step's parameter space seems desirable. The program is set up such that one can use the output of the first full tune as input for the next iteration.

4 Testing and findings
Before applying the Autotunes framework to perform a LEP retune of Pythia 8, Herwig 7, and a combination of both in Sect. 5, we test the method under idealized conditions. First, we tune the coefficients of a set of polynomials. The observables used for the tune are constructed from the polynomials for a random choice of coefficients, see Sect. 4.1. As a second test, we tune the Pythia 8 event generator to pseudo data generated with randomized parameter values. In both scenarios, it is desirable to recover the randomly chosen parameter values that were used to generate the observables.

4.1 Testing the algorithm under ideal conditions
To test the algorithm, we first introduce a simplified and fast generator. We define the projection in terms of m-dimensional tensors G_{...m,a}, correlation matrices C^{ir}_a, and parameter points p_i. Upper indices are summed over the parameter dimensions. We fill G_{...m} with random numbers and use C^{ir}_a to correlate subsets of parameters. Here C^{ir}_a is a diagonal matrix with constant entries k > 1 if bin a should be enhanced for parameter i, and one if not. By building ranges, we can define enhanced parameter sets. As an example, we use a d = 15 dimensional parameter space with several enhanced parameter subsets. Under these ideal conditions, we search for the correlations with the procedure described in Sect. 3.1. In Fig. 3 (left), the weights for the parameter correlations are shown. The ideal combinations defined above create the highest weights, and would therefore be detected as correlated by the algorithm. In a real life MCEG tune the correlations are much less pronounced. In the right panel of Fig. 3, we show the weight distribution for the example of the Herwig 7 tune described in Sect. 5.2.1. Once the correlated combinations are found, the algorithm continues with the procedure described in Sects. 3.3 and 3.2. As the result of each full tune serves as input to the next iteration, it is possible to visualize the outcome as a function of tune iterations. Figure 4 shows this visualisation as produced by the program. Each parameter (A-O) is normalised to its initial range and plotted with an offset. In this example, it is possible to show the input values of the pseudo data with dashed lines. This is not possible when tuning is performed on real data. As Professor is very well capable of finding polynomial behaviour, the parameter point that the method aims to find is already well constrained after the first iteration. However, subsequent iterations still improve the result. This may be seen, for example, in the third and last line. The procedure to split the parameter space into smaller subsets and to assign weights can suffer from numerical and statistical noise if we consider many observables. In Appendix A, we discuss the range dependence and show that the weight distributions are fairly stable if the same parameters are found to be correlated. It is further possible to ask for weights as if all parameters were to be tuned independently. From the tuning perspective this seems an unnecessary feature, but it can help to find observables that are likely influenced by a model parameter; e.g., it is possible to identify the range of bins where the bottom mass has influence on jet rates. A toy reimplementation of such a simplified generator is sketched below.
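This is a hypothetical reimplementation of the idea (a polynomial response per bin with boosted couplings for chosen bin-parameter pairs), not the paper's exact tensor contraction:

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def make_toy_generator(d: int, n_bins: int = 50, order: int = 2,
                       enhanced: dict = None, k: float = 3.0):
    """Build a cheap stand-in for an event generator: each bin responds
    polynomially to the d parameters, and the couplings of the
    (bin -> parameters) pairs listed in `enhanced` are boosted by k
    to mimic correlated, high-impact parameter subsets."""
    tensors = []
    for m in range(1, order + 1):
        G = rng.normal(size=(n_bins,) + (d,) * m)
        for a, params in (enhanced or {}).items():
            for i in params:
                G[a, ..., i] *= k  # boost sensitivity of bin a to p_i
        tensors.append(G)

    def generator(p: np.ndarray) -> np.ndarray:
        out = np.zeros(n_bins)
        for G in tensors:
            term = G
            for _ in range(G.ndim - 1):
                term = term @ p  # contract one parameter index at a time
            out += term
        return out

    return generator

# Example: bins 0-9 strongly coupled to parameters 0 and 1.
gen = make_toy_generator(d=15, enhanced={a: (0, 1) for a in range(10)})
print(gen(rng.uniform(size=15))[:3])
```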
4.2 Tuning Pythia 8 to pseudo data
As a second test of our method, we use Pythia 8 to generate pseudo data for a random choice of 18 relevant parameter values. We then use three different methods to tune Pythia 8 to this set of pseudo data, and try to recover the true parameters. In all methods, we divide the tuning into three sub-tunes. The first method is a random selection of parameters out of the full set, with unit weights on all observables. In the second method, we choose the simultaneously tuned parameters based on physical motivation, but still use unit weights on all observables. Finally, we use the Autotunes method to divide the parameters into steps and automatically set weights as described in Sect. 3.

Fig. 4 caption: Iterated tuning to polynomial pseudo data using the Autotunes method.

The choice of parameters used in the physically motivated method is given in Table 1. The first step collects parameters that have a significant influence on many observables, combining shower and Pythia 8 string parameters. The second step gathers additional properties of the string model [33,34], focusing on the flavor composition. The last step then tunes the ratio of vector-to-pseudoscalar meson production. The results of the three tuning approaches that aim to recover the Pythia 8 pseudo data parameters are shown in Fig. 5a-d. None of the approaches is capable of exactly recovering all of the original parameter values. This suggests that close-by points in parameter space are well suited to reproduce the pseudo data observable distributions. However, the iterated Autotunes method improves the agreement of the recovered parameters by avoiding large mismatches, e.g. in the StringFlav:mesonBvector parameter. In the physically motivated and random approaches, there is a certain chance that parameters are strongly constrained by observables that also depend on other parameters. If these are not identified and included in the same sub-tune, both parameters get constrained. Thus, the optimal configuration is not necessarily recovered, as can be observed from the seemingly fast convergence of the random method. Figure 5a shows that some parameters are constrained rather quickly, without recovering the original value. By iteratively identifying such sets of parameters, the Autotunes method attempts to avoid these mismatches. On the other hand, Fig. 5b indicates that the fixed physically motivated combinations lead to a rather slow convergence of parameters. Overall, it is difficult to assess the tune quality from Fig. 5a-c. Figure 5d shows the summed, squared and normalized deviation of the recovered from the true parameter values (a sketch of this measure is given below). Each approach is performed three times to assess the stability of the results. The random approach uses random combinations of parameters for the tuning steps, so we see a wide spread of results. The iterative tuning using our fixed physically motivated parameter choice is more reliable, showing a lower spread and better results. The Autotunes method leads to the best agreement with the original parameters. More stable results in the physically motivated and the Autotunes methods could be achieved by using higher statistics for both the event generation and the sampling. We see that in the physically motivated and the Autotunes approach, a second tuning iteration affects the results, mostly, but not necessarily, improving the parameter agreement. Further iterations have a minor impact.
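The comparison measure of Fig. 5d can be sketched as a small helper, with each parameter deviation normalized to its initial tuning range; a hypothetical implementation of our reading of the text:

```python
import numpy as np

def summed_deviation(recovered: np.ndarray, truth: np.ndarray,
                     p_min: np.ndarray, p_max: np.ndarray) -> float:
    """Summed squared deviation between recovered and true parameter
    values, each normalized to the initial range [p_min, p_max]."""
    normalized = (recovered - truth) / (p_max - p_min)
    return float(np.sum(normalized ** 2))
```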
We tuned to a rather inclusive list of analyses 9 available within the Rivet framework for the collider energy at the Z-Pole. To this point we do not weight the LEP observables, but make use of the sub-tune weights described in Sect. 3.2. 10 The tunes make use of the default hadronisation models of the event generators Herwig 7 and Pythia 8. We further present a new tune of the Herwig 7 event generator interfaced to the Pythia 8 string hadronisation model. The details of the simulations can be found in the following sections. The results are presented in Table 2 and Table 3, listing default values, tuning ranges of the parameters, as well as the tuning results using the Autotunes method. Random Physically Autotunes (d) Comparison in summed deviation from true parameters, each normalized to a range between 0 and 1. Iterating the Autotunes approach leads to better agreement with initially chosen parameters. Fig. 5 Parameter development as a function of tune iterations. Each iteration consists of a full tuning procedure with three sub-tunes, using the optimized parameter values and ranges of the preceding iteration as starting conditions. The dashed lines in Fig. 5a-c show the true parameter point that was used to produce the pseudo data. The uncertainty bands are given by 80% of the best fit values in the Professor runcombinations and an additional 20% margin. In Fig. 5d we compare the the summed deviation for three distinct tunes for the random, physically motivated and Autotunes method 5.1 Retuning of Pythia 8 The tune of Pythia 8.235 is performed by using LEP data. We use Pythia 8's standard configuration as described in the manual, including a one-loop running of α S in the parton shower. The tuned parameters, initial ranges and tune results are given in Table 2 in Appendix B. The given ranges on the tune results, obtained from the variation of the optimal tune in different run combinations, can be interpreted as a measure of the stability of the best tune. A wide range suggests that different configurations give tunes of similar χ 2 . The extraction of the strong coupling α S is the most stable result in the tune. The modification of the longitudinal lightcone fraction distribution in the string fragmentation model for strange quarks (StringZ:aExtraSQuark) is very loosely constrained, suggesting that the data that is employed in the tune is not suitable to extract this parameter. We tune 18 parameters in three sets of six parameters each. In the Pythia 8 tune, the parton shower cutoff pTmin is surprisingly loosely constrained. Checking the combinations of parameters that the Autotunes method chooses, we note that pTmin is found to be correlated with the string fragmentation parameters aLund and bLund in every iteration, which are also rather loosely constrained. This suggests that different choices for these three parameters can provide tunes of similar χ 2 . Retuning of Herwig 7 As another real life example we tune the Herwig 7 event generator to LEP data. Here the tune is based upon version Herwig 7.1.4 and ThePEG 2.1.4. We perform two tunes -cluster and string model -for both showers, the QTilde shower [68] and the dipole shower [69]. For the presented tunes we do not employ the CMW scheme [70], but keep the α S (M Z ) value a free parameter. This results in the enhanced value compared to the world average [71]. Tuning Herwig 7 with cluster model We retune the cluster-model with a 22 dimensional parameter space. Here, we require tree sub-tunes and performed four iterations. 
The results are listed in Appendix B. Comparing the results, we note that the method is in general able to find values outside of the given initial parameter ranges, see e.g. the α_S(M_Z) or the nominal b-mass. This can be caused by the Professor interpolation reaching outside the given bounds, or by the determination of the new ranges for the next iteration. Apart from the parameters that influence the cluster fission process of heavy clusters involving charm quarks (ClPowCharm and PSplitCharm), the parameters are comparable between the two shower models. Furthermore, in the cluster model the fission parameters are correlated; it is reasonable to assume possible local minima in the χ² measure.

5.2.2 Tuning Herwig 7 with Pythia 8 strings
The usual setups of the event generators are genuinely well-tuned, and even though the tests of Sect. 4 allow the conclusion that relatively arbitrary starting points lead to similar results, completely ignoring this previous knowledge seems undesirable. To create a real life example and to allow useful future studies, we exploited the fact that both the C++ version of the Ariadne shower and the Herwig 7 event generator are based on ThePEG. Furthermore, with minor modifications, the unpublished interface between ThePEG and Pythia 8 (called TheP8I, written by L. Lönnblad) allowed the internal use of Pythia 8 strings with Herwig 7 events. Since no tuning for this setup had been attempted before, the starting conditions needed to be chosen with less bias compared to the other results of this section. When we compare the values obtained for the Herwig 7 showers to the Pythia 8 shower, we note a comparably large value for the Pythia 8 α_S. In contrast, the cutoff in the transverse momentum in Pythia 8 is rather small. The reason for this contradictory behaviour can be found in the order at which the two codes evaluate the running of the strong coupling. While Herwig 7 chooses an NLO running, Pythia 8 evolves α_S with LO running and therefore suppresses the radiation at low energies. Even though the shower models are rather different, the differences in the best fit values of the parameters are moderate. Less constrained parameters, like the popcornRate, which influences part of the baryon production, or the additional strange quark parameter aExtraSQuark, show a correspondingly large uncertainty. It can be concluded that the data used for tuning hardly constrains these parameters.

6 Conclusion and outlook
We presented an algorithm that allows semi-automatic Monte Carlo event generator tuning in a high dimensional parameter space. We motivated and described how the parameter space can be split into sub-spaces, based on the projections to, and variations in, the observable space. We then assign increased weights when performing the sub-tunes, such that influential observables are highlighted. It is then possible to use the output of any tune step as starting condition for the next steps; the procedure is therefore iterative. Under ideal conditions, we performed tests to check that the algorithm finds correlated parameters, and showed in a realistic environment that pseudo data could be reproduced better by the algorithm than by random or physically motivated tunes. As real life examples we tuned the Pythia 8 and Herwig 7 showers with their standard hadronisation models, and modified the Herwig 7 generator to allow consistent hadronisation with Pythia 8's Lund string model. The method allows tuning to be performed with far less human interaction.
It also allows different models to be tuned with a similar bias. Such tunes can then be used to identify mismodelling, with the assurance that the origin of a difference in the data description is less likely to be an artifact of a better or worse tuning. At the current stage we did not assign weights or uncertainties other than the sub-tune weights and the uncertainties given by the experimental collaborations. We note that the difference between higher multiplicity merged simulations and pure parton shower simulations can serve as an excellent reduction weight to suppress observables influenced by higher order calculations. However, the investigation of such procedures goes beyond the scope of this paper and will be the subject of future work. Further, we did not address the third point of the restrictions mentioned in Sect. 2.1, which describes over-represented data. We postpone such studies, which would include clustering of slope-vectors to reduce this influence, to future work.

Data Availability Statement: This manuscript has no associated data or the data will not be deposited. [Author's comment: For this work, no data was measured that would justify additional material.]

Open Access: This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. Funded by SCOAP3.

Appendix A: Range dependence
The algorithm to split the dimensions and to assign weights to sub-tunes is constructed such that correlations should still be found when the parameter ranges are varied. This is not always possible if the parameter ranges are strongly modified. It is possible that the slope vectors, which are evaluated by averaging over the other n − 1 dimensions, are modified by the newly defined initial ranges. It is even possible that the ranges of other parameters influence the slope, as their spread modifies the normalisation. In order to show such behaviour (and also to illustrate the weight distributions), we choose three different setups for the event generator Herwig 7. We choose d = 4 and try to split the dimensions in half, using the four parameters Cl_max^light, p_T^min, α_S(M_Z), and the gluon constituent mass g_CM, with setup-dependent initial ranges. The results for the parameter grouping and the weight distributions are depicted in Fig. 6. While the algorithm splits the parameter space in setup 1 and setup 2 such that Cl_max^light and p_T^min should be tuned in the first step, followed by α_S(M_Z) and g_CM in a second step, the modification of the initial ranges has the effect that the algorithm favours the pairings (Cl_max^light, g_CM) and (α_S, p_T^min) for steps 1 and 2 in setup 3. While it is possible that by changing the initial ranges the pairing flips and other parameter groups are found, the fact that neighbouring bins show a similar behaviour supports the concept of meaningful weight distributions.
It would be possible to correlate neighbouring bins or to introduce a smoothing algorithm to make the weights more stable, but such a modification can be introduced once issues with the current algorithm appear. In principle, it is possible to visualize, for each parameter, the weights of any desired sub-tune choice. This can help to identify observables that are influential for individual parameters and give insights into unexpected behaviours. Already from the weight distributions shown in Fig. 6, we can deduce that p_T^min is of great importance for the transverse momentum out-of-plane (see the upper left panel). Further, modifications of the gluon constituent mass g_CM will influence the difference in the hemisphere masses (see the lower right panel).
Hybrid Graphene-Plasmonic Gratings to Achieve Enhanced Nonlinear Effects at Terahertz Frequencies

High input intensities are usually required to efficiently excite optical nonlinear effects in ultrathin structures. This problem is particularly critical at terahertz (THz) frequencies because high input power THz sources are not available. The demonstration of enhanced nonlinear effects at THz frequencies is particularly important since these nonlinear mechanisms promise to play a significant role in the development and design of new reconfigurable planar THz nonlinear devices. In this work, we present a novel class of ultrathin nonlinear hybrid planar THz devices based on graphene-covered plasmonic gratings exhibiting a very large nonlinear response. The robust localization and enhancement of the electric field along the graphene monolayer, combined with the large nonlinear conductivity of graphene, can lead to boosted third harmonic generation (THG) and four-wave mixing (FWM) nonlinear processes at THz frequencies. These nonlinear effects exhibit very high conversion efficiencies and are triggered by realistic input intensities with relatively low values. In addition, the THG and FWM processes can be significantly tuned by the dimensions of the proposed hybrid structures, the doping level of graphene, or the input intensity values, whereas the nonlinear radiated power remains relatively insensitive to the incident angle of the excitation source. The presented nonlinear hybrid graphene-covered plasmonic gratings have a relatively simple geometry and can be used to realize efficient third-order THz effects with limited fabrication complexity. Several new nonlinear THz devices are envisioned based on the proposed hybrid nonlinear structures, such as frequency generators, all-optical signal processors, and wave mixers. These devices are expected to be useful for nonlinear THz spectroscopy, noninvasive THz subwavelength imaging, and THz communication applications.

I. Introduction
Graphene is a two-dimensional (2D) material with unique electric and optical properties [1,2]. Surface plasmons are formed at its surface when excited by THz radiation [3], a property of paramount interest for a material to be used in the envisioned integrated THz plasmonic systems [4]. In addition, the conductivity of graphene can be dynamically controlled and tuned by electrostatic doping, usually by using a gating voltage configuration [5]. The tunable and reconfigurable functionalities of graphene have recently been widely investigated with the goal of designing different adaptive graphene-based THz devices, such as polarizers [6], cloaks [7], phase shifters [8], optical modulators [9], and absorbers [10]. However, light absorption along a graphene monolayer is usually very weak due to its single-atom thickness, which is a deleterious property for the practical applications of graphene-based devices [11]. Fortunately, the absorption can be enhanced by patterning doped graphene monolayers into periodic nanodisks [12], combining graphene with insulating layers [13], or integrating graphene with microcavities [14]. Nevertheless, complicated fabrication procedures are required to fabricate most of the aforementioned graphene-based configurations, making these designs prone to fabrication imperfections and other limitations. Recently, the magnetic resonance of a metallic grating was used to enhance the absorption of graphene monolayers at THz frequencies [15].
This structure is easier to fabricate and can potentially be more practical for the design of compact THz devices. By placing a graphene monolayer over a metallic grating, strong and localized electric fields are obtained along the graphene when the plasmon modes of both graphene and grating coincide. Note that this grating design is different from the recently proposed hybrid plasmonic waveguide modulator loaded with graphene [16]. Interestingly, graphene has been found to possess strong nonlinear electromagnetic properties [17,18]. The second-order nonlinear response of a graphene monolayer usually vanishes within the dipole approximation [19], since graphene is centrosymmetric. However, graphene has been experimentally demonstrated to possess a remarkably strong third-order nonlinear susceptibility χ⁽³⁾ at THz frequencies [20]. The strong third-order nonlinear response originates from the intraband electron transitions [21], as well as from the resonant nature of the light-graphene interactions. Both of these effects are dominant under THz radiation illumination. Specifically, the Kerr nonlinear susceptibility χ⁽³⁾ of graphene was found to reach very high values in recent experiments [22]. Two of the most common third-order nonlinear processes are third-harmonic generation (THG) and four-wave mixing (FWM). In the case of THG, an incident wave at frequency ω interacts with the system to produce a wave at three times the incident frequency, 3ω [23]. The significant advantage of THG compared to other nonlinear processes is that it can be achieved by using a single-wavelength source. This nonlinear process can be used to realize higher transverse resolution for nonlinear imaging and microscopy techniques [24] and improved sensing [25]. THG has also been reported to be generated by graphene and few-layer graphite films, but with relatively low efficiency despite the large nonlinear graphene susceptibility [26,27]. FWM is another interesting third-order nonlinear process that has found a plethora of applications in nonlinear imaging, wavelength conversion, optical switching, and phase-sensitive amplification [28-32]. In contrast to THG, during the process of FWM two pump photons at ω₁ and one probe photon at ω₂ are absorbed and mixed in the nonlinear medium, and a photon at ω_FWM = 2ω₁ − ω₂ is generated and re-emitted. The FWM efficiency strongly depends on the field enhancement at the input pump and probe frequencies and on proper phase matching conditions [33]. The field enhancement along graphene and other 2D materials is usually very weak due to the poor coupling of the incident electromagnetic radiation to these ultrathin media. However, the phase matching condition can be relaxed in the case of 2D materials, since they are extremely thin and phase cannot be accumulated along their thickness, in contrast to conventional bulk nonlinear materials. The increase in the nonlinear efficiency of 2D materials still remains elusive and is the subject of intense ongoing research [34,35]. Usually, very high input intensities are required to excite third-order nonlinear effects in ultrathin nonlinear materials due to their extremely weak nonlinear response, leading to very poor nonlinear efficiency [36]. This detrimental effect is particularly acute at low THz frequencies, since high input power THz sources do not exist [37].
The use of different plasmonic configurations has been demonstrated to efficiently boost optical nonlinear effects, mainly due to the enhanced local electric fields and relaxed phase matching conditions at the nanoscale [38-52]. Plasmonic effects can indeed lead to enhanced effective nonlinear susceptibilities in configurations composed of materials with weak intrinsic nonlinear properties. Yet, the plasmonic boosting of nonlinear effects has mainly been achieved in the infrared and visible spectrum, and the enhancement of nonlinearities at THz frequencies still remains limited. In the current work, we study an alternative approach to boost nonlinear effects specifically at THz frequencies based on ultrathin hybrid plasmonic structures. In particular, we investigate the potential of graphene-covered metallic gratings to dramatically amplify the inherently weak nonlinear response of conventional metallic gratings and isolated 2D materials [34]. More specifically, it will be demonstrated that the addition of graphene in the proposed hybrid plasmonic structures is of paramount importance to the boosting of different nonlinear processes. This is mainly due to three reasons: a) the large field enhancement and confinement arising from the strong interference between the surface plasmon excited by the graphene monolayer and the localized plasmon confined in the metallic grating [53]; b) the high nonlinear response of graphene [54], which is further boosted in the presence of the strong localized fields despite its one-atom thickness; and c) the perfect phase matching condition that is achieved along the subwavelength thickness of graphene. For these reasons, the efficiency of both THG and FWM processes will be greatly enhanced with the proposed graphene-covered grating structures. Furthermore, the third-order nonlinear graphene conductivity is a function of the Fermi energy or doping level of graphene [55]. This effect will be utilized to dynamically control the presented THz third-order nonlinear processes by tuning the doping level via a gate voltage configuration [3,6]. The voltage values required for this tuning are relatively low, as will be demonstrated later. This tunable mechanism allows the efficient generation of enhanced reconfigurable THz nonlinearities and provides the possibility to develop adaptive graphene-based nonlinear THz devices [56-60]. In order to investigate the enhancement in the nonlinear response of the proposed graphene-covered plasmonic grating, we employ full-wave numerical simulations based on the finite element method (FEM) using COMSOL Multiphysics. The FEM equations are substantially modified to include the appropriate nonlinear polarizabilities and currents in Maxwell's equations, in order to accurately simulate the presented nonlinear effects. The full-wave simulations are ideal for calculating the nonlinear radiated power by integrating the power outflow over a surface that surrounds the device under study, which constitutes an ideal simulation scenario for accurately comparing the obtained theoretical results with potential results from future nonlinear experiments. The proposed hybrid structure can be readily verified experimentally with conventional fabrication techniques.
For example, the graphene and the metallic grating can be fabricated separately, using chemical vapor deposition on a copper foil and e-beam lithography [61,62], respectively, and then combined to create the proposed hybrid structure. Note that the presented plasmonic grating and graphene can support plasmons only when they are excited by transverse magnetic (TM) polarized waves [63] in the THz frequency range used here. Hence, TM polarized incident waves are considered as the excitation source of the proposed system throughout this work. The paper is organized as follows: first, we theoretically analyze the linear response of the proposed hybrid graphene-metal system by using its equivalent circuit model. The validity of the theoretical method is verified by comparing several analytical results with full-wave simulations in Section II. The effects of the geometrical dimensions, graphene Fermi level, and different incident angles are also investigated in the same section. Next, we present in Sections III and IV the enhancement of the THG and FWM nonlinear processes, respectively, due to the proposed hybrid nonlinear graphene-covered plasmonic grating. By comparing the nonlinear performance of the proposed hybrid gratings to other conventional non-hybrid structures, such as a bare graphene monolayer or a plain metallic grating, it is demonstrated that the efficiencies of both THG and FWM are increased by multiple orders of magnitude. Moreover, it is shown that the presented enhanced nonlinear responses can be made tunable by varying the geometry of the proposed hybrid grating or the graphene doping level. Finally, we also demonstrate the relative insensitivity of the proposed system to the angle of incidence under oblique illumination.

II. Perfect Absorption of THz Radiation

The geometry of the proposed hybrid graphene-covered metallic grating is illustrated in Fig. 1(a). The grating is periodic in the x-direction with period p and is assumed to extend to infinity in the y-direction. It is made of gold (Au), with THz optical constants calculated by using the Drude model [64]. The height and trench width of the grating are equal to d and b, respectively. The ground plane is thick enough to be considered opaque to the impinging THz radiation, leading to zero transmission. In our theoretical analysis, we initially assume the metallic grating to be made of perfect electric conductor (PEC), which is a good approximation since the electromagnetic fields minimally penetrate metals at low THz frequencies [65]. The grating is covered by a graphene monolayer sheet with doping or Fermi level $E_F$. Intraband transitions dominate the graphene response in the low THz frequency range, and the linear conductivity of graphene can then be expressed by using the Drude formalism: $\sigma_g(\omega) = \frac{e^2 E_F}{\pi \hbar^2}\,\frac{i}{\omega + i/\tau}$, where $\omega$ is the angular frequency and $\tau$ is the relaxation time, which is assumed to be equal to $\tau = 10^{-13}$ s throughout this work.
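As a quick numerical illustration of the Drude conductivity just defined, the short Python sketch below evaluates $\sigma_g$ for the parameters used in this work; the function name and the example evaluation at 8.8 THz are ours and purely illustrative.

```python
from scipy.constants import e, hbar, pi

def graphene_drude_conductivity(freq_thz, fermi_level_ev, tau=1e-13):
    """Intraband (Drude) sheet conductivity sigma_g(omega) of graphene,
    valid at low THz frequencies where intraband transitions dominate."""
    omega = 2 * pi * freq_thz * 1e12          # angular frequency (rad/s)
    E_F = fermi_level_ev * e                  # Fermi level in joules
    # sigma_g = (e^2 E_F / (pi hbar^2)) * i / (omega + i/tau)
    return (e**2 * E_F / (pi * hbar**2)) * 1j / (omega + 1j / tau)

# Example: E_F = 0.1 eV, tau = 0.1 ps, near the 8.8 THz resonance used later
sigma = graphene_drude_conductivity(8.8, 0.1)
print(f"sigma_g = {sigma * 1e6:.1f} microsiemens")   # ~ (37 + 205j) uS
```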
The proposed structure is illuminated by a TM polarized wave (magnetic field in the y-direction) with an incident angle $\theta$ with respect to the z-direction. The inset at the left side of Fig. 1(b) shows the equivalent circuit model [67], with $Y_0$ and $Y_g$ being the corresponding characteristic admittances of the surrounding air and grating trench regions, respectively. In these formulas, n is the refractive index of the air around the grating and inside the grating trench, and $k_0$ is the free-space wavenumber. The graphene sheet is modeled as an additional shunt admittance $Y_S$ in the equivalent circuit model, which can be calculated by the simple formula $Y_S = \sigma_g$. Note that the trench width b is much smaller than the period p in all our designs, so the graphene can be placed over the grating without being bent at the corrugations. In order to verify the accuracy of the presented equivalent circuit model, we also compute the response of the proposed structure by using numerical simulations based on COMSOL Multiphysics. The structure is again assumed to be infinite along the y-direction in the numerical modeling, as shown in Fig. 1(a), and is modeled as a 2D system to accelerate the calculations. Periodic boundary conditions are employed in the x-direction and port boundaries are placed in the z-direction to create the incident plane wave. Graphene is modeled as a surface current, due to its planar (2D) nature, described as $J = \sigma_g E$, where E is the electric field along its surface and $\sigma_g$ is the linear graphene conductivity given before by the Drude model. The computed absorptance spectra based on the theoretical and numerical methods are shown in Fig. 1(b) with black and blue lines, respectively, and are found to be in good agreement. The small frequency shift between the theoretical and simulation results can be attributed to the approximation of gold as PEC in the theoretical model, as well as to the finite mesh size used in the full-wave modeling. However, both results are very similar, and one pronounced perfect absorptance peak is demonstrated in Fig. 1(b) at the resonance of the hybrid grating. At this resonant point, a magnetic plasmon mode is formed due to the generation of highly localized magnetic fields inside the grating's trench, accompanied by high electric fields that are expected to boost nonlinearities [15]. The electric field enhancement distribution along the structure at the resonant frequency is demonstrated in the right inset of Fig. 1(b); it is computed as the ratio $|E|/E_0$, where $E_0$ represents the amplitude of the incident electric field. Interestingly, the electric field is enhanced by approximately nineteen times inside the grating's trench and, more importantly, along its surface, where graphene is deposited. The obtained perfect absorption indicates a strong coupling and interference between the THz plasmons of graphene and the metallic grating. In addition, we calculate the absorptance of a plain metallic grating (i.e., without graphene on top of it) by using the same numerical method. The result is shown by the green line in Fig. 1(b). It is interesting that a substantial frequency blueshift is obtained when graphene is introduced over the grating. The addition of graphene will also lead to dynamic tunability of the absorption resonant frequency, as will be demonstrated later. When the polarization of the incident wave is switched and the electric field is oriented along the length of the grating (transverse electric (TE) polarization), no resonance is observed, since neither the graphene nor the metallic grating can support TE plasmons and, as a result, they cannot couple to the incident TE radiation. We also verified that the absorptance of a flat gold substrate with and without graphene on top is very low, because there is no plasmon formation along the flat interface [73].
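To make the circuit picture concrete, the sketch below assembles an absorptance spectrum from such a shunt-loaded transmission-line model. The load construction is our assumption: the trench is treated as a short-circuited stub of depth d weighted by the fill factor b/p, graphene enters as the parallel shunt admittance $Y_S = \sigma_g$, and transmission is zero. The authors' exact admittance definitions in [67] may differ, and this simple homogenization does not capture the field concentration in the narrow trench, so it will not reproduce the near-unity peak of Fig. 1(b); it only illustrates the mechanics of the calculation.

```python
import numpy as np
from scipy.constants import c, e, hbar, mu_0, epsilon_0

eta0 = np.sqrt(mu_0 / epsilon_0)          # free-space impedance, ~377 ohm

def sigma_g(f_thz, E_F_ev=0.1, tau=1e-13):
    """Intraband Drude sheet conductivity of graphene (see main text)."""
    w = 2 * np.pi * f_thz * 1e12
    return (e**2 * (E_F_ev * e) / (np.pi * hbar**2)) * 1j / (w + 1j / tau)

def absorptance(f_thz, p=8e-6, b=0.6e-6, d=8e-6, n=1.0):
    """A = 1 - |Gamma|^2 for the assumed load: a shorted trench stub
    (fill factor b/p) in parallel with the graphene admittance Y_S."""
    k0 = 2 * np.pi * f_thz * 1e12 / c
    Y0 = n / eta0                                        # air admittance
    Y_stub = -1j * (b / p) * (n / eta0) / np.tan(n * k0 * d)
    Y_load = Y_stub + sigma_g(f_thz)
    gamma = (Y0 - Y_load) / (Y0 + Y_load)                # reflection coeff.
    return 1.0 - abs(gamma)**2                           # T = 0 (ground plane)

f = np.linspace(6, 12, 601)
A = np.array([absorptance(fi) for fi in f])
print(f"peak absorptance {A.max():.3f} at {f[A.argmax()]:.2f} THz")
```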
Next, we investigate the effect of the grating's geometry on the calculated perfect absorption of the proposed hybrid graphene grating. Throughout the remainder of this work, unless otherwise specified, we assume the following realistic graphene parameters, $\tau = 10^{-13}$ s and $E_F = 0.1$ eV, and the following practical-to-realize microscale grating dimensions: p = 8 µm, d = 8 µm, and b = 0.6 µm. Figure 2(a) displays a contour plot of the computed absorptance for TM polarized plane waves impinging at normal incidence, as a function of frequency and period, where the grating dimensions d = 8 µm and b = 0.6 µm are kept fixed. Clearly, the absorptance remains strong as we vary the period p from 5 µm to 25 µm, indicating a strong coupling between grating and graphene plasmons independent of the periodicity. We have also verified that higher-order diffraction modes or surface waves are not excited by the proposed grating for incident frequencies in the currently used range of 6-12 THz. The resonant frequency of the perfect absorptance is only slightly affected by the period, while the bandwidth of the resonance peak decreases as the period is increased. Interestingly, the resonant frequency of the perfect absorption is found to be more sensitive to the height of the grating. It decreases as the grating height is increased, as illustrated in Fig. 2(b), where the computed absorptance is plotted as a function of frequency and grating height. Thus, it is possible to tune the perfect absorption to different frequencies by modifying the grating geometry. The perfect absorptance can also be tuned without changing the grating geometry, by electrostatically gating the graphene and thereby changing its doping level. The effect of different graphene Fermi level values on the perfect absorptance of the proposed graphene-covered gratings is demonstrated in Fig. 2(c). There is a substantial shift in the resonant frequency of the perfect absorptance as the Fermi level is increased. The modification of the Fermi level leads to different graphene properties and, as a result, to a frequency shift in the resonant response of the graphene plasmons. However, it is interesting that the absorptance remains perfect between 0.1 eV and 0.45 eV and only blueshifts as the doping level is increased. This doping variation can be achieved by electrostatically gating the graphene monolayer with a pair of transparent electrodes, as explained in the next paragraph [74]. Finally, we investigate the performance of the proposed graphene-covered grating under different incident angles of the excitation wave. The calculated absorptance is shown in Fig. 2(d) as a function of the incident angle and frequency. Evidently, the absorptance remains almost perfect and located at the same resonant frequency over a wide range of incident angles, extending up to approximately $\pm 80°$. The currently proposed structure can be realized with existing, well-established fabrication methods, since just a graphene monolayer needs to be deposited on a microscale metallic grating. The gating voltage can be applied only to the suspended part of the graphene sheet, because the remaining graphene is shorted where it touches the metallic grating. Note that the portion of graphene along the grating ridges has no effect on the absorption and nonlinear response of the proposed hybrid structure. Hence, the nonuniform doping profile of graphene will not affect the response of the presented configuration.
This is due to the fact that only the suspended part of the graphene over the trenches strongly interacts with the incident power, as clearly shown by the field and power profiles plotted in the Supplemental Material [73]. Surface waves are not excited along the grating ridges at the perfect absorption frequency, and only localized power is formed at the upper part of the trenches at this frequency point, as depicted in the Supplemental Material [73]. As a result, the fields are strongest along the trenches of the grating [73], where the graphene monolayer is located. Thus, the nonlinear signal will mainly be generated by the strong fields interacting with graphene in these nanoregions. The highest gate voltage needed to achieve the maximum used Fermi level (0.45 eV) is computed to be 123 V, a realistic and relatively low value, paving the way towards a potential experimental verification of the proposed tunable THz absorber [76].
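For orientation, the sketch below shows how a gate-voltage estimate of this kind can be obtained from a simple parallel-plate capacitor picture combined with $E_F = \hbar v_F \sqrt{\pi n_s}$. The spacer thickness and permittivity used here (about 180 nm of SiO2) are our illustrative assumptions, chosen so that the estimate lands near the quoted 123 V; they are not values taken from the paper.

```python
import numpy as np
from scipy.constants import e, hbar, epsilon_0

v_F = 1e6   # graphene Fermi velocity (m/s), standard assumed value

def gate_voltage(E_F_ev, t=180e-9, eps_r=3.9):
    """Parallel-plate estimate: n_s = (E_F / (hbar v_F))^2 / pi carriers
    per m^2, and V = n_s * e * t / (eps0 * eps_r) for an assumed SiO2
    spacer of thickness t (hypothetical, not a paper parameter)."""
    n_s = (E_F_ev * e / (hbar * v_F))**2 / np.pi
    return n_s * e * t / (epsilon_0 * eps_r)

print(f"V(0.45 eV) ~ {gate_voltage(0.45):.0f} V")  # ~124 V, near the 123 V above
print(f"V(0.10 eV) ~ {gate_voltage(0.10):.1f} V")  # ~6 V for the default doping
```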
III. Third Harmonic Generation

In the analysis presented in the previous Section II, we proved that perfect and tunable absorption can be achieved by using a hybrid metallic grating covered by graphene. It was demonstrated that the electric field is greatly enhanced at the absorption resonance due to the strong coupling between the graphene and grating plasmonic responses. The increased light-matter interaction achieved by the proposed hybrid structure at the perfect absorptance frequency point has the potential to dramatically enhance the nonlinear response of graphene at low THz frequencies. Towards this goal, in this section we investigate the THG efficiency of the proposed graphene-covered grating when all the nonlinear properties of the used materials are included in our nonlinear simulations. The fundamental frequency (FF) that excites the nonlinear system is always set to coincide with the perfect absorptance resonant frequency of the proposed hybrid structure. The strong electric fields at the resonance subsequently boost the excited nonlinear effects. Gold can be assumed to exhibit a PEC-like response at low THz frequencies, since it has very high conductivity and the fields minimally penetrate its bulk volume. However, in order to ensure the accuracy of our nonlinear simulations, we include its nonlinear susceptibility, taken from the infrared frequency region [77], in addition to the linear Drude model described in the previous section. Hence, the Kerr nonlinear permittivity of gold is given by $\varepsilon_{Au} = \varepsilon_{Drude} + 3\chi^{(3)}_{Au}|E_{FF}|^2$, where $E_{FF}$ is the enhanced electric field induced at the FF, shown in the right inset of Fig. 1(b). We will demonstrate later that the nonlinear response of the proposed system is dominated by the nonlinear properties of graphene and not by the nonlinear permittivity of the gold grating. The third-order nonlinear surface conductivity of graphene at THz frequencies is calculated by the analytical formula derived in [55], referred to as Eq. (1) in the following, which scales as $\sigma_s^{(3)}(\omega) \propto i\,e^4 v_F^2/(\pi \hbar^2 E_F \omega^3)$, where $v_F$ is the Fermi velocity. An additional electromagnetic wave solver needs to be included in COMSOL and coupled to the FF solver in order to accurately compute the THG radiation; it solves the nonlinear Maxwell's equations at the TH frequency $\omega_{TH} = 3\omega$. The surface current formalism used to model the nonlinear graphene leads to more accurate simulation results combined with less stringent mesh quality requirements. This type of simulation is faster and more accurate compared to the widely used conventional three-dimensional (3D) numerical modeling of graphene, where graphene is treated as a bulk material [59]. The undepleted pump approximation is used in all our nonlinear simulations, since the nonlinear signals are expected to be much weaker than their linear counterparts. The schematic of the THG process is illustrated in Fig. 3(a). In this case, a wave with frequency $3\omega$ is generated when an incident wave at the FF $\omega$ excites the proposed nonlinear graphene-covered grating. The insets in Fig. 3(b) demonstrate the electric field enhancement distributions of this structure at the FF and TH frequencies, respectively. Clearly, enhanced localized electric fields are obtained both at the FF resonance and at the TH frequency. In order to take advantage of the strong field enhancement and boost THG, the frequency of the FF excitation wave is located close to the perfect absorption resonance, which is computed to be around $f = 8.8$ THz in this case. The proposed nonlinear structure is illuminated at normal incidence with a relatively low input intensity equal to 10 kW/cm². This value is substantially lower than the intensities used in previous works based on just the nonlinear properties of graphene without the addition of plasmonic structures [27]. By calculating the integral $\oint_C \mathbf{S}\cdot\mathbf{n}\,dl$ over the entire boundary surrounding the structure in the far field, where $\mathbf{S}$ is the Poynting vector crossing the boundary C and $\mathbf{n}$ is the boundary normal vector, the radiated output power of the TH wave with radiation frequency $3f = 26.4$ THz is computed. Note that we assume a fictitious length of one meter in the y-direction for the proposed structure in all our 2D simulations; thus the output power is always computed in watts. The resulting radiated output power of the TH wave is shown in Fig. 3(b). Interestingly, there is a peak in the TH radiation around the absorption resonance, which coincides with the FF, and its maximum value reaches 1.6 W. In addition, the TH radiation power shown in Fig. 3(b) follows the same trend as the absorptance shown before in Fig. 1(b). Finally, we stress that reflected waves do not exist at the fundamental frequency $f = 8.8$ THz, due to the high absorptance of the proposed hybrid grating, but there are strong reflected THG waves due to the low absorptance at the third-harmonic frequency $3f = 26.4$ THz. Next, we compute the THG conversion efficiency (CE), which is a suitable metric to describe the THG power strength in a more quantitative way. It is defined as the ratio of the radiated THG power outflow to the input power. The computed CE represents a substantial improvement over the previously proposed strong THG obtained by patterned nonlinear graphene metasurfaces at similar THz frequencies [80]. It is even more interesting that this high efficiency can be achieved by the currently proposed, more realistic and easier-to-fabricate configuration. In the proposed structure, a graphene monolayer is used to obtain a strong nonlinear response instead of patterned graphene microribbons, which can suffer from detrimental edge loss effects at their discontinuities and from other fabrication imperfections. The used input intensities have realistic values, and even higher THz radiation intensities on the order of several MW/cm² have been reported in previous works with specialized configurations [81,82].
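As a plausibility check on these numbers, the sketch below converts the quoted peak TH power into a conversion efficiency, under our assumption that the input power equals the intensity times the illuminated unit-cell area (period p times the fictitious 1 m out-of-plane length).

```python
import math

p, Ly = 8e-6, 1.0            # grating period (m), fictitious y-length (m)
I_in = 10e3 * 1e4            # 10 kW/cm^2 converted to W/m^2
P_in = I_in * p * Ly         # input power per unit cell, ~0.8 kW
P_th = 1.6                   # peak radiated TH power quoted above (W)
CE = P_th / P_in
print(f"CE ~ {CE:.1e} -> {10 * math.log10(CE):.0f} dB")   # about -27 dB
# P_th scales cubically with input intensity, so doubling the intensity to
# 20 kW/cm^2 raises CE ~4x, i.e. about -21 dB, near the -22 dB quoted later.
```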
We would like to stress that the presented THG efficiencies are on the order of a few percent, values higher than those of most nonlinear plasmonic devices presented so far [38]. Note that the computed CE as a function of the input intensity has a quadratic shape (Fig. 4), the expected trend for the THG nonlinear conversion efficiency. The input intensity of the FF wave is fixed to the low value of 20 kW/cm² in all the following calculations unless otherwise specified. According to Eq. (1), a stronger nonlinear response is expected for lower Fermi level values, i.e., less doped graphene. The THG CE is also affected by the FF $\omega$, since the nonlinear surface conductivity of graphene given by Eq. (1) is inversely proportional to $\omega^3$. As a consequence, a stronger nonlinear response and higher CE are expected at lower THz frequencies. However, the enhanced electric fields at the FF resonance also affect the THG process. In order to verify how the THG CE is affected by all these different parameters, the CE of the proposed nonlinear structure is computed by sweeping the graphene Fermi level and the fundamental frequency. The contour plot of this result is shown in Fig. 5. Clearly, the THG CE decreases as the Fermi level is increased or in the case of off-resonance operation. The maximum CE value is obtained for a fundamental frequency of approximately 8.8 THz at the lowest doping. This trend is consistent with the absorptance analysis performed in the previous section. It is interesting that low-doped graphene, which is easier to produce, can lead to enhanced nonlinear effects based on the proposed configuration. We discussed in Section II the effect of the proposed hybrid structure's geometry on the linear absorptance spectrum. In this section, we also investigate the effect of the geometry on the THG nonlinear process. Towards this goal, the THG CE is computed by sweeping the FF and the period or height of the grating, as shown in Figs. 6(a) and (b), respectively. The CE (plotted in dB) is tunable and follows the same trend as the linear absorptance enhancement illustrated before in Figs. 2(a) and (b). When the FF is tuned around the resonant frequency, a noticeable enhancement in the THG CE is observed for every period or height of the plasmonic grating, with the results shown in Fig. 6. This is directly related to the enhancement of the electric field at the absorptance resonance peak, which boosts the nonlinear response of the structure and translates into enhanced CE. In all the above simulations, we set the intensity of the illuminating wave to a very low value (20 kW/cm²), for which the THG CE is relatively high and equal to −22 dB at the resonance. Hence, it is possible to also tune the THG nonlinear waves by changing the plasmonic grating's geometrical parameters, without altering the graphene properties. Note that there are no special requirements on the fabrication of the graphene monolayer used in the proposed hybrid structure, since different values of the graphene relaxation time were found not to affect the absorptance and THG conversion efficiency. We provide more details about this feature, which is advantageous for practical implementation, in the Supplemental Material [73]. Finally, the TH output power is computed with and without the graphene monolayer, to prove that the addition of graphene is crucial in order to obtain enhanced nonlinear effects.
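Before turning to that comparison, the short sketch below illustrates the Eq. (1) scaling invoked above, $|\sigma_s^{(3)}| \propto 1/(E_F\,\omega^3)$. It deliberately ignores the change in field enhancement as the resonance detunes with doping, which is why the full simulations of Fig. 5 show an even stronger CE variation.

```python
import math

def sigma3_rel(f_thz, E_F_ev, f0=8.8, E0=0.1):
    """Relative |sigma_s^(3)| from the 1/(E_F * omega^3) scaling of
    Eq. (1), normalized to (8.8 THz, 0.1 eV)."""
    return (E0 / E_F_ev) * (f0 / f_thz)**3

for E_F in (0.1, 0.2, 0.45):
    r = sigma3_rel(8.8, E_F)
    # The TH power goes as |sigma3|^2, i.e. 20*log10(r) in dB
    print(f"E_F = {E_F:.2f} eV: |sigma3| x{r:.2f} (~{20 * math.log10(r):.0f} dB)")
```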
The comparison of the proposed hybrid structure with a similar structure made of a flat metallic substrate (no grating; geometry shown in the inset of Fig. 7), each with and without graphene on top, is plotted in Fig. 7. In order to have a fair comparison, all the results are obtained under a varying incident angle and by using the same fundamental frequency of 8.8 THz. Moreover, the grating height and the substrate thickness are kept identical in this comparison. The TH output power of the proposed graphene-covered hybrid grating [blue line in Fig. 7] has by far the highest value compared to all the other scenarios: (i) the plasmonic grating with the same dimensions but without graphene on top [black line in Fig. 7], (ii) the graphene-covered flat metallic substrate without the grating corrugations [green line in Fig. 7], and (iii) the flat metallic substrate without the graphene monolayer on top [red line in Fig. 7]. Under normal incidence, the TH radiation generated by the proposed graphene-covered plasmonic grating exhibits an impressive twenty-eight orders of magnitude THG enhancement compared to the same plasmonic grating without graphene. In a similar way, the flat metallic substrate produces much higher TH radiation when graphene is placed on top of it, but still with much lower values compared to the plasmonic grating case. Thus, it can be concluded that graphene plays a crucial role in the strong enhancement of the THG process. The same structure without the key nonlinear element of graphene does not produce significant THG radiation. These results directly demonstrate the great potential of graphene in THz nonlinear optics. In addition, the THG process is much stronger in the case of the plasmonic grating configuration compared to the flat metallic substrate without corrugations, since the grating structure enables strong localized electric fields at its plasmonic resonance that strongly couple to graphene, as mentioned before. Note that the THG output power remains relatively insensitive across a broad incident angle range, especially between $[-30°, 30°]$, in agreement with the linear absorptance spectrum.

IV. Four Wave Mixing

Another interesting third-order optical nonlinear process is FWM, which typically requires very high input intensities to be excited. A feasible way to improve the efficiency of the FWM process is to increase the local field intensity at both input waves by using an artificially engineered structure [39,83]. In the following, we demonstrate that the proposed graphene-covered grating can serve as an excellent platform to also boost this nonlinear process at THz frequencies. The strong coupling and interference between the plasmonic resonance of the metallic grating and the THz surface plasmon along the graphene monolayer can lead to strong local field enhancement, as demonstrated before, which is expected to enhance FWM. During the FWM process, two photons at $\omega_1$ and one photon at $\omega_2$ are mixed, and a photon is emitted at $\omega_3 = 2\omega_1 - \omega_2$. In order to take advantage of the strong field enhancement at the resonance and boost the FWM process, the two incident wave frequencies are chosen to be $f_1 = 8.8$ THz and $f_2 = 9.2$ THz. The frequencies of the incident and generated waves are then all very close to the maximum absorptance resonant frequency (8.8 THz). Hence, the electric fields induced by the incident and generated waves are expected to be greatly enhanced at these frequencies.
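The frequency bookkeeping behind this choice is simple; the lines below just evaluate $f_3 = 2f_1 - f_2$ for the stated inputs.

```python
# FWM output frequency for the process described above: f3 = 2*f1 - f2
f1, f2 = 8.8, 9.2                    # pump and probe frequencies (THz)
f3 = 2 * f1 - f2
print(f"f_FWM = 2*{f1} - {f2} = {f3:.1f} THz")   # 8.4 THz, near the resonance
```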
The computed electric field enhancement distribution at the generated FWM frequency $f_3 = 8.4$ THz is demonstrated in the inset of Fig. 8(b). Again, we use COMSOL to investigate the enhanced FWM nonlinear process based on the proposed nonlinear graphene-covered grating. The relevant schematic of this nonlinear process is illustrated in Fig. 8(a). The boundary conditions are the same as in the THG simulations presented before, except that one more electromagnetic wave solver is required to take into account the mixing mechanism introduced by the additional $\omega_2$ input wave. Both incident waves are TM polarized and have incident angles $\theta_1$ and $\theta_2$, respectively. In addition, both input intensities are set to low values equal to 20 kW/cm². We measure the FWM radiated power through the upper boundary of the simulation domain by integrating the power outflow over the surface that surrounds the nonlinear structure, as was done before for the THG process. The computed FWM output results are shown in Fig. 8(b). The incident angle $\theta_2$ is kept constant and equal to zero, while the other incident angle $\theta_1$ varies from 0° to 90°. The maximum output power is found to be 381 W at $\theta_1 = 0°$ and remains close to this peak value within a relatively broad range of incident angles $(-30°, 30°)$. The FWM output power is symmetric with respect to $\theta_1$ and relatively insensitive to this incident angle. The same result is also obtained when the incident angles $\theta_1$ and $\theta_2$ of both input waves are simultaneously swept. The computed contour plot of this study is shown in Fig. 9. In this case, the incident frequencies are the same as in the results presented before in Fig. 8(b). It can be concluded that the FWM efficiency of the proposed graphene-covered grating is very high and relatively insensitive to the excitation angles of both incident waves, even at the low input intensities of 20 kW/cm². To the best of our knowledge, such high nonlinear efficiency has never been reported before in the literature of either theoretical or experimental nonlinear plasmonic devices [38]. This efficiency is even higher than that of the THG process presented before, because one more incident wave contributes its power to the FWM process. The obtained high nonlinear efficiency is one of the major advantages of the proposed hybrid THz nonlinear graphene-plasmonic configuration. The FWM output power generated by the metallic grating without graphene on top is also calculated and plotted in Fig. 8(b) (blue line). Clearly, the FWM output power is dramatically increased by the proposed graphene-covered grating, by approximately twenty-five orders of magnitude compared to a plain grating without graphene. This comparison provides additional proof of the key contribution of graphene to the boosted nonlinear response of the proposed plasmonic system. As indicated before in Section III, the Fermi level $E_F$ has a pronounced effect on the nonlinear response of the proposed structure. It leads to different values of the nonlinear conductivity of graphene, which is given by Eq. (1). This effect is also predominant in the FWM process. The variation in the FWM output power with increasing $E_F$ is computed and shown in Fig. 10(a). The incident frequencies are again fixed to $f_1 = 8.8$ THz and $f_2 = 9.2$ THz in this case. The FWM output power decreases by five orders of magnitude when the Fermi level is increased from 0.1 eV to 0.45 eV. This trend is consistent with the formula for the third-order nonlinear surface conductivity of graphene given by Eq. (1).
Moreover, we explore the effect of the proposed hybrid structure's geometry on the FWM process, similar to our previous THG analysis. The results are shown in Figs. 10(b) and (c). The FWM output power remains relatively high as the period varies from 5 µm to 25 µm and reaches a maximum value of 1150 W for a period of approximately 15 µm. This trend is similar to the THG CE contour plot shown before in Fig. 6(a), where the FF wave was fixed to 8.8 THz. The FWM output power changes dramatically with the grating height, as demonstrated in Fig. 10(c). It reaches a maximum value of 382 W for d = 8 µm and a minimum value of $2\times10^{-11}$ W for d = 16 µm. This trend is again similar to the THG CE contour plot shown in Fig. 6(b). The absorptance resonant frequency is strongly shifted when the grating height d is changed [see Fig. 2(b)], and this leads to a dramatic variation in the FWM output power. Thus, it can be concluded that the output radiated power of the generated FWM wave can be tuned by changing either the graphene Fermi level or the geometry of the proposed hybrid structure. Finally, an alternative robust way to control the output radiation power of the generated FWM wave is to vary the incident power of both excitation waves. The FWM signal is expected to follow a square power-law behavior as a function of the input power P1 of the first incident wave $\omega_1$ and a linear power-law relation as a function of the input power P2 of the second incident wave $\omega_2$ [33]. The effect of the input pump intensities P1 and P2 of the two incident waves, with frequencies $f_1 = 8.8$ THz and $f_2 = 9.2$ THz, respectively, on the output power of the generated FWM wave is illustrated in Fig. 11. Indeed, the generated FWM power is approximately a square function of P1 and a linear function of P2. This result also demonstrates that P1 has a stronger effect on the generated FWM power [39]. Hence, the generated FWM power can be further enhanced by increasing the input power of both incident waves.
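A toy version of this power-law behavior, with an arbitrary proportionality constant (ours, not a value from the paper), makes the asymmetry between the two pumps explicit:

```python
def p_fwm(P1, P2, k=1.0):
    """Undepleted-pump FWM scaling, P_FWM = k * P1^2 * P2; the constant k
    is an arbitrary placeholder, not a fitted value."""
    return k * P1**2 * P2

base = p_fwm(1.0, 1.0)
print(f"doubling P1 -> x{p_fwm(2.0, 1.0) / base:.0f}")   # x4 (square law)
print(f"doubling P2 -> x{p_fwm(1.0, 2.0) / base:.0f}")   # x2 (linear law)
```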
Thermal damage to the gold grating can occur only at very high input intensities, on the order of GW/cm². In addition, graphene has an even higher melting point than gold and is not expected to be affected by thermal effects.

V. Conclusions

In this work, we have analyzed and demonstrated enhanced nonlinear THz effects based on a new hybrid THz planar nonlinear device composed of a graphene monolayer placed over a metallic grating. The presented strong nonlinear response is mainly due to the localization and enhancement of the electric field at the absorption resonance of the proposed structure and to the large nonlinear conductivity of graphene at low THz frequencies. It is demonstrated that the efficiency of both THG and FWM nonlinear processes can be dramatically enhanced, by many orders of magnitude, compared to conventional non-hybrid metallic gratings and substrates. The presented nonlinear efficiencies are computed to be very large, on the order of a few percent. They are higher than those of the majority of state-of-the-art nonlinear planar plasmonic devices. Another major advantage of the proposed hybrid THz nonlinear configuration is that its nonlinear response can be dynamically tuned by using different mechanisms. In particular, it is shown that the output powers of both THG and FWM processes can be tuned by varying the metallic grating dimensions. This is due to the fact that the geometrical variations lead to significant shifts in the absorptance resonant frequency, where the electric field that triggers the nonlinear response is greatly enhanced. Even more importantly, the nonlinear response can be dynamically modulated without altering the geometry of the proposed device, just by varying the graphene doping level. Finally, we demonstrate that the efficiencies of both THG and FWM processes can be further improved by increasing the input intensity of the incident waves. The proposed graphene-covered metallic gratings can be realized with commonly used fabrication techniques. They can be built by a combination of chemical vapor deposition, to efficiently grow the graphene, followed by deposition of the graphene monolayer over a microscale gold grating, which can be constructed by using conventional lithography techniques. The presented optimized grating design usually requires a relatively high aspect ratio (groove depth d over groove width b), with values between 10 and 20, to achieve strong absorption and nonlinear response, but similar metallic gratings have recently been built based on photolithography [84], the deep reactive-ion etching Bosch process [85], or nanoimprint lithography [86]. The proposed hybrid nonlinear graphene-plasmonic devices are envisioned to have several applications relevant to the new field of nonlinear optics based on 2D materials. They can be used in the design of THz frequency generators, all-optical signal processors, and wave mixers. Moreover, they are expected to be valuable components in the design of new nonlinear THz spectroscopy and noninvasive THz subwavelength imaging devices. Finally, the strong field confinement inside the nanoscale trenches and along graphene, achieved by the proposed hybrid grating, can be used to enhance dipole-forbidden transitions at the atomic scale [87,88].

Supplemental Material: Hybrid Graphene-Plasmonic Gratings to Achieve Enhanced Nonlinear Effects at Terahertz Frequencies

I. Effect of graphene's electron relaxation time

In order to investigate the effect of graphene's electron relaxation time on the absorptance and nonlinear response of the proposed structure, we varied the relaxation time from 0.01 ps to 2 ps (corresponding to a range of mobility values) and computed the linear absorptance and the third harmonic generation (THG) conversion efficiency (CE). The results are shown in Fig. S1. The input intensity used to compute the presented THG CE is low and equal to 20 kW/cm². It can be seen in Fig. S1 that the resonant frequency of the proposed structure remains constant, except for a minor redshift in both the absorptance and the THG conversion efficiency at low electron relaxation time values. Furthermore, there is almost no change in their peak values with respect to the relaxation time. These results prove that there are no special requirements on the fabrication of the graphene monolayer used in the proposed hybrid grating structures, which is a major advantage towards their practical implementation.

Figure S1 - (a) Computed absorptance and (b) THG CE contour plots in logarithmic scale at normal incidence, as a function of the fundamental frequency and graphene's electron relaxation time.

II. Numerical method details

We employed the full-wave simulation software COMSOL Multiphysics to solve the linear and nonlinear Maxwell's equations and to investigate the perfect absorption and the enhancement of the nonlinear effects by the proposed graphene-covered plasmonic gratings.
Geometry: The proposed structure is composed of a gold grating covered by a graphene sheet, as shown in Fig. S2(a). This structure can be modeled as a two-dimensional (2D) periodic system, since it is assumed to be infinite (or very large) along the y-direction. Periodic boundary conditions (PBC) are employed on both sides in the x-direction, and port boundaries are placed at the top and bottom in the z-direction to create the incident plane wave. The PBC boundaries are indicated with green lines and the used ports 1 and 2 are shown by light blue lines in Fig. S2(b). The proposed structure is illuminated by a transverse magnetic (TM) polarized wave (the magnetic field is in the y-direction) with an incident angle $\theta$ with respect to the z-direction. Materials: Graphene is modeled as a surface current, due to its planar ultrathin 2D nature, represented by a red line in Fig. S2(b). The graphene surface current contains linear and third-order nonlinear contributions of the fields $E_{FF}$ and $E_{TH}$ induced at the fundamental frequency (FF) and the third-harmonic (TH) frequency, respectively. Note that these fields are monitored along the graphene monolayer. It is obvious from Fig. S4 that all the energy is absorbed inside the corrugations and there are no surface waves travelling along the grating. To further prove this point, we created two new simulation models to clearly demonstrate that all the power is absorbed inside the trenches and there are no surface waves. In the first model, we consider a finite number of corrugations to simulate a more realistic situation, where 16 corrugations are used and a continuous graphene sheet is placed on top of them. Perfect electric conductor (PEC) boundary conditions are employed on both sides in the x-direction, and port boundaries are placed at the top and bottom in the z-direction to create the incident plane wave. The computed absorptance shown in Fig. S5(a) closely matches the single-unit-cell, PBC-surrounded simulation results presented in the main paper. Note that there is a small difference in the absorption resonance frequency (9.1 THz) in this case compared to the infinite structure, due to the minor detuning introduced by the finite structure of Fig. S5(a) and the slightly different grating parameters used. The black arrows in Fig. S5(b) depict the power flow in the finite structure at the absorption resonance frequency and clearly demonstrate that all the power is absorbed inside the corrugation trenches, similar to the results presented in Fig. S4. The power is not reflected back; it is fully absorbed as it travels along the trenches. This further proves the predicted perfect absorption response at the resonance frequency of the proposed hybrid grating and provides clear evidence that there are no additional surface waves travelling along its interface.

Figure S5 - (a) Computed absorptance spectra of the proposed hybrid grating made of a finite number of corrugations. The inset represents the simulated structure. (b) The y-component of the real part of the electric field enhancement distribution, normalized to its maximum value, at the absorption resonance frequency, together with the power flow (depicted by arrows) inside the structure. The results are obtained for grating parameters p = 8 µm, b = 0.6 µm, d = 8 µm. The graphene Fermi level $E_F$ is equal to 0.1 eV.

In the second model, we employ a different way to calculate the total absorbed energy, by integrating the total power dissipation density over the volume of the structure.
This type of absorbed-energy calculation is usually performed in scattering problems rather than in reflection/transmission problems like the currently proposed hybrid grating configuration. The total power dissipation density is calculated by using the formula $\frac{1}{2}\mathrm{Re}\{\mathbf{E}\cdot\mathbf{J}^*\}$, which can be obtained from the COMSOL predefined variable Qh. Thus, the total absorbed power of the finite hybrid grating is computed as $P_{abs} = \frac{1}{2}\int_S \mathrm{Re}\{\mathbf{J}\cdot\mathbf{E}^*\}\,dS$ [S22], where $\mathbf{J}$ is the current density and S represents the surface of the proposed structure, since our model is two-dimensional. By integrating the total power dissipation density over the blue region in Fig. S6(a), which includes both the lossy gold and graphene materials, we obtain the total dissipated power of the hybrid grating. In the finite grating simulation presented in Fig. S6(a), the background electric field formalism is employed to create the normally incident plane wave, and perfectly matched layers (PML) are used to fully absorb all outgoing waves scattered by the grating. The computed total dissipated energy of the proposed structure is shown in Fig. S6(b), where it is obvious that the trend and peak position of this curve are similar to the previous absorptance result shown in Fig. S5(a) for an identical structure, there obtained from the reflectance and transmittance values. The absorbed power reaches a maximum at the same resonance frequency and then drops to very low values, as expected and as also predicted in the main paper. Thus, we can conclude that the total dissipation energy result computed in Fig. S6 matches almost perfectly the absorptance results computed by the models in Fig. S5 and the results (Fig. 3) presented in the main paper.

Figure S6 - (a) Scattering and absorption model of the finite hybrid graphene-covered grating. (b) The total dissipation energy of the hybrid grating, calculated over a broad frequency range. The computed absorption peak is identical to the absorptance peak shown in Fig. S5(a) and in the main paper. The results are obtained for grating parameters p = 8 µm, b = 0.6 µm, d = 8 µm. The graphene Fermi level $E_F$ is equal to 0.1 eV.
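For readers implementing this check outside COMSOL, the sketch below shows the discrete form of the dissipation integral on sampled field data; the grid, conductivity, and field values are synthetic placeholders, not simulation output.

```python
import numpy as np

def absorbed_power(J, E, dA):
    """P_abs = 1/2 * sum Re{J * conj(E)} * dA, a discrete version of the
    dissipation integral quoted above (scalar field components assumed)."""
    return 0.5 * np.sum(np.real(J * np.conj(E))) * dA

# Toy check on synthetic data: an ohmic region where J = sigma * E, so the
# integrand reduces to sigma * |E|^2 / 2 per unit volume.
sigma = 100.0                                  # conductivity (S/m)
E = np.full((100, 100), 1e3 + 0j)              # field samples (V/m)
J = sigma * E                                  # current density (A/m^2)
dA = (1e-6 / 100)**2                           # grid-cell area (m^2)
print(f"P_abs = {absorbed_power(J, E, dA):.2e} W per meter of length")
```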
High fat meals increase postprandial fat oxidation rate but not postprandial lipemia

Background: This study investigated the effects of ingesting meals with the same calorie intake but distinct nutritional contents after exercise on postprandial lipemia the next day.

Methods: Eight healthy male participants completed two 2-day trials in random order. On day 1, the participants underwent five 12-min bouts of cycling exercise, each followed by a bout of higher-intensity exercise (4 min) and then a bout of lower-intensity cycling (2 min). The total exercise time was 90 min. After the exercise, the participants ingested three high-fat or low-fat meals. On day 2, the participants rested in the laboratory and ingested a high-fat meal, and their postprandial responses were observed.

Results: Postprandial triglyceride concentrations in the high-fat and low-fat diet trials did not differ significantly. Total TG AUC was not significantly different between the HF and LF trials (HF: 6.63 ± 3.2; LF: 7.20 ± 3.4 mmol/L × 4 h; p = 0.586). However, the total AUC of the postprandial fat oxidation rate (HF: 0.58 ± 0.1; LF: 0.39 ± 0.2 g/min × 4 h; p = 0.045), plasma glucose, and insulin concentrations were significantly higher in the high-fat trial than in the low-fat trial.

Conclusions: This study revealed that meals with distinct nutritional contents ingested after a 90-min exercise session increased the postprandial fat oxidation rate but did not influence postprandial lipemia after a high-fat meal the next day.

Introduction

Elevated postprandial triglyceride (TG) concentrations have been suggested to significantly increase the risk of metabolic disease [1]. A single session of exercise can decrease postprandial TG concentrations the next day [2,3]. Numerous studies have found that energy expenditure during exercise may play a vital role in the postprandial TG response [4,5]. Exercise decreases postprandial lipemia the next day by enhancing lipoprotein lipase (LPL) activity [6], increasing the postprandial fat oxidation rate [7], and improving insulin sensitivity after exercise [8]. However, the exact mechanism underlying this phenomenon remains unknown. Meals with varying nutritional contents may influence the postprandial TG concentration. Under nonexercise conditions, high-carbohydrate diets have been suggested to decrease hepatic fatty acid oxidation and increase plasma TG concentration [9]. After exercise, high-carbohydrate diets induce a higher postprandial TG concentration compared with low-carbohydrate diets [10]. This may be because high-carbohydrate diets decrease postprandial fat oxidation [10]. However, high-fat (HF) postexercise meals have also been found to increase postprandial fat oxidation [11]. The relationship between a diet's nutritional content and postprandial fat oxidation remains unclear. Postprandial fat oxidation may play a major role in postprandial lipemia. High-intensity interval exercise may increase postprandial fat oxidation and reduce postprandial TG concentration the next day [12,13]. In addition, HF postexercise meals have been shown to increase postprandial fat oxidation [11]. The effect of a higher postprandial fat oxidation rate induced by HF meals after exercise on postprandial TG concentration remains unclear.
The objective of this study was to investigate the effects of ingesting HF or low-fat (LF) meals with the same calorie intake after exercise on the postprandial TG concentration and postprandial fat oxidation during an oral fat tolerance test (OFTT) the next day.

Participants

Eight healthy male participants were recruited (age 22 ± 1.3 yr, height 170.1 ± 4.7 cm, weight 75.4 ± 17.5 kg; Table 1). None of the participants had received professional exercise training, but all had the habit of exercising two to three times a week. The participants did not present any metabolic disorders, lipemia, or other problems rendering them unfit to engage in exercise. A questionnaire was used to screen for physical activity level and any potential health issues before testing. After the experiment was fully explained to them, the participants signed an informed consent form. This study was approved by the Institutional Review Board of Changhua Christian Hospital (CCH IRB No 151221) in Taiwan.

Design

A crossover design was adopted in this study. The experiment involved two trials, namely an LF diet trial and an HF diet trial. Participants first underwent a pretest to measure their VO2max and to calculate the intensity of the interval training used in the formal experiment. The pretest and formal experiment took place at least 7 days apart. The participants exercised at 66% VO2max for 90 min in the morning on the first day of the formal experiment. Interval training was incorporated five times during the session, and at the end of the exercise, three LF or HF meals with equal calorie intakes were administered. The experimental trials were completed in random order, with each test conducted at least 7 days apart from the others to avoid carryover effects.

Protocol

Pretest: The pretest involved using stationary bicycles to measure VO2max and assess exercise intensity. Participants arrived at the laboratory in the afternoon and were asked to wear a heart rate monitor wristband (Polar Electro, Kempele, Finland) and a precalibrated breath-by-breath gas analyzer (Cortex Metamax 3B, Leipzig, Germany), which were used to collect relevant measurements during the exercise. First, gas samples were collected for 5 min in the resting (sitting) state to determine resting energy expenditure. Subsequently, a VO2max test was conducted at a fixed cadence and with incrementally increasing pedal power (in W) on a cycle ergometer. Specifically, cadence was maintained at 70 to 80 rpm starting at an intensity of 75 W, and the power output was incremented by 25 W every 3 min until the participant was exhausted. During the test, the oxygen uptake, partial pressure of oxygen (PO2), partial pressure of carbon dioxide (PCO2), energy expenditure, and heart rate were recorded at each stage to calculate the energy expended at 66% VO2max and the usage of carbohydrate and fat. The fat and carbohydrate oxidation rates were calculated using the following formulas [14]: Fat oxidation (g/min) = 1.695 × VO2 − 1.701 × VCO2; Carbohydrate oxidation (g/min) = 4.585 × VCO2 − 3.226 × VO2.
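As a sketch of how these indirect-calorimetry formulas can be applied, the Python lines below evaluate them for illustrative gas-exchange values (not study data):

```python
def fat_oxidation(vo2, vco2):
    """Fat oxidation rate (g/min); VO2 and VCO2 in L/min (Methods formula)."""
    return 1.695 * vo2 - 1.701 * vco2

def cho_oxidation(vo2, vco2):
    """Carbohydrate oxidation rate (g/min); VO2 and VCO2 in L/min."""
    return 4.585 * vco2 - 3.226 * vo2

# Illustrative resting values: VO2 = 0.30, VCO2 = 0.25 L/min
print(f"fat: {fat_oxidation(0.30, 0.25):.3f} g/min")   # ~0.083 g/min
print(f"cho: {cho_oxidation(0.30, 0.25):.3f} g/min")   # ~0.178 g/min
```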
Formal experiment: The experiment was conducted over 2 days. Four days before the first formal experiment, a nutritionist individually provided all of the participants with diet-related advice and asked them to avoid ingesting excessive amounts of fat and calories as well as alcohol and caffeine. To facilitate dietary control, the participants were asked to record the meals they ingested during the 3 days preceding the formal experiment and to ingest the same meals during the 3 days before the subsequent formal experiment. All participants were also asked to avoid excessive physical activity and heavy training for 3 days before the formal experiment. Participants arrived at the laboratory between 08:00 and 09:00 in the morning on the first day of the formal experiment. They rested for 10 min before putting on the Polar watch and gas analyzer used to determine the actual exercise intensity. First, participants rode a cycle ergometer for 12 min at 66% VO2max, after which the intensity was increased to 85% VO2max for 4 min and then decreased to 50% VO2max for 2 min. Completing these three intensities was considered one cycle, and there were five cycles in total. During the exercise, 200 mL of drinking water was provided to the participants every 20 min to prevent dehydration. At the end of the exercise, an LF or HF meal was administered to the participants at 09:45-10:45, at 12:30, and at 19:00. All meals were prepared by a nutritionist. In the HF trial, the meals had a total calorie intake of 2437.7 kcal and included breakfast (full-cream milk, peanut butter toast, and 8 g of nuts), lunch (bubble tea, creamy bacon pasta, and kiwi), and dinner (110 g of KFC Chizza and a KFC Zinger). The amounts of fat, protein, and carbohydrate in the three meals were 44% (119.7 g), 12% (71.9 g), and 44% (268.2 g) of the total calorie intake, respectively. In the LF trial, the meals had a total calorie intake of 2448.2 kcal and included breakfast (40 g of whey protein, kiwi, banana, Laba congee, and lemon tea), lunch (40 g of whey protein, 200 g of white rice, 150 g of sweet mung bean soup, and kiwi), and dinner (40 g of whey protein, boiled vegetables, 200 g of white rice, a tea egg, black tea, and banana). The amounts of fat, protein, and carbohydrate in the three meals were 6% (15 g), 20% (126.3 g), and 74% (452 g) of the total calorie intake, respectively. The macronutrient compositions of the LF and HF trials are listed in Table 2.
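As a quick arithmetic cross-check of the HF-trial figures just quoted, the snippet below recomputes the energy shares from the gram amounts using standard Atwater factors (9 kcal/g for fat, 4 kcal/g for protein and carbohydrate):

```python
grams = {"fat": 119.7, "protein": 71.9, "carbohydrate": 268.2}
kcal_per_g = {"fat": 9, "protein": 4, "carbohydrate": 4}
total = sum(grams[m] * kcal_per_g[m] for m in grams)
for m in grams:
    print(f"{m}: {grams[m] * kcal_per_g[m] / total:.0%}")
print(f"total: {total:.1f} kcal")   # 2437.7 kcal, matching the stated intake
```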
The participants returned to the laboratory at approximately 08:00 on the second day of the formal experiment to undergo an OFTT in the fasting state. After 10 min of rest, the participants' fasting blood samples were collected through venipuncture. Subsequently, the participants were given a fixed HF meal and rested in the laboratory for 4 h. Further blood samples were collected at 0.5, 1, 2, 3, and 4 h after the end of the meal. Postprandial gas samples were collected with a precalibrated breath-by-breath gas analyzer (Cortex Metamax 3B, Leipzig, Germany) in the resting sitting position for 5 min at each time point, to calculate the participants' postprandial fat oxidation rate.

Blood sample collection

In the experiment, 10-mL blood samples were collected using an intravenous catheter (Venflon 20G cannula, Sweden) and a three-way connector (Connecta Ltd., Sweden). Samples were collected 30 min before and immediately after, as well as 1, 2, 3, and 4 h after, a meal. The blood samples were collected into Vacutainers containing ethylenediaminetetraacetic acid (EDTA). To prevent the blood from clotting in the catheter, we flushed the catheter with 10 mL of isotonic saline. The Vacutainers were centrifuged for 20 min at 2000×g at 4°C. Blood plasma was extracted and stored at −80°C for subsequent biochemical analysis.

Oral fat tolerance test (OFTT)

All the meals provided for the OFTT were designed by a nutritionist and have been used in a previous study [7,15]. The meals were composed of toast, butter, cheese, muesli, and fresh cream, and provided, per kg of body weight, 1.2 g of fat, 1.1 g of carbohydrate, 0.33 g of protein, and 16.5 kcal of energy. The nutritional contents of the meals were obtained from the packaging labels. During the experiment, the participants were required to ingest their OFTT meals within 15 min.

Statistical analysis

All data are presented as mean ± standard deviation. The t-test was used to test the difference in the area under the curve (AUC) of each dependent variable between the two trials. Two-way ANOVA with repeated measures was performed to analyze the differences in blood biochemical values between the trials and across time points. Statistically significant differences were followed up with post hoc comparisons using the Bonferroni method. Significance was defined as α = 0.05. The G*Power 3 software program was used to calculate the required sample size, with an α value of 5% and a power of 0.8; the resulting sample size was eight participants.
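For concreteness, a total AUC of the kind reported below can be computed with the trapezoidal rule; the time course here is invented for illustration and is not study data:

```python
import numpy as np

def total_auc(times_h, conc):
    """Total area under the curve by the trapezoidal rule, the usual summary
    for a 4-h postprandial response (e.g., TG in mmol/L)."""
    return np.trapz(conc, times_h)

t = np.array([0, 0.5, 1, 2, 3, 4])                 # sampling times (h)
tg = np.array([1.0, 1.3, 1.7, 2.1, 1.9, 1.5])      # illustrative TG (mmol/L)
print(f"TG total AUC = {total_auc(t, tg):.2f} mmol/L x 4 h")  # same order as reported
```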
Results

There were no significant differences between the HF and LF trials in average heart rate (p = 0.414) or energy expenditure (p = 0.527) during exercise. The fasting plasma biochemistry concentrations did not differ between trials on the morning of Day 2 (Table 1).

TG concentrations, fat oxidation, and carbohydrate oxidation

There were no differences between HF and LF in TG concentrations (trial × time, p = 0.219; trial, p = 0.501; time, p < 0.001; Fig. 1a) or TG AUC (p = 0.586; Fig. 1b); the postprandial fat oxidation rate time course is shown in Fig. 1c. Figure 1d demonstrates that the fat oxidation rate AUC in the HF trial was significantly higher than that in the LF trial (p = 0.045). There were no differences between HF and LF in the carbohydrate oxidation rate (trial × time, p = 0.479; trial, p = 0.387; time, p = 0.239; Fig. 1e) or in the AUC of the carbohydrate oxidation rate (p = 0.216; Fig. 1f).

GLU and insulin

Plasma GLU concentrations exhibited no significant trial-by-time interaction (trial × time, p = 0.822; trial, p = 0.021; time, p = 0.321; Fig. 2a). Figure 2b indicates that the plasma GLU AUC was higher in the HF trial than in the LF trial (p = 0.007). There were no differences between HF and LF in insulin concentrations (trial × time, p = 0.503; trial, p = 0.284; time, p < 0.001; Fig. 2c), but the plasma insulin AUC was higher in the HF trial than in the LF trial (p = 0.015; Fig. 2d).

Discussion

Previous work has indicated that, among exercise interventions with different intensities but the same energy expenditure, high-intensity interval exercise is more capable of reducing postprandial TG concentrations [12,13]. The present study revealed that meals with different nutritional contents ingested after a 90-min exercise session significantly raised the fat oxidation rate after an HF meal the next day, but did not affect the plasma TG concentration. In addition, the results demonstrated that ingesting HF meals after exercise increased the postprandial glucose and insulin concentrations. Thus, when the same amount of energy was expended during exercise and the same calories were ingested on the previous day, meals with dissimilar fat contents did not influence the postprandial TG concentration the next day. In a previous study, a low-carbohydrate diet increased postprandial fat oxidation and decreased the postprandial TG concentration compared with a high-carbohydrate diet [10]. However, the fat content in the low-carbohydrate diet trial of that study was 72.2%, and eating meals with such a high fat content in daily life is difficult. Therefore, we decreased the fat content to 44% in the meals of the HF trial and successfully increased postprandial fat oxidation compared with the LF trial, but there were no differences in the postprandial TG concentration between the HF and LF trials. The higher insulin concentration observed in the HF trial may play a role in the absence of change in the postprandial TG concentration, as a higher postprandial insulin concentration may decrease LPL activity and influence the postprandial TG response. Previous findings have suggested that ingesting HF meals results in reduced insulin sensitivity [16][17][18]. Bachmann et al. (2001) fed 12 participants HF or LF meals for 3 days in a row and assessed their insulin sensitivity. The results indicated that insulin sensitivity fell to 83.3 ± 5.6% of baseline after the HF diet, whereas insulin sensitivity after the LF diet did not change significantly [19]. Although we did not measure insulin sensitivity in this study, our results demonstrated that the GLU and insulin concentrations of the HF trial were considerably higher than those of the LF trial, indicating that the HF trial was associated with lower insulin sensitivity. Based on other data from the present study, the postprandial NEFA and GLY concentrations were also higher in the HF trial than in the LF trial, which may likewise reflect a reduction in insulin sensitivity in the HF trial. A higher insulin concentration and lower insulin sensitivity have been suggested to decrease LPL activity and the clearance of TG from the blood circulation [20]. Therefore, a higher postprandial insulin response may offset the positive effect of higher postprandial fat oxidation on the postprandial TG concentration. This study also revealed that the fat oxidation rate significantly increased in the HF trial. In previous studies on the effects of exercise interventions on postprandial lipemia, high-intensity interval training performed the day before an OFTT significantly increased the postprandial fat oxidation rate after an HF meal the next day, and the postprandial TG concentration was also considerably reduced [7]. These findings indicate that an increase in the postprandial fat oxidation rate may influence the postprandial TG concentration. In addition to high-intensity interval training, ingesting HF meals has similarly been suggested to elevate the postprandial fat oxidation rate [10,11]. However, no studies have investigated whether an increase in fat oxidation rate due to HF meals influences TG concentrations after an HF meal. Although this study revealed an increase in the postprandial fat oxidation rate, the postprandial TG concentration was not affected. The primary limitation of this study is that a control trial (no-exercise condition) was not used, which makes it difficult to determine whether the postprandial TG concentration was affected by the exercise itself. However, the objective of this study was to investigate the effects of ingesting HF or LF meals on the postprandial TG concentration and postprandial fat oxidation during an OFTT the next day. Therefore, a control trial did not appear to be critical for this study. The second limitation of this study was the difference in protein content between the trials.
Acute ingestion of additional protein with an HF meal may reduce the postprandial TG concentration [21,22]. However, no study has investigated the long-term effect of protein ingestion or the effect of protein consumed on the day before an HF meal test. We therefore believe that the higher protein content on the day before the HF meal did not influence the results of this study.

Conclusion

This study revealed that meals with different fat contents consumed after a 90-min exercise bout did not influence postprandial lipemia during an OFTT the next day. Compared with LF meals, HF meals resulted in a higher fat oxidation rate, GLU level, and insulin concentration during the OFTT. Thus, HF diets can cause a reduction in insulin sensitivity. Future studies should consider using the OGTT method to investigate the effects of various post-exercise meals on insulin sensitivity.
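As a concrete, hedged illustration of the quantities used above — the per-kilogram OFTT meal composition from the Methods and the trapezoidal AUC calculation — the following Python sketch (not part of the original study; the TG time points are hypothetical example values) reproduces the arithmetic. Note that 9 × 1.2 + 4 × 1.1 + 4 × 0.33 ≈ 16.5 kcal per kg of body weight, consistent with the stated meal energy density.

import numpy as np

# Meal composition per kilogram of body weight (from the Methods above)
FAT_G_PER_KG, CHO_G_PER_KG, PRO_G_PER_KG = 1.2, 1.1, 0.33

def meal_for(body_weight_kg):
    """Return the absolute OFTT meal composition and energy for a given body weight."""
    fat = FAT_G_PER_KG * body_weight_kg
    cho = CHO_G_PER_KG * body_weight_kg
    pro = PRO_G_PER_KG * body_weight_kg
    # Atwater factors: 9 kcal/g for fat, 4 kcal/g for carbohydrate and protein
    kcal = 9.0 * fat + 4.0 * cho + 4.0 * pro  # 16.52 kcal per kg, matching ~16.5 kcal/kg
    return {"fat_g": fat, "cho_g": cho, "protein_g": pro, "energy_kcal": kcal}

def auc(times_h, concentrations):
    """Area under the concentration-time curve by the trapezoidal rule."""
    return np.trapz(concentrations, times_h)

# Example: a 70-kg participant and a hypothetical postprandial TG time course
print(meal_for(70.0))
t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])   # hours after the OFTT meal (hypothetical)
tg = np.array([1.0, 1.6, 2.1, 1.8, 1.4])  # mmol/L (hypothetical values)
print(f"TG AUC = {auc(t, tg):.2f} mmol*h/L")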
Interplay Between the Unfolded Protein Response and Immune Function in the Development of Neurodegenerative Diseases

Emerging evidence suggests that the immune and nervous systems are in close interaction in health and disease conditions. Protein aggregation and proteostasis dysfunction at the level of the endoplasmic reticulum (ER) are central contributors to neurodegenerative diseases. The unfolded protein response (UPR) is the main transduction pathway that maintains protein homeostasis under conditions of protein misfolding and aggregation. Brain inflammation often coexists with the degenerative process in different brain diseases. Interestingly, besides its well-described role in neuronal fitness, the UPR has also emerged as a key regulator of the ontogeny and function of several immune cell types. Nevertheless, the contribution of the UPR to brain inflammation initiated by immune cells remains largely unexplored. In this review, we provide a perspective on the potential role of ER stress signaling in brain-associated immune cells and the possible implications for neuroinflammation and the development of neurodegenerative diseases.

INTRODUCTION

The Unfolded Protein Response (UPR)

Proteostasis encompasses the dynamic interrelation of processes governing the generation and localization of functional proteins (1). Physiological and pathological factors can impair the balance between protein load and protein processing, resulting in the accumulation of improperly folded proteins (2,3). Abnormal protein aggregation is a key feature of several neurodegenerative diseases, including Alzheimer's disease (AD), Parkinson's disease (PD), amyotrophic lateral sclerosis (ALS), Huntington's disease (HD) and prion-related disorders among others, collectively classified as protein misfolding diseases (PMDs) (4,5). Protein misfolding is sensed by dedicated stress-response pathways that include the cytoplasmic heat shock response (HSR) and the unfolded protein responses originating in the mitochondria and in the endoplasmic reticulum (ER) (3). Activation of these intracellular mechanisms by the presence of misfolded proteins ameliorates the protein folding load and resolves proteotoxic stress (1,3). In this context, the ER is a central node of the proteostasis network, controlling the folding, processing and trafficking of up to a third of the protein load in the cell (6). The UPR originating in the ER (hereafter referred to as the "UPR") is a main intracellular mechanism responsible for safeguarding the fidelity of the cellular proteome and, for this reason, it is the main focus of the current review (6,7).

The UPR is an adaptive reaction controlled by three ER-located signal transducers: inositol-requiring enzyme 1 (IRE1) alpha and beta, protein kinase R-like ER kinase (PERK) and activating transcription factor 6 (ATF6) alpha and beta (6) (Figure 1). Upon activation, these signal transducers drive gene expression programs through specific downstream transcription factors, restoring proteostasis and increasing ER and Golgi biogenesis (6,8). IRE1α cleaves the mRNA encoding the X-box binding protein (XBP1), removing a 26-nucleotide intron; following ligation by RTCB (RNA 2′,3′-cyclic phosphate and 5′-OH ligase), this changes the coding reading frame and prompts the translation of a protein with transcription factor activity termed XBP1s (spliced XBP1) (7). XBP1s controls the expression of genes involved in ER-associated degradation (ERAD), lipid biosynthesis, folding and quality control (9,10).
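As a toy, hedged illustration of this splicing arithmetic (the sequences below are invented and are not the real XBP1 mRNA), the following Python snippet shows why excising 26 nucleotides — not a multiple of three — forces the downstream sequence to be decoded in a new reading frame, which is how XBP1s acquires its new C-terminal coding sequence.

def codons(seq):
    """Split a sequence into consecutive triplets, discarding any trailing bases."""
    return [seq[i:i + 3] for i in range(0, len(seq) - len(seq) % 3, 3)]

exon1 = "ATGGCTGAC"        # 9 nt, hypothetical upstream sequence
intron = "GT" * 13         # a 26-nt "intron": 26 % 3 == 2, so not frame-neutral
exon2 = "CCGGATGACTTTAAA"  # hypothetical downstream sequence

unspliced = exon1 + intron + exon2
spliced = exon1 + exon2    # IRE1α-like unconventional excision

# After splicing, the downstream region is read in a different frame
# than in the unspliced message, changing every downstream codon.
print(codons(unspliced))
print(codons(spliced))     # ['ATG', 'GCT', 'GAC', 'CCG', 'GAT', 'GAC', 'TTT', 'AAA']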
The IRE1α RNase also directly degrades diverse mRNAs and microRNAs through a process termed "Regulated IRE1-Dependent Decay" (RIDD) (11), originally proposed to contribute to alleviating the detrimental effects of ER stress by reducing the protein folding load (12), in addition to regulating inflammation and apoptosis (13). Activation of PERK mediates protein translation shutdown via phosphorylation of eukaryotic initiation factor 2α (P-eIF2α), which also favors the selective translation of certain mRNAs encoding proteins involved in cell survival, ER homeostasis and anti-oxidant responses, such as ATF4 and nuclear erythroid-related factor 2 (NRF2) (6,14). ATF6 translocates to the Golgi apparatus, where it is cleaved by site-1 and site-2 proteases, releasing a transcription factor that directs the expression of genes encoding ERAD components, ER chaperones and molecules involved in lipid biogenesis (15,16). XBP1s and ATF6 can also heterodimerize to control selective gene expression patterns (9). Moreover, the activity (signaling amplitude and kinetics) of the three UPR stress sensors is controlled by several cofactors through the assembly of distinct platforms termed the UPRosome (17). Binding of adapter proteins to the IRE1α UPRosome also mediates the crosstalk with other stress pathways, including MAP kinases and NF-κB (6). Thus, the UPR integrates information regarding the intensity and duration of the stress stimuli toward cell fate control in cells suffering from ER stress.

UPR in Brain Homeostasis and Protein Misfolding Diseases

ER stress signaling has a physiological as well as a pathological role in brain function and development (18)(19)(20). In neurodegeneration, the UPR influences several aspects including cell survival, synaptic plasticity, axonal regeneration, protein aggregation and control of the secretory pathway (21)(22)(23). By mediating the synthesis and secretion of brain-derived neurotrophic factor (BDNF), XBP1s regulates neuronal plasticity at a structural, molecular and behavioral level (18,(24)(25)(26)(27). Moreover, postmortem tissue analyses revealed that ER stress markers often co-localize with cells containing protein aggregates in the brains of patients affected by PMDs (4,5,22,28). In AD, the expression of Grp78/BiP, PDI and HRD1 is increased in the hippocampus and temporal cortex, and the phosphorylated forms of PERK, IRE1α and eIF2α are found in AD neurons and in the substantia nigra of PD patients (22,29,30). Phosphorylated IRE1α levels directly correlate with the degree of histopathological changes, where most cells showing neurofibrillary tangles exhibit signs of ER stress (31). Furthermore, ER stress signs are also observed in different brain areas in PD patients, a phenomenon also observed in incidental cases of subjects who died without PD symptoms but presented α-synuclein inclusions in the brain (32). Moreover, components of all UPR signaling branches are overexpressed in spinal cord samples of patients with familial and sporadic forms of ALS (33), as well as in the striatum, parietal cortex and caudate putamen of HD and prion disease patients (22,(34)(35)(36)(37)(38)(39). In support of a dual role of the UPR in controlling cell fate in neurodegenerative diseases, genetic disruption and pharmacological intervention modulating ER stress signaling revealed that, depending on the disease type and the UPR component targeted, distinct and even opposite effects are observed [reviewed in (21,40)].
FIGURE 1 | Signaling pathways of the unfolded protein response. Noxious stimuli may induce endoplasmic reticulum (ER) stress and trigger an adaptive response known as the unfolded protein response (UPR), which is controlled by three main ER-resident sensors: IRE1α, PERK and ATF6. Upon ER stress, IRE1α autophosphorylates, leading to the activation of its RNase domain and the processing of the mRNA encoding XBP1s, a transcription factor that upregulates genes involved in protein folding and quality control, in addition to regulating ER/Golgi biogenesis and ER-associated degradation (ERAD). Additionally, the IRE1α RNase also degrades a subset of specific RNAs and microRNAs, a process termed Regulated IRE1α-Dependent Decay (RIDD). The second ER sensor, PERK, phosphorylates the eukaryotic initiation factor eIF2α, decreasing protein synthesis and the overload of misfolded proteins at the ER. PERK activation also leads to the selective translation of ATF4, a transcription factor that promotes the expression of genes related to amino acid metabolism, the anti-oxidant response, autophagy and apoptosis. The third UPR sensor, ATF6, is a type II ER transmembrane protein that encodes a bZIP transcription factor in its cytosolic domain. Following ER stress, ATF6 translocates to the Golgi apparatus, where it is processed, releasing a transcription factor that directs the expression of genes encoding ER chaperones, ERAD components and molecules involved in lipid biogenesis.

Conditional deletion of XBP1 in the central nervous system (CNS) provides protective effects through upregulation of autophagy, improving motor performance in ALS, PD and Huntington's disease models (35,37,41), whereas XBP1 deficiency does not affect prion pathogenesis in vivo (42). Ablation of IRE1α signaling in neurons decreases astrogliosis and amyloid β accumulation in an animal model of AD, correlating with improved neuronal function (31). Conversely, therapeutic gene delivery of active UPR components or ER chaperones to specific brain areas has shown outstanding effects in different animal models of PMDs (43). Different studies have shown that ectopic delivery of XBP1s into the hippocampus restored synaptic plasticity in an AD model (27), promoted axonal regeneration (44), reduced mutant huntingtin aggregation (45) and protected dopaminergic neurons against PD-inducing neurotoxins (41,46). Targeting the PERK pathway also yields contradictory results. PERK signaling supports oligodendrocyte survival in animal models of multiple sclerosis (MS) (47), and enhancement of eIF2α phosphorylation is protective in ALS and other models (32,48), whilst ATF4 deficiency has a detrimental effect in spinal cord injury models, diminishing locomotor recovery following lesion and also impacting oligodendrocyte survival (49). Conditional deletion of PERK in the brain, however, improved cognition in an AD model, correlating with decreased amyloidogenesis and restoration of the normal expression of plasticity-related proteins (50,51). Similarly, genetic targeting of CHOP has neuroprotective effects in a PD model, and ATF4 ablation protects against ALS (52,53). Consistent with this, sustained PERK signaling has been shown to enhance neurodegeneration due to acute repression of synaptic proteins, resulting in abnormal neuronal function, as demonstrated through PERK inhibitors in prion disease (54), frontotemporal dementia (48) and PD models (32).
ATF6, on the other hand, protected dopaminergic neurons in another PD model by upregulating ER chaperones and ERAD components (55,56). Overall, UPR mediators have a pivotal role in the progression of various PMDs, nurturing the hypothesis that UPR components could be used as therapeutic targets in neurodegeneration (21,22,43).

UPR in Neuroinflammation

Immune surveillance is an active process in the brain. The mammalian CNS harbors several subtypes of leukocytes, which display physiological roles related to tissue homeostasis and regulation of the inflammatory response (57,58). However, if unrestrained, inflammation can have detrimental effects in the CNS, contributing to the type of tissue malfunction that precedes pathological processes (59). During neuroinflammation, the immune response in the CNS is drastically altered, and it is typified by activation of resident microglia and invasion of peripheral immune cells into the parenchyma, including granulocytes, monocytes and, in pathologies like multiple sclerosis, lymphocytes (60)(61)(62)(63). Interestingly, the UPR has been shown to regulate inflammation in peripheral tissues, emerging as an interesting candidate for targeting CNS-associated inflammation, a field that remains largely unexplored. Thus, in addition to the well-described role of the UPR in neuronal fitness, it is also plausible that UPR activation in CNS-associated immune cells contributes to modulating PMD development.

One hallmark of neuroinflammation is the presence of tumor necrosis factor (TNF), interleukin (IL)-1β, and IL-6 in the brain, cerebrospinal fluid (CSF) and serum of patients with AD, PD and HD (63)(64)(65). The production of pro-inflammatory cytokines across tissues depends on the activation of innate immune sensors (known as pattern recognition receptors, PRRs) specialized in the recognition of microbes and stress signals (63). In the brain, PRRs can promote pro-inflammatory cytokine production upon recognition of "neurodegeneration-associated molecular patterns" (NAMPs), which consist of CNS-specific danger signals such as extracellular protein aggregates, molecules exposed by dying neurons, lipid degradation byproducts and myelin debris, among others (66). The PRRs most relevantly associated with the development of PMDs are TLRs (Toll-like receptors) and NLR (nucleotide-binding domain, leucine-rich repeat containing) inflammasomes (63). These receptors are broadly expressed in CNS myeloid cells including microglia, macrophages and infiltrating cells such as monocytes and dendritic cells (DCs) (63,67). Interestingly, PRR signaling and the UPR converge on several levels to amplify inflammatory responses via activation of NF-κB, IRF-3, JNK and JAK/STAT modules (68)(69)(70)(71). Signaling via TLR2 and TLR4 induces ER stress in peripheral macrophages and activates IRE1α and XBP1s, which in turn are required to increase the production of IL-6 and TNF, thus connecting activation of the IRE1α-XBP1s branch of the UPR with TLR-dependent pro-inflammatory programs (68). In the CNS, misfolded α-synuclein and fibrillar Aβ, characteristic of patients with PD and AD, can be sensed by TLR1/2 and TLR4, further promoting inflammation (63) (Figure 2). Moreover, injection of lipopolysaccharide (LPS), an agonist of TLR4, into the substantia nigra induces dopaminergic neuronal death resembling animal PD models (73). LPS-induced neurotoxicity and LPS-induced expression of inducible nitric oxide synthase (iNOS) were shown to be mediated by the UPR-related chaperone BiP/Grp78 and NF-κB (74,75).
Correspondingly, Tlr4-null mice are protected from PD in a mouse model induced with neurotoxins (63,76). Overall, TLR pathways activating the IRE1α-XBP1s axis are relevant drivers of PMDs, although the precise contribution of this UPR branch to TLR-induced neuroinflammation remains to be formally demonstrated.

Another PRR relevant in neurodegeneration and modulated by the UPR is the NLRP3 (NLR family pyrin domain-containing-3) inflammasome, a multimeric protein complex composed of the NLRP3 sensor, the adaptor ASC and activated caspase 1, which mediates the proteolytic activation of IL-1β and IL-18 and promotes a type of inflammatory cell death referred to as pyroptosis (63). In the brain, the NLRP3 inflammasome is activated by amyloid β and α-synuclein aggregates (63). The relevance of this protein complex is underscored by studies with Nlrp3-deficient mice carrying mutations associated with familial AD, which are protected from the disease (77). On a mechanistic level, the interplay between the UPR and inflammasome activation has been connected to IRE1α signaling (78), where the RNase domain of IRE1α increases the expression of TXNIP, an activator of the NLRP3 inflammasome, through degradation of the TXNIP-destabilizing microRNA miR-17 (78) (Figure 2). Considering the relevance of the NLRP3 inflammasome in AD progression and its dependence on the IRE1α endonuclease, it is tempting to speculate that IRE1α activation in CNS-resident myeloid cells may contribute to the development of AD (79)(80)(81)(82)(83)(84). Additionally, the B-class scavenger receptor CD36, upon recognition of amyloid β fibrils, forms a complex with TLR4/6, which triggers activation of the NLRP3 inflammasome, promoting cytokine and ROS production (67,85). On the other hand, in models of peripheral nerve damage, XBP1 expression has been shown to enhance nerve regeneration after injury, involving increased expression of the chemokine MCP-1 and macrophage infiltration, essential to remove myelin debris and allow axonal regeneration (44). PERK expression correlates with astroglial activation and the production of IL-6 and the chemokines CCL2 and CCL20, which promote microglial activation (71,86). In spinal cord injury, ATF4 deficiency reduced microglial activation, which is associated with altered levels of IL-6, TNFα, and IL-1β (44)(45)(46)(47)(48)(49). Similarly, ATF6 deficiency in the context of neurotoxin-induced PD leads to suppression of astroglial activation and decreased production of BDNF and antioxidative gene products, such as heme oxygenase-1 (HO-1) and xCT (56). To sum up, ER stress and inflammation are both prevalent in many neurodegenerative diseases, and NAMPs can alter neuronal function as well as promote inflammation through the activation of innate defense mechanisms of immune cells in the CNS, which can be modulated by UPR activity and vice versa.

Immune Targets of the UPR in the Central Nervous System

Although it is clear that inflammation contributes to neurodegeneration (61), knowledge about the homeostasis of immune cells residing in the CNS has been limited. Recent technological advances in single-cell analysis have provided insights into the identification and characterization of the vast diversity of immune cell lineages present in the healthy and pathogenic brain (61,62). The potential role of the UPR in immune cell lineages in the CNS is illustrated in Figure 2.

FIGURE 2 | The main immune cells in the brain are microglia, which along with border-associated macrophages ("BAMs") and dendritic cells act as sentinels, sampling the environment and clearing cell debris, maintaining CNS homeostasis. Except for dendritic cells and macrophages, which exhibit IRE1α/XBP1s activation, little is known about UPR activation in additional myeloid subsets, although microglia, macrophages and monocytes could potentially activate this axis downstream of PRR signaling. While very rare, B and T cells have been identified in the steady-state brain, and activation of IRE1α/XBP1s has been proposed to be critical for their differentiation and activation. The ATF6 axis is also necessary for B cell development and activation, whilst absence of PERK contributes to plasma cell differentiation and immunoglobulin synthesis. Basal activation of the UPR in neurons is still a matter of debate in the literature, as the function of the IRE1α and PERK pathways has just begun to be understood in the context of normal neuronal physiology (72). In aging and neurodegeneration, the number of immune cells within the brain increases, due to higher cell activation as well as infiltration across the blood-brain barrier. Extracellular protein aggregation promotes activation of immune cells via PRRs, in addition to inducing ER stress and activation of the UPR, mainly the IRE1α/XBP1s axis. Microglia and dendritic cells become more activated, with higher production of pro-inflammatory and oxidative mediators and loss of their protein clearance function. This is further aggravated by antibodies against CNS-derived antigens produced by B cells accumulated in the CSF, mediated by the activation of IRE1α and ATF6 signaling. Activation of infiltrating T cells reactive to α-synuclein, amyloid-β and myelin constituents further amplifies inflammation, resulting in more protein aggregation and neuronal loss. In neurons, UPR triggering may elicit both adaptive and neurodegenerative responses, since all three UPR pathways are engaged in brain diseases and have been found to be altered during the normal aging process. Different inducers of neuroinflammation have been shown to engage the UPR in neurons and promote a greater inflammatory response due to immune cell infiltration, mainly B and T cells. The cDC1 subset of dendritic cells could activate IRE1α for cross-presentation of antigens to infiltrating CD8+ T cells, and cDC2 as well as monocyte-derived DCs may set an inflammatory environment through cytokine secretion and activation of infiltrating CD4+ T cells. Macrophages and microglia also become highly activated and could tune IRE1α/XBP1s upon recognition of NAMPs. Inflammatory mediators such as cytokines prime axonal destruction and neuronal loss. It remains to be addressed whether UPR triggering in these cells corresponds to a homeostatic (adaptive) response or a terminal (neurodegenerative) response due to sustained unresolved ER stress.

Microglia

Microglia are the CNS-resident macrophages and the most prominent myeloid cells in the brain (87).
Microglia fine-tune the development of neuronal circuits, neurogenesis and synaptic plasticity through the production of neurotrophic factors (88,89). Given that several PRRs that signal via IRE1α and XBP1s, such as TLR1/2 and TLR4, the NLRP3 inflammasome and nucleic acid sensors, are expressed in this cell lineage, it is plausible that microglial XBP1s activation may contribute to the initiation of neuroinflammation. The ATF6 branch has also been associated with microglial activation and production of inflammatory mediators via NF-κB (90). Furthermore, although long conceived as a homogeneous cell type that becomes destructive in neurodegeneration (62), comprehensive single-cell RNA analysis has demonstrated that a subset known as "disease-associated microglia" plays an important role in several CNS diseases including AD, ALS and MS, and also in aging (62,(91)(92)(93). Thus, it is vital to elucidate whether protective microglial populations engage the UPR upon innate recognition of NAMPs, and whether the microglial UPR is an intrinsic mechanism of danger sensing in the CNS.

Border-Associated Macrophages

Border-associated macrophages (BAMs) are a recently characterized population distinct from microglia and from infiltrating monocyte-derived macrophages, which display high heterogeneity and are classified by phenotype, development and location in the CNS (62,94). Single-cell analysis, fate mapping and parabiosis experiments revealed that these cells express distinct surface markers and differentially populate the pia mater, perivascular space, choroid plexus and dura mater (62,94). Most of these subsets sample the environment, clear apoptotic cells and amyloid β plaques, and help maintain CNS homeostasis in the steady state. To date, there is no evidence available on the extent of UPR activation in BAMs. However, it has been described that splenic F4/80 macrophages display basal levels of IRE1α RNase activity and that, upon bacterial infection, peripheral macrophages induce XBP1s to enhance cytokine production in a mechanism mediated by TLRs and reactive oxygen species (68,95). Whether CNS macrophages show a functional analogy to peripheral macrophages and also engage the IRE1α-XBP1s branch upon recognition of NAMPs (68) remains undetermined.

Dendritic Cells

DCs are major APCs in the CNS, acting as sentinels between the brain and the periphery (87,(96)(97)(98)(99). The steady-state CNS is populated by most DC subtypes, including plasmacytoid DCs (pDCs) and conventional DCs type 1 (cDC1) and type 2 (cDC2) (62). These cells locate in the choroid plexus, pia mater and dura mater, but not in the perivascular space, suggesting that these compartments may serve as entry sites for MHC-dependent T cells (62,96,97). Importantly, DCs are key targets of the UPR. XBP1s is constitutively expressed by DCs, and high XBP1s is a hallmark of cDC1s across tissues, although the CNS remains to be examined (95,100,101). Furthermore, cDC1s activate the IRE1α-XBP1 axis for development, survival in mucosal tissues and cross-presentation of antigens to CD8+ T cells, which may be of relevance in infections with neurotropic viruses (2,102). In addition, cDC1s are highly sensitive to perturbations in XBP1 signaling and counter-activate RIDD upon XBP1 loss (95,101). The implication of RIDD and XBP1s signaling in DC subtypes in the CNS has not been explored so far, but relevant aspects downstream of XBP1s and RIDD may encompass cytokine production upon recognition of protein aggregates, cell survival and cross-presentation of antigens to CD8+ T cells.

Lymphocytes

T and B cells survey the steady-state CNS, exerting a neuroprotective role, but can become pathogenic under unresolved inflammation (57,(103)(104)(105)(106). T cell numbers have been found to be increased in AD, PD, ALS and MS, and to contribute both to inflammation and neuronal dysfunction as well as to deferring inflammatory responses leading to neurodegeneration (107,108).
The immune response elicited by these cells in the CNS depends on their functional phenotype, although observations regarding cell numbers and the T cell subsets involved vary between disease types and models of study (108)(109)(110)(111)(112)(113). UPR activation in T cells is not completely elucidated; however, the IRE1α-XBP1s branch has been shown to regulate cell differentiation and cytokine production in CD8+ and CD4+ T cells under infection and chronic ER stress (114)(115)(116)(117)(118). During neuroinflammation and aging, B cells play a pathogenic role by producing pro-inflammatory cytokines, promoting effector T cells and activating macrophages via Fc receptors (62,(119)(120)(121)(122)(123). B cell development, activation and differentiation are critically regulated by IRE1α-XBP1s and ATF6, whilst absence of PERK favors plasma cell differentiation and immunoglobulin synthesis (124)(125)(126)(127)(128).

Overall, as proposed in Figure 2, activation of UPR components could occur in CNS-residing and infiltrating immune cells upon PRR recognition of protein aggregates or other noxious threats. The IRE1α-XBP1s axis has a key role in immune cell development from hematopoietic progenitors, cell survival and effector function, and it could be activated by NAMPs through PRR signaling in microglia, macrophages or dendritic cells, inducing cell maturation and activation (66,68,88,97). The PERK pathway, in contrast, is mostly deactivated to allow immune cells to fulfill their function under different inflammatory settings without undergoing apoptosis. In AD or PD, however, sustained stimulation triggered by amyloid β or α-synuclein aggregates could lead to a dysfunctional activated phenotype associated with defective clearance and increased production of inflammatory mediators. This process could, in turn, attract more immune cells that exert a neurotoxic effect, promoting the accumulation of more protein aggregates, axonal destruction and neuronal malfunction (129,130). Under such chronic ER stress, UPR signaling would be expected to be highly activated in CNS-related immune cells, in line with observations in brain samples of patients. Nevertheless, it remains to be addressed whether the UPR output in CNS-associated immune cells proves to be beneficial or detrimental for the development of PMDs, as is the case for neurons and astrocytes (131,132).

CONCLUDING REMARKS

Study of the interplay between the UPR, the immune system and the CNS in neurodegenerative diseases remains in its early stages. Intensive research will be required to accurately understand the role of ER stress in the immune-related aspects of CNS pathology and to determine whether UPR signaling in immune cells corresponds to a homeostatic or a terminal fate. It is also important to keep in mind the potential differences between human and mouse immune cell types, since most knowledge gained on this matter emerges from studies in murine models. Through our knowledge of the UPR's role in peripheral immunity and neurodegeneration models, better access to human samples and the advent of novel analytic tools for identifying the diversity of cell lineages, the cell-specific contribution of the UPR to neural and CNS-associated immune cells will begin to be elucidated, generating valuable knowledge that may provide therapeutic opportunities.

AUTHOR CONTRIBUTIONS

All authors read and approved the final version of the manuscript. PG-G and FC-M contributed equally to the work. PG-G, FO, FC-M, and CH participated in manuscript conception and design.
Necrobiosis lipoidica – an old but challenging dermatosis

Introduction. Necrobiosis lipoidica (NL) is a rare skin condition associated with diabetes that occurs in 0.3–1% of diabetic patients. Nevertheless, there are patients who develop this entity in the absence of diabetes mellitus. Objective. To highlight some problems in the differential diagnosis and treatment of NL. Case report. We report a case of a 44-year-old woman with a 3-year history of undiagnosed skin lesions. Clinical examination revealed atrophic plaques on an erythematous background located on both shins. The first histopathological examination revealed infiltrated granulomatous lesions, later confirmed as deep tuberculoid chronic granulomatous dermatitis. However, after 6 months of anti-tubercular treatment the skin lesions were still present. Based on another skin biopsy we restricted the differential diagnosis to polyarteritis nodosa, necrobiosis lipoidica and granuloma annulare. Further examination led us to establish a diagnosis of necrobiosis lipoidica. The patient was treated with many therapeutic regimens without a satisfactory response. Conclusions. The lack of treatment guidelines, and therapy based on dermatologists' clinical experience, suggest that NL needs further evaluation in the near future. A large retrospective study and a systematic review are required to establish diagnostic and treatment regimens.

Introduction

Necrobiosis lipoidica (NL) is a chronic granulomatous disease involving both the dermis and subcutaneous tissue, first described in 1929 by Oppenheim [1]. Necrobiosis lipoidica presents as indolent shiny plaques, in most cases localized on the lower extremities. Usually, the disease progresses from red-brown telangiectatic lesions to atrophic, depressed and sclerotic plaques. It is considered that NL occurs mainly in diabetic patients, but there is increasing evidence of its concomitance with sarcoidosis, connective tissue diseases (CTD) and inflammatory bowel diseases (IBD) [2][3][4]. Based on data from the literature, about 75% of NL patients also suffer from diabetes mellitus (DM) [5], but NL occurs in only 0.3–1% of diabetic patients [6]. The association between NL and DM remains equivocal; there are still patients who develop these lesions in the absence of DM. The histopathological picture of NL demonstrates an inflammatory process with granuloma formation, degeneration of collagen and thickening of the endothelium. Granulomas, composed predominantly of lymphocytes and histiocytes with occasional plasma cells and eosinophils, are arranged in a layered fashion. Reduction of intradermal nerves and perivascular lymphocyte concentration are supplementary features of NL [7]. The pathogenesis of NL remains unknown, although many investigators have considered diabetic microangiopathy. Other conceptions about the development of NL include immunoglobulin deposition in the endothelium and platelet aggregation dysfunction [7,8]. Słowik-Kwiatkowska et al.
[9] demonstrated that increased tumor necrosis factor (TNF)-α production in NL leads to impaired angiogenesis, which results in microangiopathy. There are also no guidelines on the treatment of NL; immunosuppressive drugs such as cyclosporine, corticosteroids and antimalarials are not always effective, but there is increasing evidence that phototherapy combined with psoralen administration (PUVA) is effective [10].

Objective

In this article we present a case of NL which was very challenging in both diagnosis and treatment.

Case report

A 44-year-old woman was admitted to our hospital with a 3-year history of skin lesions (Fig. 1). Six months before admission, a diagnosis of deep tuberculoid chronic granulomatous infiltrate, based on histopathological findings, had been established, and at that time she was treated with isoniazid and rifampicin without the expected improvement. During the first hospitalization, clinical examination revealed slightly painful red-brown atrophic plaques with visible blood vessels located on both shins. Some small areas showed crusting and superficial ulceration. The histopathological analysis suggested the diagnosis of necrobiosis lipoidica. We decided to administer phototherapy combined with psoralen, but 72 cycles of PUVA baths did not bring a satisfactory effect. The next pathological examination revealed the presence of granulomas and fibrosis in the dermis, accompanied by an inflammatory infiltrate. The diagnosis suggested chronic granulomatous dermatitis, which can be observed in sarcoidosis, tuberculosis or necrobiosis lipoidica; nevertheless, the chest X-ray did not indicate any disorder. We performed polymerase chain reaction, but mycobacterial DNA was not detected in the biopsy specimens. Then, we started dapsone therapy (100 mg per day), which resulted in blanching and flattening of the skin lesions and healing of minor ulcerations (Fig. 2). After 2 months of dapsone therapy, side effects from the gastrointestinal tract (nausea, mucosal erosions and oral dryness) and paresthesias in both legs were observed, so it was discontinued. Additionally, the atrophic plaques enlarged again and new skin changes were noticed. The third pathological examination revealed perivascular macrophage infiltration and focal necrobiosis, with the suggestion of polyarteritis nodosa, necrobiosis lipoidica or granuloma annulare. We decided to introduce cyclosporine therapy at a dose of 300 mg per day. The venereal disease research laboratory test (VDRL) and anti-nuclear antibody levels were both negative, and other laboratory examination results were within normal limits. The skin changes were still painful and showed no signs of clinical remission. The lack of improvement in the patient's condition resulted in a fourth biopsy, which clearly suggested necrobiosis lipoidica. Cyclosporine therapy was replaced by methylprednisolone and pentoxifylline, but no significant remission of skin lesions was observed, although the patient reported a reduction of pain. The patient remains under observation in an outpatient clinic, but after various therapies we have not achieved clinical remission (Fig. 3).
Discussion

Necrobiosis lipoidica is an idiopathic dermatological condition, and very few studies have been conducted to establish its pathogenesis. The age of onset varies between infancy and late adulthood, and in most cases NL occurs in the third decade [11]. Necrobiosis lipoidica also has a sex predilection, being three times more common in women than in men [11]. One of the main problems with NL is its clinical similarity to other skin conditions: morphea, sarcoidosis and granuloma annulare [12]. Both morphea and NL can present as isolated yellow patches on sclerotic skin, but NL can be distinguished from morphea by the telangiectasias around the lesions. In the case of sarcoidosis, cutaneous symptoms range from nodules and rashes to erythema nodosum; skin atrophy occurs rather rarely, and in our case the absence of radiological abnormalities excluded sarcoidosis from the differential diagnosis. In the early form of necrobiosis lipoidica, superficial annular lesions can resemble granuloma annulare, as in our case [12]. These data indicate that the diagnosis of NL is time-consuming; in our case the final diagnosis was established only after four biopsies.

Treatment of necrobiosis lipoidica is challenging and usually only marginally effective. Previously, we reported the effectiveness of PUVA therapy in NL [10]; after a series of sessions we observed remission of skin lesions in some cases. Despite those promising results, in the reported case long-term photochemotherapy did not bring any improvement. The presence of an inflammatory infiltrate in the examined skin specimens was the reason why dapsone was introduced as a new treatment modality. The anti-inflammatory properties of dapsone could explain some regression of the lesions in our case, but this therapy had to be discontinued because of side effects. Literature data indicate the effectiveness of cyclosporine A in the treatment of NL [13]. In the present case, 3 months of treatment with cyclosporine A did not bring sufficient improvement or even slight pain reduction.

Systemic and topical treatment with corticosteroids is one of the most widely used therapies for NL, but in our case long-term therapy with methylprednisolone did not bring detectable clinical improvement. Nevertheless, Petzelbauer et al. [14] described successful 5-week systemic corticosteroid therapy with complete regression of skin lesions and no recurrence within 7 months after treatment. Vascular drugs are also widely used in the therapy of NL: Basaria and Braga-Basaria [15] and Littler and Tschen [16] reported successful treatment of NL with pentoxifylline, but in our case it was ineffective.

Conclusions

Despite a thorough diagnostic process including numerous histopathological examinations and laboratory tests, the diagnosis of NL was significantly delayed. The lack of treatment guidelines is another problem, so the applied therapy is based mainly on dermatologists' clinical experience. The good outcomes achieved by other doctors were not confirmed in our case. We present this case report as it was challenging in both differential diagnosis and treatment.
Renewable Hydrocarbons from Triglycerides' Thermal Cracking

This chapter gives an overview of renewable hydrocarbon production through the triglyceride thermal-cracking process. The influence of feedstock characteristics and availability is discussed. It also presents issues about the reaction, the effect of operational conditions, and catalysts. A scheme of the reaction is presented and discussed. The composition and properties of bio-oil are presented for both thermal and catalytic cracking. The high content of olefins and the high acid index are drawbacks that require downstream processes. The reactor design, kinetics, and scale-up are opportunities for future studies. However, the similarity of bio-oil with crude oil makes this process attractive.

Introduction

Nowadays, the search for processes that reduce the use of and dependence on fossil fuels is imperative. Decreasing the emission of greenhouse gases must be a global effort. In this context, biomass appears to be the logical choice to produce solid, liquid, and gaseous fuels, since it is abundant and available all over the world [1]. There are many technological processes applied to different kinds of biomass being studied and proposed by the scientific community [2]. One thing is for sure: no single technology will solve all the issues; rather, different technological routes will be needed, taking into account the specific characteristics of the source region and the feedstock. Besides the fact that these new technologies to produce biofuels must be environmentally friendly, they face obstacles to achieving economic and technical viability and high-scale, stable production. Specifically for liquid biofuels, another technical barrier is the fact that almost every machine and vehicle was designed for fossil fuel usage. These fossil fuels have several regulations and quality parameters that must be met for commercialization. Thus, it is a sine qua non condition that the new generation of liquid biofuels be compatible with the current standards of engines. Modern electronic fuel injection systems make it possible to use different fuels while maintaining good combustion in the engine. However, the higher the similarity of the biofuel to fossil fuels, the higher the possibility of its commercial application. In this context, the organic liquid product produced by thermal cracking of vegetable oils and waste fats shows high potential as an oil substitute in refineries [3]. The objective of this chapter is to provide a brief overview of the thermal conversion of triglycerides into a liquid fraction, called bio-oil, rich in hydrocarbons, and to present its properties.

Thermal cracking of triglycerides

The production of bio-oil through thermal cracking of biomass is widely reported in the literature. Bio-oil is defined as a dark brown, viscous, corrosive fuel obtained from biomass pyrolysis [33], but it is very important to highlight that bio-oil has different properties according to the feedstock. If it is produced from lignocellulosic materials, the bio-oil has significant water and oxygen contents, decreasing its gross calorific value and its stability [34,35]. On the other hand, if the feedstock is triglycerides, the oxygen and water contents are low and the high heating value is comparable to that of fossil fuels [6,36]. Another important characteristic is the similar chemical composition, based on hydrocarbons [37].
So, based on these issues, the production of bio-oil from triglycerides appears to be one of the most promising technologies for biofuels [38].

Feedstock

Triacylglycerol, also known as triglyceride (TAG), is an ester derived from glycerol and three fatty acids [38]. It can be found in edible and nonedible vegetable oils, animal fats, and used oils. The most abundant vegetable oils are soybean, palm, canola, sunflower, and rapeseed, among others. From animals, the main sources are pork lard, poultry fat, fish oil, and beef tallow [39]. Waste greases or trap greases come from used cooking oils and sewage scum [40]. In general, these feedstocks have similar physical properties and chemical structures; they differ in the composition of the fatty acids, in the acidity, and in the content of saturated fatty acids [39]. The acidity of the oil is evaluated through the acid index determination (ASTM D974), which gives the free fatty acid (FFA) content of the oil. Waste oils are classified into yellow and brown greases according to the FFA content: oils with less than 15% (w/w) FFA are classified as yellow greases, while those with more than 15% are brown greases. The iodine index (pr EN 14111) provides the number of double bonds in the fatty acids. Oils with a high content of unsaturated acids are liquid at ambient conditions, whereas oils with a high content of saturated acids are solid or semisolid under the same conditions. The fatty acid composition is provided by fatty acid methyl ester (FAME) determination [41], a chromatographic analysis that is a well-accepted method for this purpose. The fatty acid compositions of various TAGs can be found in the literature [34,41,42].

One fact that must be considered when discussing biofuel production from TAGs is feedstock availability [42]. In this regard, there are two subjects to consider: the use of edible oils and the logistics of collecting the waste ones. In the first case, we need to consider the food-versus-fuel issue. The main concern is based on the assumption that biofuel feedstocks tend to be more profitable than food feedstocks, which may lead to food shortages. Thus, this must be carefully weighed in order to efficiently serve both markets [43]. In the second case, it is possible to consider waste cooking oils, animal fats, and sewage scum. For cooking oils, the generation varies by country, as it depends on vegetable oil consumption. The estimated generation in the European Union (EU) is about 700,000-1,000,000 tons/year [44]. The UK alone generates approximately 250,000 tons per year [45]. Canada produces around 135,000 tons of yellow grease every year [46]. Mexico's generation is about 840,000 tons every year, similar to Malaysia's. Japan produces around 450,000-570,000 tons/year [47]. Hong Kong generates approximately 20,000 tons/year [48]. The USA's generation is about 1,000,000 tons/year [47]. Overall, it is estimated that the worldwide generation is around 4.1 kg per inhabitant per year [49]. Animal fat availability is also region-dependent. It is well known that China, the USA, and Brazil are large producers of meat. In 2013 alone, the US industry processed 180,000 tons of meat and poultry [50]. The fish industry also plays an important role: in 2014, the world fish production was about 146 million tons [51]. As the oil content ranges from 40 to 65% [52], this represents around 70.8 million tons of waste fish oil.
Thus, these numbers show that it is possible to use biofuel production as a final destination for these wastes. It is important to highlight the complex logistics involved, and that these amounts will not replace petroleum, but they can be a viable alternative.

Process and reaction

The thermal-cracking reaction is defined as the thermal decomposition of organic chains by heat in an oxygen-free atmosphere, with or without the aid of a catalyst. Figure 1 presents a basic scheme of the triglyceride thermal-cracking process. As one can see in the scheme, the reaction will always generate a solid fraction, generally called coke, a liquid product named bio-oil, and a gaseous stream known as biogas. This reaction is affected by the feedstock characteristics and by the temperature-residence time pair [34]. The higher the temperature and the residence time, the higher the yield of the gas product. Lower temperatures and longer residence times favor coke formation. Moderate temperatures with short residence times favor the liquid product. This last operational condition is called fast pyrolysis [5]. The fast pyrolysis process is gaining attention due to the possibility of obtaining high amounts of bio-oil, which can be used as fuel. Figure 1 shows that, independent of the operational conditions, the solid fraction called coke will appear, and this product is not easily removed from the reactor. In general, its formation is associated with clogging [53]. One possibility for removing it is to perform a controlled burn in the heated reactor by feeding air instead of biomass for a certain period of time, promoting combustion of the coke.

The reactor design is the heart of the process [54]. Different configurations have been proposed in the literature. It is possible to find batch [9,10,12,16,21,22,24,31,32] and continuous configurations [4-8, 11, 13, 17-20, 23, 25-27, 55]. In general, batch reactors are used to evaluate the reaction mechanism, kinetics, yields, and chemical characterization. As they work at lower capacities, they are not appropriate for industrial applications. The continuous ones are built at larger scales, bench or pilot, testing different reactor designs and operational conditions and evaluating the kinetics, yields, characterization, energy consumption, and economics, aiming at scale-up studies [26]. The irreversible reaction is highly endothermic and requires high heat transfer rates. The possibility of running the process in autothermal operation provides an advantage over other processes; this condition can be reached by burning a fraction of the products to produce the thermal energy required for the reaction. An energy balance of TAG thermal cracking was presented by [5]. Due to the complexity of the organic reactions, there is no complete knowledge of all the reactions involved, only proposals for the principal ones. A simplified reaction scheme for the thermal cracking of triglycerides is presented in Figure 2. The reaction starts with the decomposition of the triglyceride molecule, forming heavy oxygenated hydrocarbons. Some of the saturated fatty acids formed may not undergo any subsequent breaking. The decarboxylation and decarbonylation reactions (2) are favored by unsaturations and compete with the C-C bond cleavage reaction (3). CO and CO2 are formed by the deoxygenation reactions in (2) and (4).
Isomerization, polymerization, dehydrogenation, and cyclization are responsible for dienes, acetylenes, cycloparaffins, and polyolefins (5). The Diels-Alder addition of dienes to olefins also produces cyclo-olefins (8), resulting in hydrogen formation. The hydrogenation of cyclo-olefins to cycloparaffins and the reverse reaction occur in steps (6) and (7). Hydrogen also comes from steps (9) and (10). The solid product, coke, is produced directly from the triglyceride (12), by the polycondensation of heavy hydrocarbons and saturated fatty acids (11), and from aromatics (10). The polymerization of olefins can also lead to coke (13). Considering the reaction scheme in Figure 2, it is very important to advance the cracking at least to the point at which the deoxygenation reactions occur, eliminating the oxygen as CO and CO2. It is also important to avoid coke formation (steps 10 and 13 in Figure 2). As a first conclusion, for thermal cracking, the temperature-residence time pair is the key factor of this process.

The use of catalysts aims to aid the reaction and increase product quality [57]. As the composition of the products may vary with catalyst material, size, and shape [58], several works have evaluated the use of many types of catalysts. Table 1 shows the different catalysts used for the cracking of triglycerides. One of the concerns involving catalyst use is their stability and reusability, which directly affect the cost of the process [31]. In general, coke formation limits the use of heterogeneous catalysts due to deactivation, and this phenomenon requires a regeneration process for catalyst reuse, making the entire conversion process more complex. A reaction scheme for catalytic cracking was proposed by [59].

Yields, properties and characterization

The yields of the products are strongly affected by the operational conditions. Table 2 shows the range of temperatures and residence times applied in published papers, presenting the average product yields obtained in thermal [4-6, 9, 10, 13-15, 17, 18, 22, 24, 26, 31, 55] and catalytic cracking [7, 9, 11, 15, 17, 18, 20, 21-23, 25, 27, 31, 55]. In thermal-cracking processes, the temperature range is higher than in catalytic ones. One can also note that the yields of liquid and gas products tend to be a little higher in thermal cracking. On the other hand, coke formation is more favored in catalytic cracking. Liquid fuels have fundamental importance in final energy consumption, especially due to their energy density; hence, most research efforts aim to maximize the organic liquid product. No less important are the properties and characterization of this product. Table 3 presents average properties of the bio-oil reported in the literature for thermal [4, 5, 9, 10, 12-15, 18, 19, 22, 24, 31, 55] and catalytic cracking [7-9, 11, 15, 17, 18, 20-22, 55]. The elemental chemical composition of the bio-oil does not vary much, and the sulfur content is low. The high heating value (HHV) is also comparable to that of fossil fuels. The acidity of the bio-oil is higher for thermal cracking than for catalytic cracking, but in both cases the bio-oil requires a reduction of this property for processing and usage. Esterification and reactive distillation were performed by [11] and [60] to reduce the acid index. The content of olefins in the liquid can also be problematic, since it is associated with poor stability, which may lead to the formation of gums or insoluble materials.
To saturate the double bonds, hydrorefining can be applied [61]. Direct hydrocracking can also be an option [62][63][64].

Table 3. Average properties of bio-oil.

One way of characterizing these liquid fuels is the distillation curve, which plots the true boiling point (TBP) versus the distilled volume fraction. In general, a simple distillation is performed according to the ASTM D86 and ASTM D1160 methods, and the data obtained are converted to TBP according to the correlations outlined in [65]. Process simulators can also be used for this conversion and to predict the thermophysical properties of the oil and its fractions [66]. Bio-oil characterization using distillation curves with petroleum correlations was presented by [34]. The authors showed that it is possible to use this method, but more studies are required to confirm the results. A chemical characterization was performed by [37] on the distilled fractions of the bio-oil produced by [4]. The purified products, light bio-oil and heavy bio-oil, were obtained in the boiling ranges of gasoline and diesel oil, respectively. The detailed hydrocarbon analysis (DHA) performed on the light fraction showed that it was composed of aromatics (16.86%), i-paraffins (8.31%), naphthenes (6.07%), olefins (26.56%), paraffins (4.48%), C14+ (5.3%), oxygenates (0.06%), and unclassified compounds (32.38%). The heavy bio-oil was mainly composed of olefins, aromatics, and carboxylic acid residues. In a continuation of the study [60], samples of the bio-oil were submitted to a reactive distillation process to produce light and heavy bio-oil cuts with a lower acid index. The gaseous products are as important as the liquids, since they contain short hydrocarbons, have a high HHV, and can be a fuel source for the thermal energy required by the endothermic reaction. Table 4 presents the average composition of the biogas from thermal [5,10,13,55] and catalytic cracking [7,8,17,23,55]. Using this average composition, the HHV is estimated at 46.6 MJ/kg (thermal cracking) and 46.3 MJ/kg (catalytic cracking). The high content of ethene also makes this product interesting for petrochemical industries.

Table 4. Average composition of the biogas produced from thermal and catalytic cracking.

Kinetics

One of the technical difficulties in scaling up the process is the determination of the reaction kinetics. Since the process involves hundreds, maybe thousands, of reactions, it is very difficult to determine an accurate kinetic mechanism. In these cases, the first step is to use the lumping method to propose simplified mechanisms. The lumping strategy consists of joining groups of products according to some similar property, the boiling range, for example. The works of [67][68][69] presented the first lumped kinetic models for TAG thermal cracking. Table 5 shows the kinetic models proposed in the literature. The model proposed by [67] is simpler than the other models, since it has fewer lumps, but it can predict the solid fraction. The study of the kinetics of TAG cracking is increasing, and more models should appear soon; a minimal illustrative sketch of the lumping idea is given at the end of this chapter.

Challenges

The continuous availability of the feedstock is an issue that requires complex logistics for high-scale collection. In certain regions, staying close to animal-rendering facilities can be an option [70]. The industrial application of the thermal/catalytic-cracking technology has some obstacles to overcome [71]. The first is related to reactor design and scale-up.
Challenges

The continuous availability of the feedstock is an issue, and high-scale collection requires complex logistics. In certain regions, locating close to animal-rendering facilities can be an option [70]. The industrial application of thermal/catalytic-cracking technology has some obstacles to overcome [71]. The first is related to reactor design and scale-up. As the kinetic models improve, simulation using computational fluid dynamics should help to deal with this issue. A short work presented in [72] deals with the simulation of a TAG thermal-cracking reactor aimed at scale-up studies. Product upgrading is also required, especially to deal with the acid index and the olefin content. Reducing the acidity, which is mainly caused by carboxylic acids, through esterification and neutralization is an opportunity to address this issue. The alkene content can be reduced through hydrotreatment reactions, which are widely used in oil refineries. Existing oil-refining sites may be suitable for producing this biofuel, since most of the polishing processes are already in place.

Conclusions

Thermal and/or catalytic cracking is a promising technique for producing hydrocarbons from a renewable source. The similarity of the products to fossil fuels makes their use and development attractive. However, some obstacles, such as feedstock availability, reactor design, scale-up, and product upgrading, require further study. The thermal/catalytic cracking of triglycerides will not completely replace petroleum, but it can reduce our dependence on it and is a suitable environmental option.
The incidence of biphalangeal fifth toe: Comparison of normal population and patients with foot deformity

Background: Pedal biphalangism, also defined as symphalangism, occurs at a frequency that cannot be ignored; however, no study in the literature has evaluated biphalangism in the normal population in comparison with patients who have foot disorders. The aim of this study was to evaluate the incidence of pedal fifth toe symphalangism in the normal population and in patients with foot deformities, including hallux valgus, pes planus, pes cavus, and pes equinovarus. We hypothesized that pedal fifth toe symphalangism may be a predisposing factor or an accompanying structural variation for foot deformity. Materials: Patients admitted to the emergency department of our center in October and November 2016 were defined as the control group, and patients with a diagnosis of hallux valgus, pes planus, pes cavus, or pes equinovarus treated between 2011 and 2016 in our department were defined as the foot deformity group. Individuals who had anteroposterior, oblique, and lateral radiographs of the foot were included in the study. Results: One thousand and four patients participated in this cross-sectional observational study. A biphalangeal fifth toe was found in 328 of 1004 (32.7%) patients. In the foot deformity group (n = 672), 222 patients (33%) had a biphalangeal fifth toe. In the control group, 106 (31.9%) of the 332 patients had a biphalangeal fifth toe. There was no statistically significant difference in the incidence of biphalangeal fifth toe between the two groups (p = 0.72). Conclusions: According to the results of this study, the biphalangeal fifth toe is a common pedal anatomical variant seen in approximately one-third of the population, whether or not a foot deformity is present. This information may be valuable for podiatrists undertaking the conservative or surgical treatment of fifth toe-related disorders.

Introduction

Pedal symphalangism is a relatively common condition; the joint between the intermediate and the distal phalanges of one or more lateral toes never develops, resulting in toes with two phalanges rather than three. According to radiographic and cadaveric studies, the incidence varies from 35% to 80% of the population. 1-7 In addition, the incidence of biphalangeal fifth toe differs ethnically. 2 Despite it being a common condition, a limited number of studies are available in the literature regarding the clinical manifestations of this anatomical variant. Problems in the identification of fifth toe fractures, and clinical findings such as dorsal corns and claw toes that form owing to the rigidity of the biphalangeal fifth toe, are some of them. 1,7-9 Recently, publications about this topic have increased, but there have been no studies comparing the normal population with the foot disorder population. The aim of this study was to evaluate the incidence of pedal fifth toe symphalangism in the normal population and in patients with foot deformities, including hallux valgus, pes planus, pes cavus, and pes equinovarus. We hypothesized that pedal fifth toe symphalangism may be a predisposing factor or an accompanying structural variation for foot deformity.

Methods

Patients who were admitted to our department and underwent anteroposterior, oblique, and lateral radiographs of the foot participated in this cross-sectional observational study.
Patients with a diagnosis of hallux valgus, pes planus, pes cavus, or pes equinovarus who were treated in our department between 2011 and 2016 were included in the foot deformity group. Patients who were admitted to the emergency department of our center in October and November 2016 with foot trauma without an accompanying foot deformity were included in the control group. Patients with insufficient radiographs lacking true anteroposterior, lateral, and oblique views were excluded from the study. After exclusion, the foot radiographs of 1004 patients were evaluated in this study (Table 1). The study was approved by the institutional ethics committee. We evaluated the foot radiographs of patients using the radiological database of our center (PACS Infinitt, Seoul, South Korea). Patients whose radiographic views showed a two-phalanx fifth toe were classified as having a biphalangeal fifth toe, and patients with a three-phalanx fifth toe were classified as having a normal fifth toe (Figures 1 and 2). Statistical analyses were performed using SPSS 20.0 software (SPSS Inc., IBM Corp, New York, USA). The comparison of categorical data was performed using the χ2 test. Values of p < 0.05 were considered statistically significant.

Results

Patient demographic data are demonstrated in Table 2. In the study population consisting of the foot deformity group (n = 672) and the control group (n = 332), with a total of 1004 patients, a biphalangeal fifth toe was observed in 32.7% of all patients (328 individuals; Table 3). Among the 672 patients with foot deformity, 222 patients (33%) had a biphalangeal fifth toe. In the control group, 106 of the 332 patients (31.9%) had a biphalangeal fifth toe. There was no statistically significant difference in the incidence of biphalangeal fifth toe between the normal population (the control group) and the foot deformity group (p = 0.72). Ninety-nine of the 273 patients (36.3%) in the hallux valgus group, 53 of the 179 patients (29.6%) in the pes planus group, 28 of the 106 patients (26.4%) in the pes cavus group, and 42 of the 114 patients (36.8%) in the pes equinovarus group were diagnosed with a biphalangeal fifth toe. The frequency of biphalangeal fifth toe was similar in patients with hallux valgus, pes planus, pes cavus, and pes equinovarus (p = 0.16). Bilateral foot radiographs were available for 204 patients with a biphalangeal fifth toe. In these radiographs, only 12 patients (5.8%) had a unilateral biphalangeal fifth toe.
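As a cross-check of the headline comparison, the following sketch recomputes the χ2 test on the reported 2 × 2 counts (222/672 vs. 106/332); the use of scipy and of the uncorrected Pearson statistic are assumptions, since the paper reports only that SPSS was used.

```python
from scipy.stats import chi2_contingency

# 2x2 table assembled from the reported counts:
# rows = foot deformity group, control group
# cols = biphalangeal fifth toe, normal fifth toe
table = [[222, 672 - 222],   # deformity: 222 of 672 (33.0%)
         [106, 332 - 106]]   # control:   106 of 332 (31.9%)

# correction=False gives the uncorrected Pearson statistic; the default
# Yates continuity correction would yield a somewhat larger p-value.
chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi2 = {chi2:.3f}, dof = {dof}, p = {p:.2f}")  # p ~= 0.72
```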
Discussion

Pedal biphalangism is seen at a frequency that cannot be ignored. The incidence of biphalangeal fifth toe in the adult population has been reported in the literature over a broad range that differs between ethnic populations. The lowest frequency was observed in the Indian population, whereas the highest was in the Japanese population. 5,10 In Western populations, the incidence was reported as 46% in the United Kingdom, 11 41.02% in French adults, 4 46.4% in Euro-Americans, and 44% in African Americans. 3 We observed a biphalangeal fifth toe in 31.9% of the normal population and in 36.3%, 29.6%, 26.4%, and 36.8% of the hallux valgus, pes planus, pes cavus, and pes equinovarus groups, respectively, in a Turkish population. When interpreting our results, it should be taken into consideration that the frequency of biphalangeal fifth toe varies according to ethnic background. In addition to ethnic background, the habit of wearing shoes may change the frequency of biphalangeal fifth toe. Rabi et al. reported a lower incidence of biphalangeal fifth toe in South Indian individuals compared with Western populations. 10 In their study, the authors mentioned that consistent shoe wearing in Western populations may increase the incidence of biphalangeal fifth toe owing to reduced vasculature and disuse of the toes (particularly the fifth toe). The authors also blamed improperly designed shoes for the development of biphalangeal fifth toe. In our population, shoe-wearing habits are similar to those in Western countries. Some studies have evaluated the correlation between biphalangeal fifth toe and a predisposition to symptomatic deformity. 12 However, to our knowledge, there is no study in the literature comparing the normal population and patients with foot deformity according to the number of phalanges in the fifth toe. In our study, despite the differences in foot anatomy and shoe-wearing habits between the foot deformity group and the control group, there was no statistically significant difference in the incidence of biphalangeal fifth toe. Foot deformities such as hallux valgus, rigid or flexible pes planus, pes cavus, and pes equinovarus are common in the population. These deformities affect the functional anatomy of the foot and toes. In individuals with these foot disorders, foot posture and foot function are associated with the presence of specific foot disorders. 13,14 The pes cavus foot, in which there is a loss of intrinsic musculature and domination of extrinsic musculature, suffers from the development of the characteristic hammer/claw toes. The lowered medial column in pes planus can alter the mechanics of the foot, resulting in a limited range of motion and overlapping toes. 13,14 In patients with hallux valgus, altered regional loading and foot kinematics have been shown. 15,16 Treatment of the toe deformities that arise as complications of these foot disorders starts with a conservative approach consisting of specially designed or customized shoes with cushion or pad support and splints to help restore the toes' natural position. Avoiding narrow, high-heeled shoes made from hard materials is another important measure. Surgical treatment is recommended if conservative treatment fails to relieve symptoms or if the deformed toes become rigid and immovable. There is no specific surgical treatment for a deformed biphalangeal fifth toe in otherwise normal feet; surgical intervention in deformed feet can be seen as part of the overall foot deformity correction. Patients with these disorders 1,12 or foot-related trauma 2,17 are the most frequent candidates for forefoot surgery, and surgery is more common in fifth toes with two phalanges. 1,9 Podiatrists undertaking the conservative or surgical treatment of these patients must be aware of this common anatomical variant of the foot. The main limitation of this study was that the other side was not evaluated in patients with unilateral foot radiographs. Sampling from only one side will miss some individuals who exhibit the trait in the other foot. 3 However, it should be noted that, in the literature, bilateral involvement has been reported in 97.4% of patients with a biphalangeal fifth toe. 18 Additionally, we did not assess phalanx numbers in the other toes. In past studies, fourth toe involvement in particular was reported in approximately 1-4% of European and American feet and in 12% of Japanese feet. 1 On the other hand, this is the first study, including 672 patients with foot deformity, to focus on differences in incidence between normal and deformed feet.
Conclusions

According to the results of this study, the biphalangeal fifth toe is a common pedal anatomic variant seen in approximately one-third of the population, whether or not a foot deformity is present. This information may be valuable for podiatrists in surgical approaches to the fifth toe.

Authors' note

This study was conducted in Erzincan University Faculty of Medicine, Baltalimani Bone and Joint Diseases Education and Research Hospital, and Sivas Numune Hospital.

Declaration of conflicting interests

The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding

The author(s) received no financial support for the research, authorship, and/or publication of this article.
When crisis hits: Bike-sharing platforms amid the Covid-19 pandemic

In this work, we examine the changes in demand for bike-sharing platforms with the onset of the Covid-19 pandemic. Using the fixed-effects regression formulation of difference-in-differences, we evaluate how the demand for bike-sharing platforms changed after the first cases of Covid were discovered and after the first executive orders were implemented. Accounting for weather conditions, socio-economic characteristics, time trends, and fixed effects across cities, our findings indicate that there is an increase in daily bike-sharing trips of 22% on average after the first Covid-19 case diagnosis, and a decrease of 30% after the first executive order implementation in each municipality, using data up to August 2020. Moreover, we observe a 22% increase in weekday-specific trip frequency after the first Covid-19 case diagnosis and a 28% decrease in weekend-specific trip frequency after the first executive order implementation. Finally, we find that there is an increase in the frequency of trips on bike-sharing platforms in more bike-friendly, transit-friendly, and pedestrian-friendly cities upon both the first Covid-19 case diagnosis and the first executive order implementation.

Introduction

The Covid-19 pandemic has significantly affected our societal and economic structures. Mandated lockdowns and voluntary precautions, which are taken to reduce the spread of the virus, have affected the demand for all modes of transportation, including public transport in cities. For example, Aloi et al. [1] indicate a fall of 76% in overall human mobility and a 93% decrease in public transport usage in Santander, Spain. Using aggregated mobility data from mobile phones in numerous urban areas in the U.S., Kishore et al. [2] show a surge in travel out of the cities immediately preceding the stay-at-home advisory. Another study points out a significant reduction in traffic volumes of 30% to 50% for select highways in California compared to the period before shelter-in-place orders [3]. Individuals have changed their transportation patterns because personal travel decisions affect the spread of Covid-19 [4]. In response to social distancing orders, people have been less inclined to board packed buses and trains where social distancing is impossible. Accordingly, individuals have reevaluated their transportation options in the face of the Covid-19 pandemic and shifted to more isolated modes such as biking or walking. With the Covid-19 pandemic, we have witnessed an increasing awareness of bicycles as an alternative means of transport, as many people either avoid using mass transit or encounter reduced mass transit services. In the U.S., sales of bicycles and related equipment almost doubled in March 2020 compared with the same period in 2019 [5]. In times of crisis, bicycles can provide resilience in transport systems, satisfying our mobility needs when mass transit systems are inaccessible. For instance, during the national public transit strike in France in December 2019, Parisians adapted and learned that bikes are dependable and credible modes of transport. The bike-sharing system in Paris, Vélib, gained popularity during the strike [6]. Other examples are the 2005 New York City transit worker strike and Hurricane Sandy in 2012, which severely disrupted New York's subway system. These events led to an increase in bicycle ridership in the city of New York of about 20% [7].
Bicycle sales also surged in Japan after earthquakes struck that country in 2011 [8]. With the surge in demand for bikes, the popularity of bike-sharing platforms likewise increased in March 2020 compared to the same period the year before. They have become a viable transport alternative that reduces the risk of contracting or spreading the virus and relieves the fear of overcrowding [9,10]. Compared to other transportation systems such as buses or trains, bicycling is an open-air activity and helps to avoid close contact with other travelers. Therefore, people have a more positive attitude toward bike-sharing for traveling amidst the pandemic [11]. For instance, Citi Bike in New York City announced a 67% increase in demand between March 1, 2020, and March 11, 2020, compared with the same period in 2019. Divvy in Chicago has also reported that the number of trips doubled in the same period [12]. A report from Foursquare and Apptopia shows that bike-share mobile application installations in May and June of 2020 were up 15.6% and 23.3%, respectively, compared to the prior year [13]. A recent study by Li et al. [14] analyzed the demand for the bike-sharing platform in London over the period from January 2019 to June 2020. They found that the number of bike-sharing trips in London decreased after the lockdown; however, this was followed by an increase in demand over time. Heydari et al. [15] investigated the impact of the Covid-19 pandemic on the London bike-sharing platform over the period from March 2020 to December 2020. They initially observed a reduction in bike trips between March and April 2020; however, demand increased in May and June 2020. Moreover, Bouhouras et al. [16] found that the demand for bike-sharing platforms in Greek cities such as Igoumenitsa, Chania, and Rhodes increased shortly before the lockdown period and peaked during the lockdown. Wang and Noland [17] examined the impact of Covid-19 on both bike-sharing and subway usage. They found that both subway ridership and bike-sharing usage plummeted at the beginning; however, bike-sharing usage has almost returned to normal, whereas subway ridership has remained substantially below pre-pandemic levels. Other recent studies [18,19] also revealed that bike-sharing platform usage in many cities has reached or surpassed pre-pandemic levels [20]. It has been argued that bike-sharing demand plummeted because of lockdown waves, even though it shows higher resiliency and a smaller drop than subway systems [10]. Following the stay-at-home advisories, many companies began to allow their employees to work from home, resulting in a significant reduction in travel within cities. Studies suggest that the demand for bike-sharing platforms also decreased due to increased levels of remote working and stay-at-home advisories, but not as much as for other means of transportation. A study conducted in Budapest, Hungary, shows that there was an 80% decrease in public transport demand and only a 2% reduction in the use of bike-sharing platforms during the pandemic [21]. Another study that used ridership data from New York in 2020 showed that bike-sharing trips decreased by less than 71%, whereas subway trips decreased by 90% compared to February and March of 2019 [10]. Another study by Li et al. [22] examined the changes in demand for micro-mobility services such as bike-sharing platforms in Zurich, Switzerland, before and during the lockdown period.
Their spatial and temporal analysis showed a decrease in the number of trips during the lockdown period. Their study also revealed that leisure- and shopping-related micro-mobility trips decreased while grocery-related trips increased. Apple mobility data show that, by the end of May 2020, there was a decrease in all modes of transportation, including driving, walking, public transit, and bike-sharing; however, public transit ridership was down much more than bike-share usage [23]. According to a press release by the Bureau of Transportation Statistics, ridership on eight of the largest docked bike-share systems in the U.S. declined by 44% from March through May 2020, compared to the same period in 2019 [24]. This could be explained by the fact that people were traveling less due to stay-at-home advisories and limited business operations, which might affect the demand for bike-sharing platforms as it does other transportation modes. In this paper, we examine how demand for bike-sharing platform usage changed immediately following the first Covid-19 case and the first executive order in the U.S., using data from January 2019 to July 2020. We use a fixed-effects econometric formulation of the difference-in-differences (DID) estimation framework, which exploits a natural experiment to examine how bike-sharing trips changed with the introduction of the first Covid-19 case and the first executive order. The DID estimation method is a suitable technique in our context, where randomization at the city level is not possible. DID requires panel data, which is part of the fixed-effects strategy, to capture the differences in post-treatment periods across the treatment and control groups [25]. One benefit of the DID model is that it allows us to avoid "the endogeneity problems that typically arise when making comparisons between heterogeneous individuals" [26]. We also consider how the frequency of bike-sharing platform use can differ on weekdays compared to weekends due to the changing travel patterns during the pandemic. In general, weekday travel is primarily made up of commuting to and from work, whereas weekend travel behavior is motivated by recreational activities. Differences in activity types can be hypothesized to lead to different travel patterns on weekends and holidays compared to weekdays [22,27,28]. For instance, Agarwal [27] suggests that there is a decrease in vehicle trips on weekends compared to weekdays at the household level. Kim et al. [28] find different weekend and weekday bike-sharing patterns. Their results point out that there is an increase in bike-sharing traffic volume on weekends at stations near parks and schools, which can be due to the rise in leisure and school activities on weekends. In contrast, residential areas and subway stations are found to have less bike-sharing traffic volume on weekends than on weekdays. Li et al. [22] find that there was a decrease in the number of micro-mobility service trips on weekdays during the lockdown period in Zurich, whereas there were only slight changes on weekends compared to before the lockdown period. In addition, we investigate which factors strengthen or weaken the impact of the pandemic on the frequency of bike-sharing use. Transportation infrastructure, land use, and neighborhood attributes contribute to individuals' preference for bike-sharing [29].
Several studies examine the effects of the built environment, cycling facilities, transit proximity, and transportation facility features on bike-sharing frequency [30-32]. The findings are consistent: more bicycle facilities and greater transit proximity lead to greater use of bike-sharing. Recent studies [33,34] also find that better biking infrastructure is linked to higher bike-sharing demand during the Covid-19 pandemic. For instance, according to Bergantino et al. [33], safer cycling conditions and the creation of dedicated infrastructure encourage individuals to use bike-sharing platforms during the pandemic. Therefore, we also test the heterogeneous effects depending on such factors as a city's bike-friendliness, transit-friendliness, and pedestrian-friendliness. The remainder of the paper is organized as follows. The next section introduces the data collection, variable definitions, and research method. Section 3 presents the main results. Section 4 shows the heterogeneous effects of bike-friendliness, transit-friendliness, and pedestrian-friendliness. Finally, Section 5 concludes with a discussion of the findings.

Data and variables

We collected data from multiple sources. First, we collected the historical daily trip data available to the public from bike-sharing programs in Austin, Boston, Chicago, Columbus, Minneapolis, New York, Philadelphia, Pittsburgh, Portland, San Francisco, and Washington, D.C. The daily trip data include trip duration, start time, end time, starting station, ending station, and subscription type (i.e., member, single rider). Based on these data, we compute our dependent variable, the daily trip frequency of bike-sharing platform trips (TripFrequency_ij), calculated as the total daily trips of bike-sharing platform i at time j. During the construction of this variable, we excluded bike-sharing platform trips with a duration of two minutes or less, as there might have been an issue while renting the bike (i.e., the bike was in bad condition). We also excluded bike-sharing platform trips of forty-five minutes or more, as these trips are more likely to represent leisure and recreational trips. A single ride for subscribers of these services includes forty-five minutes of ride time.
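As an illustration of this filtering-and-aggregation step, the sketch below builds a daily trip-frequency panel from raw trip records with pandas; the column names and the toy records are assumptions for illustration, not the schema of any specific bike-share feed.

```python
import pandas as pd

# Toy trip records; real feeds carry similar fields (column names assumed).
trips = pd.DataFrame({
    "city": ["Boston", "Boston", "Chicago", "Chicago"],
    "start_time": pd.to_datetime(["2020-03-01 08:00", "2020-03-01 09:00",
                                  "2020-03-01 10:00", "2020-03-02 11:00"]),
    "duration_min": [1.5, 20.0, 50.0, 30.0],
})

# Drop likely-faulty rentals (<= 2 min) and likely-leisure rides (>= 45 min),
# mirroring the exclusions described above.
clean = trips[(trips["duration_min"] > 2) & (trips["duration_min"] < 45)]

# Count daily trips per city-day: the dependent variable TripFrequency_ij.
panel = (clean.assign(day=clean["start_time"].dt.date)
              .groupby(["city", "day"])
              .size()
              .rename("trip_frequency")
              .reset_index())
print(panel)
```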
Second, we collected data from various online sources to construct our treatment measures, FirstCase_ij and FirstExecutiveOrder_ij. Our first treatment variable is the first Covid-19 case diagnosis (FirstCase_ij), which is coded as 1 to indicate that the first Covid-19 case was identified in city i as of day j', such that j ≥ j'. The data on Covid-19 cases come from The New York Times [35], which is based on reports from state and local health agencies. Our second treatment variable is FirstExecutiveOrder_ij, which is coded as 1 to indicate that an executive order was issued in city i as of day j', such that j ≥ j'. Specifically, we refer to the first executive action taken by the state governments against the Covid-19 pandemic, namely the stay-at-home advisories announced by the state governments. While stay-at-home advisories were lifted before August 2020, restrictions continued in most cities in various forms. For instance, Minnesota's stay-at-home advisory expired on May 18, 2020; however, it was replaced with a "stay safe Minnesota" order. Moreover, the state extended the state of emergency by another 30 days. Therefore, as the restrictions were still in effect, cities did not fully reopen before August 2020. Consequently, we used the entire period after the first executive order implementation. Table 1 lists the start date for each of our treatment variables in each city in our study. To construct the control variables, we collected weather data from the National Oceanic and Atmospheric Administration (NOAA). Control variables are included in the model as pre-treatment variables, as weather conditions can have different impacts on the demand for bike-sharing trips. Evidence from empirical studies [36-39] indicates that favorable weather conditions, such as higher temperatures, increase bike-sharing platform usage, whereas unfavorable conditions, such as precipitation and strong winds, decrease such use. For example, Gebhart and Noland [36] suggest that cold temperatures, rain, and high humidity levels are likely to reduce the demand for bike-sharing platform trips in Washington, DC, whereas high temperatures are linked to an increased number of such trips. Similarly, the findings of Morton [37] point out that higher temperatures are associated with higher demand rates, whereas heavy precipitation, high wind speed, and relative humidity are negatively associated with demand for the London bike-sharing system. Consistent with previous studies, An et al. [39] find that there is higher demand for the CitiBike bike-sharing platform in NYC in good weather, characterized by favorable temperatures and a lack of wind, humidity, and rain. On the other hand, El-Assi et al. [31] show that weather conditions such as precipitation and high humidity decrease demand for the Toronto bike-sharing system. Based on this background evidence, we first control for the following weather-related variables: 1) Temperature_ij, a measure of the average temperature for day j in city i in Fahrenheit (°F); 2) Wind_ij, a measure of the average wind speed for day j in city i in knots; 3) Snow_ij, a measure of snow depth for day j in city i in inches; 4) Rain_ij, a measure of total precipitation for day j in city i in inches; and 5) Humidity_ij, a measure of the average dew point for day j in city i in Fahrenheit (°F). Furthermore, we collected data on the socio-economic characteristics of the cities from the U.S. Census Bureau. We include the population (Population_ij), median income (Income_ij), the size of the elderly population (Elderly_ij), the number of households with two cars (Vehicle_ij), and the number of people commuting to work by bike (Commute_ij). When combined, we end up with a panel data set that comprises eleven cities spanning January 2019 through July 2020. To make the interpretation of the socio-economic characteristics easier when reading the linear regression results, we include their log-transformed values in our analyses. Following prior literature, we keep the weather-related variables in their original form [36,37,39]. Tables 2 and 3 present the summary statistics and the correlations of the critical variables, respectively.

Model-free evidence

Before we introduce our model specification, we present visual model-free evidence of the role of the first Covid-19 case diagnosis and the first executive order implementation on the use of bike-sharing platforms in Figs 1 and 2. It is worth noting that Figs 1 and 2 do not account for time-fixed effects.
In Fig 1, we plot the daily trip frequency 60 days before and after the first reported Covid-19 case in each of the eleven cities (excluding Minneapolis, MN, as its bike-sharing systems do not report daily trip data between December and March). The solid vertical line represents the first Covid-19 diagnosis. The dashed horizontal lines represent the average daily bike-sharing trip frequency before and after the first Covid-19 diagnosis. In Fig 1, we observe that the daily bike trip frequency decreases in most cities following their first reported Covid-19 case, except for Boston and Chicago, which had cold winters and were beginning to warm up in the ensuing weeks. In Fig 2, we plot the daily bike trip frequency 60 days before and after the first executive order implementation across the cities. We observe a similar trend in Fig 2: across all cities, the daily trip frequency declined in the days immediately after the first executive orders were implemented. Similar to the plots in Figs 1 and 2, we plot the difference in bike-sharing trip frequency before and after the first Covid-19 case diagnosis and the first executive order implementation by weekday and weekend (see Figs A1 and A2 in S1 Appendix for more details). In Fig A1 in S1 Appendix, we show the weekday bike trip frequencies 60 days before and after the first Covid-19 case reported in each city. We notice a decrease in trip frequency on weekdays following the first reported Covid-19 case in most cities, with a few exceptions, such as Boston, in which the average daily trip frequency was similar before and after the first reported case. In Fig A2 in S1 Appendix, we show the weekend bike trip frequencies 60 days before and after the first reported Covid-19 case. When we look at the changes in weekend trip frequency, we see opposing results suggesting an increase in daily trip frequency in some cities, such as Philadelphia and Pittsburgh. Moreover, we also notice a similar trend in the daily weekday and weekend trip frequency after the first executive order implementation across the cities in this study (see Figs A3 and A4 in S1 Appendix). Finally, Fig A5 in S1 Appendix shows the bike-sharing seasonal trend of February-June 2019 (pre-Covid) compared to February-June 2020 (post-Covid) for each city in this study. Relative to the patterns observed in 2019, we see a short-term decrease in bike-sharing trip frequency following the pandemic's start (towards the end of the first quarter of 2020). These plots provide further model-free evidence of the changes in the use of the bike-sharing system due to Covid-19. Figs 3 and 4 show dumbbell charts that compare the average daily bike-sharing trip frequency before and after each of our two treatments (the first Covid-19 case diagnosis and the first executive order implementation) over the entire period of study. We use the log transformation of trip frequencies for better visualization. Generally, we see a short-term decline in bike-sharing frequency after the first reported infections and the first executive order implementation within the same U.S. cities. The blue point represents the log average daily trip frequency for the period before the treatment occurred, and the red point represents the log average daily trip frequency in the period after the treatment occurred.
Fig 3 shows that the average daily bike-sharing trip frequency decreases after the first Covid-19 case diagnosis, except in a few cities such as New York, Philadelphia, and Chicago, in which the average daily bike-sharing trips did not change significantly, and in Columbus, in which we see an apparent increase. Fig 4 also shows that the frequency of average daily bike-sharing trips decreases upon the first executive order implementation, again with the aforementioned exceptions. However, it is essential to note that these plots do not consider the cities' weather conditions and socio-economic characteristics. Therefore, in the following subsection, we propose a statistical model to evaluate how bike-sharing frequency changed following the Covid-19 pandemic, accounting for weather conditions and socio-economic characteristics.

Model specification

The model-free evidence shows that the frequency of trips on bike-sharing platforms generally decreased in U.S. cities following the first Covid-19 cases and the first executive order implementation. However, as mentioned earlier, model-free evidence does not account for many factors that could influence the trip frequency for bike-sharing and the spread of the virus; indeed, we may observe a decrease in trip frequency in some cities and an increase in others. Such factors therefore need to be incorporated into the statistical analysis to recover the pandemic's actual effect on bike-sharing trip frequency. Thus, we use a fixed-effects econometric formulation of the DID estimation framework to examine how the Covid-19 pandemic affected the frequency of bike-sharing trips. The primary benefit of this estimation model is that we can mimic an experimental design using observational data. This method compares the differences in bike-sharing trip frequency in treated cities before and after the treatment event (the onset of the pandemic) to the differences in the untreated cities (i.e., those cities yet to report a coronavirus case or to implement a first executive order). The longitudinal nature of the data allows us to use the yet-untreated observations in the data as controls for the treated observations, that is, those cities that have yet to record a first Covid-19 case or a Covid-related executive order. To facilitate estimation, we use the fixed-effects regression formulation of the DID model, a formulation described in [25], as follows:

ln(TripFrequency)_ij = β0 + β1 Treatment_ij + γ'W_ij + μ_i + θ_j + ε_ij    (1)

where ln(TripFrequency)_ij is the log-transformed value of our dependent variable in city i during day j. Treatment_ij refers to the treatment variables in city i during day j: FirstCase_ij or FirstExecutiveOrder_ij. They are applied in different cities at different times. To control for existing time-invariant differences among the heterogeneous geographical locations, i.e., cities, we included city-fixed effects, μ_i, in our model. In addition, we included time-fixed effects, θ_j, to control for common temporal shocks. This allows for non-linear time-varying effects in the DID model. W_ij is the set of control variables, which includes ln(Population), ln(Income), ln(Elderly), ln(Vehicle), ln(Commute), Temperature, Rain, Snow, Wind, and Humidity. Finally, ε_ij is the error term.
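A minimal sketch of this estimation in Python, using the statsmodels formula API with city and day dummies standing in for μ_i and θ_j; the toy panel, its column names, and the staggered treatment dates are assumptions for illustration, and only one control variable is shown.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Toy city-day panel (names assumed): treatment switches on at a
# different date in each city, as in the paper's staggered setting.
rng = np.random.default_rng(0)
days = pd.date_range("2020-01-01", periods=90)
start = {"A": "2020-02-01", "B": "2020-02-15",
         "C": "2020-03-01", "D": "2020-03-15"}
df = pd.DataFrame([{"city": c, "day": str(d.date()),
                    "treat": int(d >= pd.Timestamp(s)),
                    "temperature": rng.normal(50.0, 10.0)}
                   for c, s in start.items() for d in days])
df["ln_trips"] = (5.0 + 0.2 * df["treat"] + 0.01 * df["temperature"]
                  + rng.normal(0.0, 0.1, len(df)))

# Two-way fixed-effects DID: C(city) and C(day) absorb mu_i and theta_j.
fit = smf.ols("ln_trips ~ treat + temperature + C(city) + C(day)", df).fit()
beta = fit.params["treat"]

# Semi-elasticity interpretation used in the paper: [exp(beta) - 1] * 100.
print(f"beta = {beta:.3f} -> {100 * (np.exp(beta) - 1):.1f}% change in trips")
```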
Table 4 reports the coefficient estimates of Eq (1) for the dependent variable ln(TripFrequency). As shown in Column (1), we estimate an increase in the log of bike-sharing platform trip frequency of 0.196 on average across the eleven cities after the first Covid-19 case, adjusted for covariates. An economic interpretation of this result suggests an average adjusted increase in the number of daily bike-sharing trips of 22% (rounded from [exp(0.196)-1]*100 = 21.65%). On the other hand, from Column (2), we observe a decrease in the log of bike-sharing platform trip frequency of 0.353 after the stay-at-home order implementation. Economically, this result suggests a reduction of 30% (rounded from [exp(-0.353)-1]*100 = -29.74%) in the number of daily bike-sharing trips. We also further examine the robustness of our model to temporal trends using a relative time model (see Fig A6 and Fig A7 in S1 Appendix for more details). Furthermore, we divide our dataset into three panels to compare weekday, weekend, and bank holiday travel behavior. We include the bank holidays (e.g., New Year's Day, Martin Luther King Jr. Day, and Independence Day) observed by the Federal Reserve System. The results are given in Tables 5 and 6. We estimate a 22% (rounded from [exp(0.197)-1]*100 = 21.77%) increase in trip frequency on average across cities during weekdays upon the first Covid-19 case (see Column 1 in Table 5), whereas our results do not suggest a comparable effect on the weekends (see Column 2 in Table 5) or on the bank holidays (see Column 3 in Table 5). Table 6 shows the results when the treatment variable is FirstExecutiveOrder, for which we observe opposing results. We estimate a statistically significant decrease in trip frequency of 28% (rounded from [exp(-0.323)-1]*100 = -27.60%) on the weekends (see Column 2 in Table 6), whereas we find no evidence of the same effect during the weekdays (see Column 1 in Table 6). However, we do not observe any effect for the bank holiday data panel, as the variable in question is omitted by the regression (see Column 3 in Table 6). The reason is that any variable that is constant within every unit is redundant in a fixed-effects model and will be omitted. Given the launch dates of the first executive orders, Memorial Day 2020 and Independence Day 2020 are treated the same for each unit; our treatment variable therefore becomes constant within each city and does not create any variation. These results generally show the differences in residents' travel behavior between weekdays, weekends, and bank holidays. After the first Covid-19 diagnosis, individuals might have started using bike-sharing platforms as an alternative to other modes of transportation on weekdays, especially for journeys to and from work. However, with the first executive order implementation, on average, individuals might have tended to stay inside rather than go out. We also ran the analysis on the daily frequency of trips shorter than thirty minutes, as a single ride for non-subscribers includes thirty minutes of ride time. The results are consistent.

Heterogeneity analyses

While our empirical estimations thus far suggest a significant impact of the Covid-19 pandemic on the frequency of bike-sharing platform trips, it is worth examining the factors that might amplify the strength of the effect. Prior literature [29-31] suggests that transportation infrastructure, land use, the built environment, and neighborhood attributes contribute to individuals' preference for bike-sharing systems. One crucial factor that can moderate the impact of Covid-19 on bike-sharing platforms' trip frequency is the pre-existing biking infrastructure.
First, in cities with more bike lanes, longer bike routes, fewer hills, higher road connectivity, and bicycle-aware traffic, bike-sharing platforms are more likely to be adopted by individuals and used as an alternative transportation mode. Second, in walkable cities with better access to amenities, residents might embrace these platforms more because of easy and comfortable access to bike stations. Lastly, in cities with good access to public transit, bike-sharing platforms might be used more by residents owing to the better connectivity of the transit network. Therefore, we test the heterogeneous effects depending on such factors as a city's bike-friendliness, transit-friendliness, and pedestrian-friendliness. We collected data from Walk Score [40] to measure 1) bike-friendliness (Bike Score) [41], which measures the built environment's ability to support biking in a given location; 2) pedestrian-friendliness (Walk Score) [41], which measures the walkability of any address by analyzing the walking routes to nearby amenities within a 5-minute walk; and 3) transit-friendliness (Transit Score), which measures how well a location is served by public transit [41]. These measures range from 0 to 100 and divide cities into different groups. Based on the classifications of Bike Score, Walk Score, and Transit Score, the cities in our dataset scored less than 90 on all measures. Detailed information on the groups and descriptive statistics of the scores are given in Tables 7 and 8, respectively. Then, we re-estimate Eq (1) incorporating interaction terms for these classifications with the treatment. The new equation including the interaction terms is given below. Note that, as the moderators are static, fixed-effects panel regressions do not yield estimates for β2.

ln(TripFrequency)_ij = β0 + β1 Treatment_ij + β2 Moderator_i + β3 (Treatment_ij × Moderator_i) + γ'W_ij + μ_i + θ_j + ε_ij    (2)

The results are given in Table 9.
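Continuing the earlier sketch, the moderated specification can be written in the statsmodels formula API as an interaction between the treatment and a static city classification; the `bike_group` column and its mapping are hypothetical stand-ins for the Bike Score classes.

```python
# Attach an assumed, time-invariant city classification (a hypothetical
# stand-in for the Bike Score groups) to the toy panel df from above.
df["bike_group"] = df["city"].map({"A": "very bikeable", "B": "bikeable",
                                   "C": "very bikeable", "D": "bikeable"})

# The group's main effect (beta_2) is collinear with C(city) and thus not
# estimable, so only the treatment interaction (beta_3) is included.
het = smf.ols("ln_trips ~ treat + treat:C(bike_group) + temperature"
              " + C(city) + C(day)", df).fit()
print(het.params.filter(like="treat"))
```

This mirrors the note above: because the moderator never varies within a city, its level is absorbed by the city fixed effects, and only the interaction coefficients are identified.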
Surprisingly, these findings suggest interesting differences. First, we see that the impact of the first Covid-19 case and the first executive order implementation on bike-sharing platforms' trip frequency is more substantial in bikeable cities. We estimate that the effect of the first Covid-19 case on Trip Frequency is approximately 90% (rounded from [exp(0.640)-1]*100 = 89.64%) higher in "very bikeable" cities than in "bikeable" cities (see Table 9, Column 1). We observe that the impact of the first executive order implementation on Trip Frequency is approximately 103% (rounded from [exp(0.708)-1]*100 = 102.99%) higher in "very bikeable" cities than in "bikeable" cities (see Table 9, Column 4). Following the first Covid-19 case and the first executive order implementation, we estimate a decrease in Trip Frequency that is respectively greater by approximately 33% (rounded from [exp(-0.405)-1]*100 = -33.30%) (see Table 9, Column 1) and 30% (rounded from [exp(-0.356)-1]*100 = -29.95%) (see Table 9, Column 4) in "somewhat bikeable" as compared to "bikeable" cities. These results might suggest that residents in bike-friendly cities embrace these platforms more because of better pre-existing biking infrastructure. With the safe and comfortable biking afforded by good infrastructure, residents are more likely to use bike-sharing platforms for commuting and recreational purposes. Moreover, we observe a more substantial impact on Trip Frequency in very walkable cities upon both the first Covid-19 case (see Table 9, Column 2) and the stay-at-home advisory implementation (see Table 9, Column 5). In pedestrian-friendly cities, residents might embrace these platforms more as a result of easy and comfortable access to bike stations by walking. Similarly, we observe a more substantial impact on Trip Frequency in cities with excellent transit upon both the first Covid-19 case (see Table 9, Column 3) and the first executive order implementation (see Table 9, Column 6). Lastly, unlike the "somewhat bikeable" classification of cities, we do not observe that car dependence or having some transit (compared to good transit) has any moderating influence on the effect of the first Covid-19 case (see Table 9, Column 3) or the first executive order implementation (see Table 9, Column 6) on the use of bike-sharing platforms.

Discussion and conclusion

We used a DID framework formulated as a fixed-effects regression model to examine how bike-sharing trip frequency in the United States changed with the onset of the Covid-19 pandemic. We modeled the first reported Covid-19 cases and the implementation of the first executive order in each municipality as two treatment events. We also accounted for socio-economic factors, weather, and fixed effects for each day and city. First, our results indicate that, on average, the first reported Covid-19 cases had a positive and statistically significant effect on the frequency of bike trips in U.S. cities. This could be explained by the fact that the first reported Covid-19 case in a U.S. city heightened individuals' sensitivity to cleanliness and social distancing. Individuals were therefore compelled to change their travel behavior and look for alternative systems of mobility that might offer more resilient urban transportation. Bike-sharing platforms offer alternative transportation that avoids crowds in the cities. Second, we observe that the first executive order advisories had a negative and statistically significant effect on the frequency of bike trips in U.S. cities. This could be explained by the fact that lockdown restrictions and working from home led to a decline in commuting bike trips and in other modes of transportation such as public transit. We also examined sources of heterogeneity in the effect of the pandemic on bike-sharing use. We compared how bike-sharing use changed between weekends and weekdays with the onset of the pandemic. We observed an increase in weekday-specific trip frequency as a result of the first Covid-19 case diagnosis and a decrease in weekend-specific trip frequency due to the first executive order implementation. We also tested for heterogeneous impacts across a set of city-level characteristics. We found that there is a greater increase in the frequency of bike-sharing trips in more bike-friendly, transit-friendly, and pedestrian-friendly cities upon both the first Covid-19 case diagnosis and the first executive order implementation. We might conclude that bike-sharing platforms have an essential role in individuals' travel behavior, especially in cities with better bike and transit infrastructure. These platforms are perceived by individuals as a sustainable and resilient transportation option in light of the unprecedented consequences of the Covid-19 pandemic. Bike-sharing platforms offer a sustainable and active mode of transportation, and hence it is important to better understand the factors that affect their adoption by the populace. The Covid-19 pandemic represents an opportunity for cities to embrace new paradigms for urban mobility.
Bike-sharing platforms represent one way in which cities can provide a resilient and adaptive transport network to face the potential challenges of disruptive events like the Covid-19 pandemic. The pandemic has already highlighted the importance of rethinking the design of urban transit for greater resilience to such disruptive events. Cities may consider how to encourage greater use of bike-sharing platforms. Decisions by city authorities, such as offering free or reduced-price memberships, could break down barriers to the adoption of bike-sharing. With the support of local authorities in creating more bike lanes and improving accessibility to public spaces, bike-sharing platforms can attract more individuals. Proper incentives followed by infrastructure adjustments could ensure that individuals become accustomed to bike-sharing platforms and continue to use them even after the pandemic. For instance, in New York, city officials are already planning to expand Citi Bike and add more docking stations in its busiest areas [12]. Investing in bike-sharing platforms and cycling infrastructure could lead to an increase in memberships because individuals' willingness to bike is closely linked to how safe they feel [42]. It is important to note that our research is subject to several limitations. First, the adjusted R2 values of the models are low, ranging from 0.18 to 0.26. Low values might indicate that the models are not suitable for predictive modeling of the outcome variables. Hence, the aims of our model interpretations are limited to assessing the direction and significance of coefficient estimates for causal inference. Second, the main challenge with DID estimation is to ensure that there are no differential pre-treatment trends in the absence of treatment [25]. Violation of this assumption can lead to biased causal estimates. Although there is no definitive statistical test for this assumption, we examine the robustness of our model to temporal trends using a relative time model. Our findings suggest that there are no pre-existing trends in bike-sharing demand across the cities that experience the first Covid-19 diagnosis (see Fig A6 in S1 Appendix). As seen in Fig A7 in S1 Appendix, we observe that pre-treatment trends do exist in bike-sharing demand across the cities experiencing the first executive order implementation. Future research could explore different estimators to overcome this challenge. A few recent papers focus on different ways to relax this assumption [43-45]. They propose alternative estimators for when the parallel trends assumption is violated and examine the robustness of results to potential violations of parallel trends. Third, our model only examines the short-term effect of the Covid-19 pandemic on bike-sharing demand due to the data collection period. In the future, depending on data availability from the bike-sharing providers, the effects could be examined over longer periods. Fourth, we use the first executive order implementation and the first Covid-19 case as proxies for the Covid-19 pandemic; however, we should keep in mind that the first executive order implementation might be correlated with the first Covid-19 case. This might also be one of the reasons explaining the pre-treatment trend in Fig A7 in S1 Appendix. Lastly, our reported results are based upon the actual usage of bike-sharing platforms, along with public transit data, for eleven cities in the U.S.
Despite the lack of data available for other U.S. cities, we believe that the exogenous nature of the Covid-19 pandemic provides robust insights into the relationship between the pandemic and travel behaviors. Given that we are still in the midst of the pandemic, we expect that forthcoming data could reveal more about the pandemic's long-term effects on travel behavior. It is very plausible that the effects observed thus far may serve as a signal of more lasting changes to come in urban travel behaviors.
How Can the BISE Community Promote Tech Startups?

Steffi Haag

Startups around the world raised around USD 329.5 billion in 2021 (Glasner 2022), including 10.5 billion Euros from startups in Berlin (Ernst & Young GmbH 2021), more money than ever before. In the future, startups, especially tech startups that bring novel technology-empowered products or services to the market, are expected to contribute significantly to economic and technological growth and development, thus helping solve some of the world's greatest challenges. Universities play a central role in the startup ecosystem. For the German Startup Monitor 2021 (Kollmann et al. 2021), the German Startups Association surveyed more than 2,000 startups and 5,000 founders; 85% of the startups were tech-related, and 85% of the founders came from the fields of business or science, technology, engineering, and mathematics. The 2021 survey demonstrated that every fourth startup had spun off from a university or research institution, that every second startup cooperates with research institutions, and that more than one-third of all respondents met their co-founders at university (Kollmann et al. 2021). Most startups appreciate this close relationship with universities and the opportunity to make use of several support services, such as consulting services (e.g., ideation, business plan, and financing), support with grant applications (e.g., the EXIST Business Startup Grant), access to networks (e.g., mentors, founders, business angels, incubators), use of rooms (e.g., coworking space), or technical infrastructure services (e.g., computers, machines, laboratories). Figure 1 summarizes the most relevant support services of universities/research institutions. Startups are as important for universities as universities are for startups, because startups bring scientific innovation to practice. Given the support universities currently offer, this discussion strives to explore what universities can do to promote startups and how the Business & Information Systems Engineering (BISE) community can contribute. The objective is to devise strategies that create and leverage synergies between the scientific community and startups in the BISE field. The discussion builds on a panel held at the virtual 17th International Conference on Wirtschaftsinformatik (WI) 2022 that discussed how the BISE community, in addition to universities' efforts, could (better) promote tech startups tackling grand challenges. In particular, the following questions were raised:
• How do you, the BISE community, or your university currently promote startups? How do startups benefit from those activities?
• What could we do better? What can the BISE community specifically offer? What could startups do to improve the relationship?
• How can we leverage synergies between the BISE community, universities, startups, and investors to jointly solve large and complex societal issues?

This discussion brings together the perspectives of researchers, founders, entrepreneurs, and investors from different backgrounds and institutions. The five panelists, Christina Chase, Gesa Miczaika, Kathrin Möslein, Dennis Steininger, and Rüdiger Zarnekow, all have experience in either founding their own startup or promoting startups in the BISE field. Next, each panelist provides their perspective on the questions raised above. In the end, the conclusion summarizes how the BISE community in particular can contribute to promoting tech startups.

Startups Driving the Dissemination of University Research: The Example of MIT

Christina Chase

Startups are important as they help disseminate research coming from labs into real-world applications. The Massachusetts Institute of Technology (MIT), and the broader Boston entrepreneurial ecosystem, does a lot to help students realize their innovative ideas and develop a startup business. To that end, MIT is constantly looking at how it might provide the resources needed to support the new needs of startups as they arise. The institution, and the Martin Trust Center in particular, constantly stays in touch with those who want to develop new ideas and start a business. Supporting "tough tech" startups (e.g., in science, pharma, or robotics) is especially challenging due to the higher technical risk and the longer timelines needed to bring them to market.
Many great ideas fall into the so-called valley of death between research and commercial application (… et al. 2009). For this reason, institutional support, and especially university support, is needed.

[Fig. 1. Which support services offered by universities did German startups use in 2021? The most relevant support services offered by universities/research institutions. Adapted from the German Startup Monitor 2021 (Kollmann et al. 2021).]

The example of MIT shows how these initial obstacles can be overcome and turned into opportunities. One driving factor is the space made available to students to experiment with their ideas, such as Project Manus and ProtoWorks, open makerspaces that give students the ability to prototype and test their ideas. In addition to open makerspaces, MIT has an internal funding process called Sandbox, where students can apply to receive anywhere from $1,000 to $25,000 depending on the stage of their project. This not only reduces the financial risks for the students, but also helps them with the technological risks and requirements. This academic environment to experiment, learn, fail, and pivot before seeking outside investment prepares students while also helping them put their ideas into action and start developing the business around them. Through this process, MIT noticed the challenge founders of "tough tech" startups were having in securing early funding, so it put founding investment into a new type of venture capital (VC) fund called The Engine (A Home for Tough Tech Founders 2016). The fund focuses on "tough tech" to support the difficult step of launching startups that have high technical risk and much longer time horizons to bring their products to market, providing access to capital, industry knowledge, and equipment for early-stage companies. One of the core pillars of the MIT ecosystem is the Martin Trust Center for MIT Entrepreneurship. Students have access to experiential classes and facilities, such as fully outfitted meeting rooms, maker space, and entrepreneurs-in-residence, so they can learn, train their skills, and access expertise at every stage, from concept to company. These are all tools to help students learn about, and potentially become, entrepreneurs and create startups. Through these resources, MIT hopes to bridge the gap from the first scientific work to getting this work out of the lab and into the real world, ultimately putting this research into action.

Encouraging More Students to Become Entrepreneurs

Gesa Miczaika

Germany needs more innovation. One of the main levers is to encourage more students to become entrepreneurs. For the sake of argument, students can be divided into those who are intrinsically motivated to found their own business and those who are not. Intrinsically motivated students are eager to start a business and turn their startup into a larger company. However, a diverse group of founders is important for the success of a startup. Therefore, it is important to also reach non-intrinsically motivated students, to give them the opportunity to start their own businesses and to support existing founders. In other words, incentives should be created for a variety of students to become more involved with entrepreneurship. A crucial factor in getting more students to enter the startup world is early education with regard to entrepreneurship, which should be offered at scale in universities.
To better involve all students in the entrepreneurial environment, it is important to show them more approachable and relatable role models, give them free time and a budget for ideas (e.g., better promote the EXIST fellowship), and connect them with angel investors and founders, especially from other disciplines (e.g., biotech meets business). One example is the climate tech venture builder 1.5°Ventures from Berlin. It was created because the team behind 1.5°Ventures realized that not enough startups were being founded in the carbon offset space. So they went to all the universities and tried to find smart people who were working on solutions. The venture builder tried to get them to become entrepreneurs, but most students are focused on their academic work and frequently want to stay in academia. They often do not see the opportunities in becoming entrepreneurs, but rather the risk encompassed in this endeavor. In the field of carbon offset, however, actually creating a startup is a great opportunity. Students can take the risk because plenty of alternative jobs are available for them if they fail. In Germany, we have a persistent skills shortage. Students should therefore be introduced to the idea of founding a startup as early as possible. Instead of only seeing the traditional career paths of joining an established company or staying in the academic context, students should be motivated to become entrepreneurs. This is also an opportunity for educators and other stakeholders, who can get involved and see what ideas students are working on. In this way, educators can discover the next big thing in the industry and expand their horizons. Another great example from Germany of how to incentivize students to create startups is the Center for Digital Technology and Management (CDTM). The CDTM takes MIT as a role model and has already built an extensive network to implement entrepreneurship within the academic institution and universities. The CDTM brings together interdisciplinary students with creative ideas, great motivation, and an entrepreneurial mindset. The program is located at the intersection of digital technology, management, and entrepreneurship, providing students with the tools to put their ideas into practice (Center for Digital Technology and Management 2015). Students who participate in the program are rewarded with an honorary degree to further promote the program. Within the program, students work intensively with industry partners to focus on hands-on experience. In addition, the center's students and assistants themselves run the center, underscoring its entrepreneurial spirit. Major companies such as Celonis, Personio, Lilium, and others are a product of the CDTM, highlighting the program's success. The CDTM thus shows how German universities can adapt and apply MIT's practices to help students develop their ideas and start their businesses. Once a network such as the CDTM is formed, it will also be attractive for venture capitalists (VCs) to participate. Large events can be planned where investors and students with their startup ideas can meet, share interests, network, and collaborate. This idea is consistent with practices already being implemented at MIT and other universities. However, there is a need for a general rollout of such tools and networks across German universities.

Kathrin Möslein

A general question is how BISE can contribute to more entrepreneurship among students.
In addition to general education in the discipline, a contribution would be to focus on grand challenges and push the boundaries of traditional fields. Grand challenges can be defined as those that cannot be addressed by a single discipline but are interdisciplinary and require teams contributing diverse skills. Why do I think the BISE field, with its roots in computer science and business management, is uniquely suited to address grand challenges? The reason is simple: the BISE discipline not only draws its strength from its own interdisciplinarity encompassing these two root disciplines, but it also has a boundary-spanning function to many other fields and disciplines needed to address grand challenges. By their very nature, BISE scholars do much to encourage students to become entrepreneurs, intrapreneurs, and changemakers by building bridges across boundaries. BISE students, scholars, and staff cross boundaries between business schools and schools of engineering or computer science. Bridging also goes far beyond the academic associations of our own and our sister disciplines. BISE is perhaps best placed to do this because it is itself a boundary-spanning discipline. By building strong bridges and crossing boundaries, universities can bring together entrepreneurial students and faculties from a variety of departments and disciplines. This requires individuals to cross boundaries and actively connect to people from different spheres – so-called boundary spanners. In addition, boundary spaces provide room to meet, build bridges, and create shared understanding. These places and spaces can help bring people together to facilitate conversation and collaboration. Another approach to bridging disciplines in academia is boundary concepts, such as those implemented in boundary-spanning programs. One example is FAU's Digital Tech Academy, which brings together students and young researchers from different levels and across all faculties, and which really fosters entrepreneurship and venture creation. The interdisciplinary talent program provides an overarching hub for digitization and entrepreneurship at FAU and supports FAU's efforts to scale and professionalize digital entrepreneurship by integrating existing curricular offerings and novel formats, such as FAU's spin-off services. Finally, to stimulate cross-disciplinary solutions, we need to formulate problems, tasks, or challenges that require multidisciplinary input. This brings us back to grand challenges. They are inherently boundary-spanning. Often, grand challenges are not attractive to academics who focus on challenges within their discipline and enjoy writing for the journals of their discipline. Recognizing work on grand challenges can help overcome this double hurdle and encourage cross-disciplinary communication and collaboration. Within universities and academia as a whole, grand challenges can therefore help to motivate, mobilize, and implement cross-disciplinary efforts. Academics at all levels have good reasons to engage in these efforts, because doing so benefits not only the academic environment but ultimately the individual who engages. It is in the nature of knowledge creation that new fields emerge. Statistics about how many tenure positions will be available for junior faculty in well-defined traditional disciplines have never been very reliable guides to careers: as people specialized in fields of the past, new fields emerged. This was no different in the early days of IS and BISE.
The future growth of our field was underestimated by many. The same is true of many emerging fields and disciplines, which often begin by bridging the known and exploring the unknown. Students, scholars, and staff are therefore well advised to start with an open mind and to challenge traditional disciplinary boxes and boundaries. At FAU, we encourage boundary spanning in research, teaching, learning, entrepreneurship, and innovation. In research, German Research Foundation (DFG) funding focuses on clusters and large collaborative research centers to foster boundary-crossing efforts. In education, not only do students and employers constantly challenge traditional curricula, but academics naturally develop their fields and educational offerings. In entrepreneurship and innovation, innovation labs, incubators, and accelerators provide boundary spaces for cross-border activities. At FAU, the Open Innovation Lab JOSEPHS® and the tech incubator ZOLLHOF are examples of platforms that act as boundary spaces for innovators and entrepreneurial minds. Sabbaticals for academics and exchanges for students and staff are also strongly supported to foster innovation and experimentation off the beaten path. How does this fit into typical academic incentive and recognition systems? At FAU, we place equal emphasis on four strategic fields of action for academia. In addition to education and research, issues of leadership and innovation are addressed in the strategic fields of people and outreach. These four pillars (PEOPLE, EDUCATION, RESEARCH, OUTREACH) are reflected in recruiting processes, target agreements, faculty development, and, of course, in university management as a whole. Among other things, this ensures that FAU is and remains an innovation leader. It also helps to implement the so-called academic decathlon, which was introduced by Peter Mertens (Wiener et al. 2018). He compares the decathlon in sports, with its different sub-disciplines, to the different sub-disciplines in academia. To become a great role model and leader in academia, every scholar needs to be aware of, address, and ideally master each of the different sub-disciplines. Setting one's own priorities allows for profile building as well as individual success within and across institutions.

Dennis Steininger

A notable grand challenge is finding jobs for people and developing new jobs for people. This is a great motivation for the BISE community to help students become entrepreneurs and create startups. But what kind of startups are best suited to create jobs, and is the BISE community suited to support them? A comparison of different types of startups shows that Information and Communication Technology (ICT) startups create the most jobs, even compared to high-tech startups (Hathaway 2013). Another aspect: when comparing European and American industries, many European companies are static, whereas American companies tend to shrink or grow rapidly (Bravo-Biosca et al. 2016). In other words, startups are a great opportunity for European industry to create jobs, and ICT startups are probably best at it. But what aspects are important for a startup to grow and succeed? Research shows that the combination of business and technology education in startup teams leads to the strong growth that startups usually desire.
In a recent, not yet published study, preliminary results show that if the CTO or CIO of a startup has an educational background in which he or she studied IT and business simultaneously, success rates are significantly higher (Sassonko et al. 2021). Success was measured by analysing funding data on Crunchbase and social media. As a community, the BISE discipline also addresses many elements critical for startups in its research. For example, we look at the successful development of artifacts and how they can create business value, and we try to understand how and why users adopt a particular technology. We also look at business models, markets, platforms, and technologies, and at how we can enter or scale such markets with these technologies (Steininger 2019; Veit et al. 2014). All of these are critical dimensions of success for tech startups. The BISE community and its education are therefore fundamental to developing successful startups and thus increasing job growth in a digital world. The BISE community combines technical knowledge with business understanding and analyses how technology impacts the business side and vice versa. Not only does technology itself impact the business side, but many principles taught in technical courses, such as agile software development, Scrum, and design thinking, can be transferred perfectly to the business side. Lean Startup, for example, can be seen as agile software development applied to startup business model development. Many other principles from business administration can also be transferred to computer science. This allows students to experiment more, which in turn reduces the perceived uncertainty of pursuing a startup idea (Bocken & Snihur 2020). Reducing perceived uncertainty and risk is particularly important for the German market, where risk aversion is higher. At the same time, it is crucial that we make our students aware of entrepreneurship as a career path. Many students think that a corporate or consulting career is the perfect path, but this is not true for everyone, and we know from the literature that entrepreneurs show particularly high job satisfaction. Therefore, it is important to teach the use of technology not only in established companies but also in startups, and how startups can benefit from technology. In my opinion, startup courses should generally be integrated into university curricula, and startup work during studies should be rewarded with credits and beyond. My argument is that many students have not even thought about becoming entrepreneurs because they have not met interesting entrepreneurial role models and their passion has not been ignited. Introducing credits for entrepreneurial courses will lead more students to consider becoming entrepreneurs. And even those students who do not start their own businesses can use many of the skills they learn to work as consultants or in a business. In addition, courses should provide more information about role models who began as startup founders and later became academics. This would show students that an interdisciplinary career path is possible and desirable. Regarding the boundary spanning discussed earlier, there are still challenges on an interdisciplinary career path, and it is not always easy for a university or a professor to cross these boundaries. In my own career, I went from being an entrepreneur in practice, to an IS researcher in academia, to a professor of entrepreneurship.
The job description was looking for someone who could cover digital and entrepreneurship topics, which was a perfect fit for me. But crossing boundaries within one university can already be challenging. When students from different disciplines and faculties are granted access to a course, there is less room for students from one's own discipline or faculty. This is a trade-off between spanning boundaries, with its benefits for all participating students, and fulfilling the needs and requirements of the students within one's own faculty. However, it is really helpful for students from different disciplines to come together and find the right people for their startups. In my courses, I have found that combining business and technology students creates friction, but also the most promising startup ideas with thriving teams. Implementing such formats promotes spill-overs, and we regularly see such teams continue after the course is over and credits are awarded. Integrating final student pitches with local investors further fosters such spill-overs and anchors startups in a region's entrepreneurial ecosystem. In summary, the BISE community and academics in the field excel at educating students to become entrepreneurs. This, in turn, helps address the grand challenge of creating jobs and growing the economy. While it is a perfect opportunity to see students from different disciplines come together and collaborate, the boundary-spanning implications also present challenges.

Rüdiger Zarnekow

One can approach the general question of how to assist students in becoming entrepreneurs from two different angles: on the one hand, from the general perspective of a university; on the other hand, from the individual perspective of researchers and educators. Both perspectives are relevant to the question and should be elaborated. At the Technical University of Berlin, for example, there has been a strong focus on entrepreneurship and startups for many years. Students and researchers have been encouraged to pursue their own ideas and to become entrepreneurs. This is most likely true not only for the universities in Berlin, but for all universities mentioned in this discussion paper. Much progress has been made in general support for students over the last decades. The incubator at the Technical University of Berlin, for example, supports startup teams throughout the whole incubation process. It helps them develop their ideas and business models, access funding, and get ready for success in the market. There is also a large network of experts that teams can draw on for help. In general, support at the university level has improved a lot. However, there could be an even stronger focus on entrepreneurship in teaching. The focus should be on deep tech, as there is great potential for innovation in this area. On the other hand, there is still a long way to go at the individual level. Many professors still see themselves as pure researchers, and their interest in startups and entrepreneurship is limited. This needs to change, because supporting and promoting startups requires commitment and dedication at the individual level. From an individual perspective, prioritizing resources is the challenge here. Mentoring and supporting startup teams requires the investment of time and resources. To make this investment worthwhile, the goals of a researcher need to be better aligned with the goals of a startup.
In detail: writing a research proposal might be rewarded with a research grant and third-party funds, which in turn foster the researcher's career. When writing a research paper, the reward is the publication of the paper. However, when time and resources are invested in mentoring a startup, this mostly rests on the personal motivation of the researcher; there are no specific rewards per se. In order to foster the support of startups, new incentives should be created. In this way, personal commitment can be increased and personal goals can be better aligned with the startup's goals. How to accomplish this is still an open question that needs further study. It has already been mentioned in previous statements that there is a particular need to better promote deep tech innovations. For example, a makerspace was opened at the Technical University of Berlin, similar to the makerspaces at MIT. A large area with lab space and tools was made available for the teams in the incubator. Providing this infrastructure helps students experiment and develop their business ideas. At TU Berlin, an Investor's Club has also been established, inviting potential investors interested in investing in deep tech. This group of business angels and early-seed VCs is invited to the university several times a year. Startups receive support in preparing to pitch and are given the opportunity to connect with these investors. Although these are welcome developments, educators and researchers should become more personally involved in the entrepreneurship process to support their students as early as possible.

Strategies Leveraging Synergies Between the BISE Community and Tech Startups

Steffi Haag

This paper brings together the perspectives of researchers, founders, entrepreneurs, and investors from different backgrounds and institutions in order to discuss how the BISE community is currently promoting tech startups and how we can improve our efforts. To summarize the different perspectives, this conclusion presents five strategies that have the potential to create and leverage synergies between the BISE community and tech startups. Each strategy builds on or extends the support services that startup-oriented German universities currently offer (see Fig. 1).

4. Supporting access to funding: universities could provide funding opportunities, such as grants and seed funding, to support the development and growth of startups. BISE researchers can contribute by communicating these funding opportunities to (potential) startup founders, collaborating with startups on (joint) grant proposals, and providing scientific expertise to help secure funding. For instance, BISE researchers could advise startups on the types of grants and funding opportunities available in the field. They are further equipped to assist in the development of competitive grant proposals by, for example, helping to review and refine methods and tools related to customer and product development, or by identifying potential challenges that could arise during the project. In addition, BISE researchers could provide guidance on the proposed project's feasibility and its potential ethical implications and/or help identify potential collaborators or partners to strengthen the proposal. BISE researchers themselves could continue to support the startup with their scientific expertise after funding is granted and assist with disseminating the project's findings.
Overall, BISE researchers have the potential to play a critical role in helping startups secure funding and succeed throughout the project's life cycle.

5. Supporting access to equipment and resources: universities could offer startups access to equipment and resources, such as computers/computer clusters, laboratories, or data sets, and run makerspaces to support startup research and development efforts. BISE researchers can contribute by providing access to resources, such as hardware, software, or unique data sources, and by sharing their expertise in using these effectively. This might particularly help deep tech or "tough tech" startups develop and test their products more quickly and efficiently.

To conclude, these strategies have the potential to create and leverage synergies between the scientific community and startups in the BISE field, ultimately leading to innovative scientific advancements and more successful startups tackling the world's greatest challenges. The key is to build strong partnerships between BISE researchers and startups based on mutual trust and a shared commitment to sustainable economic and technological development.
2023-05-17T15:25:54.274Z
2023-05-15T00:00:00.000
{ "year": 2023, "sha1": "dedb6e151ecae2dc914e2fb79ec2dbc6aa1e3cd5", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s12599-023-00814-x.pdf", "oa_status": "HYBRID", "pdf_src": "Springer", "pdf_hash": "d3f6828a17167ee8ef03320159e3805f5f4039c4", "s2fieldsofstudy": [ "Business" ], "extfieldsofstudy": [ "Computer Science" ] }
268101312
pes2o/s2orc
v3-fos-license
Nurse Caring and Mother Stress Level on Baby Hospitalization During the COVID-19 Pandemic

Parents, mainly mothers, become stressed when their children undergo hospitalization, and during the Covid-19 pandemic stress levels rose even higher. Nurse caring and nurse attitudes are interventions that can lower mothers' stress levels. This research aims to examine the correlation between nurse caring and mothers' stress levels during baby hospitalization in the Covid-19 pandemic. This is an observational study using a cross-sectional design and an accidental sampling technique. The sample comprised 95 mothers whose babies were being cared for in the neonatal intensive care unit of one of Yogyakarta's hospitals from December 2020 to January 2021. The instruments were the Caring Assessment Tool (CAT) questionnaire and the Parental Stress Scale (PSS). The Chi-square test was used as the statistical test. The results showed that 50 respondents (52.6%) rated nurse caring as good, and 86 respondents (90.5%) had low to mid stress levels. Thirty-eight respondents (40%) both rated nurse caring as good and had a low stress level. The p-value is 0.000 (p-value < 0.05). It can be concluded that there is a correlation between nurse caring and a mother's stress level during baby hospitalization in a perinatology room. The perinatology room nurses should therefore improve their caring, especially during the pandemic.

Introduction

Hospitalization requires children to stay in a hospital to receive intensive care, and it is a significant stressor for both children and parents. Hospitalization affects children and parents in many ways: separation, loss of control, and pain. Other effects on children are losing hope, protesting, refusing to cooperate, and depression (Sarjiyah et al., 2018). The main effect on parents is anxiety, because they are a significant factor in caring for the children during their hospital stay. Parents' common responses to their children's hospitalization are disbelief, anger, guilt, fear, anxiety, stress, and frustration (Ulfa et al., 2018; Wong, 2009). These effects were also reported in Fauziyah's research in a hospital in Bogor regency in 2014: 68% of parents had mild stress and 2.7% had severe stress because of their children's hospitalization (Fauziah & Agustin, 2014). Beyond the stress, hospitalization changes many aspects of parents' lives, including daily activities, social and economic issues, and anxiety. While staying at their children's side during hospitalization, most parents showed various reactions, including disbelief, anger, guilt, anxiety, frustration, and even depression (Ulfa et al., 2018).

This is all the more true when parents experience such stress during a pandemic. The Covid-19 pandemic caused changes and modifications in the provision of health services, including nursing care. Fan et al. (2020) stated that the Covid-19 pandemic puts parents under psychological pressure during neonatal care in the hospital, because there was minimal opportunity to see their babies and restrictions on staying with them during care. Other causes are pandemic uncertainty and increased financial expenses during the pandemic. Zhang et al. (2017) reported the same problems, noting that parents whose babies are hospitalized tend to be anxious and depressed, which sparked conflicts within families and between families and health workers.
Fan et al. (2020) also showed the same result: the respondents were anxious and depressed. Because anxiety and psychological stress affect parents, they negatively influence family relationships and children's health services. Several studies reported conflicts between health workers and parents. These conflicts emerged from psychological issues that affected most parents and families; the pressure, anxiety, and depression affected both family members and medical workers, and parents did not cooperate in the care process (Diffin et al., 2016; Tandberg et al., 2019; Zhang et al., 2017). Parents become more stressed if their hospitalized children's condition deteriorates. They become more anxious, afraid of their babies' worsening condition, and guilty about seeing their children suffer. They are easily irritated when asked trivial things and hurried in their work. For parents whose babies undergo long hospitalization, work is disrupted because they must accompany their babies in the hospital. All of these things add to parents' stress (Ulfa et al., 2018).

Several factors underlie parents' stress during children's hospitalization: the diagnosis of the disease, the treatment or care, the inability to treat the child's disease, a lack of support systems, an inability to use coping mechanisms, and a lack of communication in the family (Wong, 2009). During the Covid-19 pandemic, the sources of stress became more varied. A baby's fragile condition, disease, or abnormality destabilizes the mother's postpartum psychological state and triggers the parents' anxiety.

Research shows that some respondents believed their babies' condition worsened because care was minimized during the Covid-19 pandemic (Fan et al., 2020). The pandemic reduced the intensity of parents' communication and their opportunities to consult about their baby's condition. Rules limiting visits and parents' interaction with their baby during care also cause significant stress. In the NICU, parents are prohibited from being too close to their baby, and their interactions are limited. The prohibition from entering the NICU room caused the parents to react negatively: they were angry, disappointed, afraid, anxious, depressed, and helpless (al Maghaireh et al., 2016).

Nurses' and other health workers' workloads increased during the pandemic. Research showed that 19 nurses (61.3%) had a high workload, owing to the spike in patient numbers during the Covid-19 pandemic. Service and care mechanism policies were adjusted due to the increased risk of Covid-19 transmission. The risk of being infected with Covid-19 was a stressor for health workers, specifically nurses, who were worried about interacting with patients and their families. Padila & Andri (2022) also showed that 16 nurses (51.6%) had severe stress levels and 15 nurses (48.4%) had low stress levels. These conditions cause nurses to give minimal attention to patients and families, making parents anxious and stressed.

On the other hand, nurses should provide caring at the utmost level, giving their full attention and empathy to the patient and the family; that is the key to lowering parents' stress levels.
Pardede et al. (2020) showed a correlation between nurse caring and parents' coping and anxiety: inadequate caring caused parents' coping to become maladaptive, leading to anxiety. Until recently, no study had examined the correlation between nurse caring and parents' stress levels during baby hospitalization in the Covid-19 pandemic. This research aims to establish that correlation.

a. Design
This is nonexperimental, observational research using a cross-sectional design.

b. Population and Sample
The population of this research is mothers whose babies underwent hospitalization in a perinatology room. This research used accidental sampling, and the data were collected over two months, from December 2020 to January 2021. The total sample is 95 mothers whose babies were being cared for in a neonatal care room in one of Yogyakarta's hospitals. The inclusion criteria are parents whose babies had been cared for in a perinatology room for ≥ two days and who agreed to be respondents.

c. Research Instrument
This research used two questionnaires as instruments: the Caring Assessment Tool (CAT) and the Parental Stress Scale (PSS). Duffy (1990) developed the Caring Assessment Tool (CAT), which was translated and validity-reliability tested by Kusumarini (2016). The questionnaire consists of 41 questions covering favourable and unfavourable statements. For favourable questions, a respondent receives 3 points for answering "strongly agree", 2 points for "agree", 1 point for "not agree", and 0 points for "strongly disagree". For unfavourable questions, a respondent receives 0 points for answering "strongly agree", 1 point for "agree", 2 points for "not agree", and 3 points for "strongly disagree". Based on the total points, nurse caring is categorized into three categories: good (81-123), fair (32-80), and poor (<32). Miles (1987) developed the Parental Stress Scale (PSS): NICU in English; it was translated and validity-reliability tested by Jayanti (2013). This instrument has 34 questions and uses a Likert scale: 0 (not applicable), 1 (not stressful), 2 (a bit stressful), 3 (mildly stressful), 4 (very stressful), and 5 (severely stressful). The questionnaire is evaluated by summing the scores of all items and classifying the total into three categories: low (<56), mid (56-114), and high (>114).

d. Data acquiring procedure
The data were acquired directly (primary data) by asking the respondents to complete the questionnaires. The researchers conducted the process with the help of an enumerator.

e. Research ethics
The researchers applied ethical principles throughout the research: autonomy, confidentiality, beneficence, nonmaleficence, and justice. All respondents were given an informed consent form before agreeing to be respondents. This research passed an ethical review conducted at KEPKN STIKes Surya Global with issuance No. 216/KEPK-SSG/09/2020.

f. Data Analysis
The chi-square correlation test was used as the statistical test.
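To make the scoring rules concrete, the following is a minimal sketch (not the authors' code) of the two instruments' scoring; the answers and the set of unfavourable items are hypothetical, while the point values and category cut-offs come from the text above.

```python
# Sketch of CAT and PSS scoring as described in the Research Instrument section.

CAT_POINTS = {"strongly agree": 3, "agree": 2, "not agree": 1, "strongly disagree": 0}

def score_cat(answers, unfavourable_items=frozenset()):
    """Score the 41-item Caring Assessment Tool; unfavourable items are reverse-scored."""
    total = 0
    for item, answer in answers.items():
        points = CAT_POINTS[answer]
        if item in unfavourable_items:
            points = 3 - points          # reverse scoring for unfavourable statements
        total += points
    category = "good" if total >= 81 else ("fair" if total >= 32 else "poor")
    return total, category               # good: 81-123, fair: 32-80, poor: <32

def classify_pss(item_scores):
    """Classify the 34-item Parental Stress Scale (each item scored 0-5)."""
    total = sum(item_scores)
    if total < 56:
        return total, "low"
    if total <= 114:
        return total, "mid"
    return total, "high"

# Hypothetical example: a respondent answering "agree" on every CAT item.
print(score_cat({i: "agree" for i in range(1, 42)}))   # (82, 'good')
```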
Research Results

The research provides univariate and bivariate analyses. The univariate analysis covers the frequencies and percentages of respondent characteristics, as well as the assessment results for the study variables, namely nurse caring and mothers' stress levels during hospitalization. The bivariate analysis is the correlation test between the application of nurse caring and the mother's stress level during baby hospitalization. In this research, the mothers' characteristics include age, occupation, prior hospitalization, number of children, family support, socio-economic status, and education. Ninety-five mothers served as respondents; among the characteristics recorded was whether the family income was at or above the regional minimum wage (UMR, Upah Minimum Regional).

Nurse caring is expressed as a score based on the respondents' questionnaire answers, with the total score categorized into three categories (poor, fair, and good): 50 respondents (52.6%) rated nurse caring in the Perinatology Room as good. The mother's stress level is likewise expressed as a score and categorized into three categories (low, mid, and high); Table 3 shows that 44 respondents (46.3%) had low stress levels during their babies' hospitalization. Cross-tabulation was used to determine the correlation between nurse caring and the mother's stress level during baby hospitalization in the Perinatology Room in one of Yogyakarta's hospitals; Table 4 shows the result of the cross-tabulation.

Parents' occupation may also act as a stressor for the mothers. Fauziah & Agustin (2014) stated that working respondents have higher stress levels, because parents carry the double role of earning a living and caring for their sick children; the conflict between these roles leads to a higher stress level. Tehrani et al. (2012) also stated that working parents have higher stress levels because they cannot accompany their children around the clock, especially considering the distance between the workplace and the hospital.

Economic status (family income) may also act as a stressor. Fauziah & Agustin (2014) found that parents with an income above the minimum wage have lower stress levels than those with an income below it. This aligns with Supartini's (2014) research, which stated that parents are anxious and afraid of the expenses of their children's treatment. Prior hospitalization experience may also influence a mother's stress. According to Priyoto (2014), someone who has faced a stressful situation before will be more capable of overcoming the same situation in the future; parents who have themselves been hospitalized, or whose children have been hospitalized before, have more experience caring for sick children and thus cope more easily.

Support may also moderate the mothers' stress. Yeni et al. (2015) pointed out that respondents who receive family support have lower stress levels than those who do not, and Priyoto (2014) stated that support and others' empathy significantly help lower stress levels.
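As a minimal sketch of the bivariate test, the following reproduces a chi-square analysis on a contingency table consistent with the figures reported above (95 mothers; 50 rated caring as good; 44 had low stress; 38 rated caring as good and had low stress). Collapsing stress into two columns is my own simplification (the study used three categories), and the remaining cell counts are derived by subtraction, not taken from the paper.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Rows: caring rated good vs. fair/poor; columns: low vs. mid/high stress.
observed = np.array([
    [38, 12],   # good caring: 38 low stress (reported), 12 = 50 - 38
    [ 6, 39],   # fair/poor:    6 = 44 - 38,             39 = 95 - 50 - 6
])

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, p = {p:.6f}, dof = {dof}")
```

With this illustrative table the p-value is far below 0.05, matching the direction of the reported result (p = 0.000).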
One of the nurses' efforts to minimize the effects of hospitalization is providing caring service. The nurse involves the parents in the treatment process to avoid raising stress levels, improve self-control, and minimize separation from the child. In this research, the nurses were friendly and provided clear explanations of nursing procedures, which helped the parents understand the treatment given to their children. In Alligood (2010), one of Watson's carative factors states that nurses can improve interpersonal learning systems. Nurses should conduct an engaging and open learning process, which includes providing information, defining health, and sharing experiences with the patients and their families. Providing nursing services based on caring improves the quality of health services; applying caring integrated with knowledge of biophysical and human traits improves one's health and facilitates patient health services. Watson (1979) in Alligood (2010) added that practical caring stimulates health and growth. Furthermore, Gaghiwu et al. (2013) found a significant correlation between nurse caring and children's stress on being hospitalized: the better the nurse caring, the lower the children's stress. One of the factors affecting parents' stress is their inability to communicate, accept, and understand complex information; they also had to make difficult decisions during the initial stage of hospitalization (Needle et al., 2009). Therapeutic communication as part of nurse caring reduces parents' stress. This is supported by Heidari et al. (2015), who stated that appropriate responses and answers from nurses to parents' questions about their children's condition reduce parental stress significantly. This aligns with Currie et al. (2018), who stated that providing understandable information shows sensitivity and recognition to the parents and reduces their confusion about their children's condition. Solheim & Garratt (2013) stated the same: caring and therapeutic communication display the nurse's warmth and care towards the children and the family, which improves the satisfaction of clients and their families. Nurses can use advances in information technology to provide care during the pandemic; they can also provide any form of support, including psychological support, and give information without physical contact.
Monaghan et al. (2020) conducted research showing the same result: family satisfaction improves when multimedia technology is used for communication between doctors and patients, and such technology can also be optimized for health education. Many gadget applications support interaction between patients, parents, and health workers, and animations can demonstrate health education content; these are significantly beneficial for effective communication between doctors and patients during the Covid-19 pandemic (Yin et al., 2019). Stress affects parents' actions in treating their children. Thus, it is vital for nurses to provide nursing services as holistic providers of clients' health needs, and nurse caring fills that role. This aligns with Gaghiwu et al. (2013), who stated that a caring nurse can reduce parents' stress levels during hospitalization, and with Ludyanti et al. (2015), who stated that caring that fulfills basic human needs, physically and psychologically, provides comfort to the patient. Fan et al. (2020) suggested giving the family more emotional support than usual during this pandemic. Such support can be provided by giving the parents a chance to disclose their feelings and worries. Nurses must communicate with the parents daily by phone to provide updates on the baby's condition and development, including the volume of eating, drinking, urine, and feces, temperature, and movements.

[Table 1. The Characteristics of Respondents whose Babies were Cared for in the Perinatology Room in One of Yogyakarta's Hospitals (n=95)]
[Table 2. Nurse Caring in the Perinatology Room in One of Yogyakarta's Hospitals (n=95)]
[Table 3. Mother Stress Level during Baby Hospitalization in the Perinatology Room in One of Yogyakarta's Hospitals (n=95)]
[Table 4. Nurse Caring and Mother Stress Level on Baby Hospitalization in the Perinatology Room]

During the Covid-19 pandemic, all aspects and organizations were modified and adjusted, including nursing care in hospitals. Rules and policies related to health services and caring changed due to the increased transmission risk of Covid-19. According to the parents, this negatively affects both children and parents. Apriany (2013) pointed out that parents are afraid during hospitalization for unclear reasons: they occasionally have bad dreams and are easily annoyed and panicked. These statements correlate with Wong's: he stated that parents' responses during children's hospitalization include disbelief, guilt, anger, fear, anxiety, and stress (Wong, 2009). Some factors may relate to the respondents' stressors in this research, including age, education level, occupation, socio-economic status, prior hospitalization, and family support. Yeni et al. (2015) showed that, with age, people overcome stress more easily; according to her, parents' age matures their thoughts and actions so that they can accept their children's treatment in the hospital. Fauziah & Agustin (2014) conducted similar research showing that, as one gets older, one becomes more experienced and more knowledgeable, both traits that indicate preparedness to face a problem. Parents experienced in caring for their children are calmer in facing problems relating to their children's health conditions (Aryanti et al., 2019).
Oktavianto et al. (2018) reported consistent findings: mothers in the good nurse caring category had lower stress levels than those in the other nurse caring categories (poor and fair). The correlation test between nurse caring and mother stress level on baby hospitalization resulted in a p-value < 0.05, so there is a correlation between the two: the higher the nurse caring category, the lower the mother's stress level during baby hospitalization.

4. Research Discussion

Watson, in her Theory of Human Caring, emphasized that caring is a type of relation and transaction needed between the giver and the receiver of nursing care to improve and protect the patient, as it affects the patient's ability to recover. In common terms, caring is the ability to dedicate oneself to others, attentive supervision, showing attention, empathy, and love for others; that is the will of nursing (Potter & Perry, 2010). Various nursing theories place caring at the core of nursing. It shapes the nursing practice in which the nurse helps the client recover from disease, explains the disease, and manages or rebuilds relationships. Potter and Perry said that caring is an important core of nursing practice; it emphasizes the recognition of one's dignity, and in nursing practice nurses always appreciate the client by accepting the client's strengths and weaknesses (Potter & Perry, 2010). Caring is closely related to human relationships and the ability to dedicate oneself to others.

The cross-tabulation concerned baby hospitalization in a perinatology room: forty percent of the mothers who regarded nurse caring as good had lower stress levels than mothers who regarded nurse caring as poor or fair. The Chi-square test shows a p-value = 0.000 (p-value < 0.05), so it can be concluded that there is a correlation between nurse caring and mothers' stress levels during baby hospitalization. Being hospitalized means that the children and the mothers must adapt to new surroundings during the children's treatment. Education level and knowledge may also act as stressors for the mothers. Fauziah & Agustin (2014) said that respondents with higher education levels are more capable of overcoming stress, using effective coping systems, than respondents with lower education levels. Oktavianto et al. (2018) stated that knowledge influences one's level of stress and anxiety; a more knowledgeable person is more confident in facing problems.

Conclusions and Recommendations

Based on the research results, data analysis, and the discussion, we conclude that there is a correlation between nurse caring and mothers' stress levels during baby hospitalization in the Covid-19 pandemic: the better the nurse's caring, the lower the mother's stress level. Nurses assigned to the perinatology room should improve their caring treatment of the parents, mainly the baby's mother. In future care, nurses should give the parents a chance to disclose their feelings, concerns, or worries; routinely provide information on the baby's condition and development; and always provide emotional support.
2024-03-02T16:25:36.894Z
2022-12-31T00:00:00.000
{ "year": 2022, "sha1": "eb7cc572bdba3f0ccaad8cadc2e23f4a24fc07cf", "oa_license": "CCBY", "oa_url": "https://talenta.usu.ac.id/IJNS/article/download/9985/5646", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "b4a6e2affd9c5fbd2a92be49503e0e05fcdac99a", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
125164004
pes2o/s2orc
v3-fos-license
Interpolation of an analytic family of operators on variable exponent Morrey spaces

Abstract. In this paper we show the validity of Stein's interpolation theorem on variable exponent Morrey spaces.

Introduction

The Stein interpolation theorem, where the interpolation is carried out with respect to an analytic family of operators, is an essential tool pervading modern Fourier analysis. For example, the first non-trivial progress on spherical summation of multiple Fourier series was obtained with the use of this theorem; see [7] for more details. Stein's interpolation theorem is given in the framework of Lebesgue spaces, and we were not able to find such an interpolation theorem for Morrey spaces. It is interesting to note that the Riesz-Thorin interpolation theorem does not hold when the domain space is a Morrey-type space; for appropriate counterexamples see [18]. Hence, the Stein-type result proved here deals only with the case where the target spaces are Morrey-type spaces while the domain is a Lebesgue-type space. For interpolation-type results on Morrey-Campanato spaces, we refer to [9,17,28] and references therein.

In 1938 C. Morrey [19] studied Morrey spaces for the first time in connection with their applications in partial differential equations. Recently, rapid growth has been seen in the study of Morrey-type spaces because of their applications in major fields of engineering and science (see e.g. [8]). For a comprehensive study of Morrey spaces we refer to [2,21,22]. Function spaces with non-standard growth have seen a major focus in recent times (see e.g. [14,15]) because of their wide range of applications, e.g. in the area of image processing [1,27], the study of thermorheological fluids [4], and the modeling of electrorheological fluids [23].

Let $X$ and $Y$ be two quasi-metric measure spaces (QMMSs). In this manuscript, a version of Stein's interpolation theorem is proved in the framework where the target space is a variable exponent Morrey space $L^{q(\cdot),\lambda(\cdot)}(Y)$ and the domain space is the variable exponent Lebesgue space $L^{p(\cdot)}(X)$. It is worth mentioning that these results are new even for the constant case. Throughout the paper, constants (often different constants in the same series of inequalities) will mainly be denoted by $c$ or $C$; by the symbol $p'(x)$ we denote the function $\frac{p(x)}{p(x)-1}$, $1 < p(x) < \infty$; the relation $a \approx b$ means that there are positive constants $c_1$ and $c_2$ such that $c_1 a \le b \le c_2 a$.

Preliminaries

Let $X$ be a non-empty set. A function $d : X \times X \to [0,\infty)$ is said to be a quasi-metric if the following conditions are satisfied:
(a) $d(x,x) = 0$ for all $x \in X$.
(b) $d(x,y) > 0$ for all $x, y \in X$ with $x \neq y$.
(c) There is a constant $c_0 > 0$ such that $d(x,y) = c_0\, d(y,x)$ for all $x, y \in X$.
(d) There is a constant $c_1 > 0$ such that $d(x,y) \le c_1 \bigl( d(x,z) + d(z,y) \bigr)$ for all $x, y, z \in X$.

Let $\mu$ be a complete measure such that the set of all compactly supported continuous functions is dense in $L^1_\mu(X)$. We refer to the triplet $(X, d, \mu)$ as a quasi-metric measure space (QMMS), where $d$ is a quasi-metric. Let $d_X = \operatorname{diam}(X) = \sup\{d(x,y) : x, y \in X\}$. Let us denote by $B(x,r) = \{y \in X : d(x,y) < r\}$ the ball of radius $r > 0$ centered at $x$. Throughout this paper, it will be assumed that $0 < \mu(B(x,r)) < \infty$ for every $r > 0$ and $x \in X$. It is evident that the assumption that all balls have finite measure, together with the condition $d_X < \infty$, implies $\mu(X) < \infty$.

Variable exponent spaces. Let $\Omega$ be a $\mu$-measurable set in $(X,\mu)$ with positive measure. For a $\mu$-measurable function $p$ on $\Omega$ we denote
$$p_-(\Omega) = \operatorname*{ess\,inf}_{x \in \Omega} p(x), \qquad p_+(\Omega) = \operatorname*{ess\,sup}_{x \in \Omega} p(x).$$
Suppose that $1 \le p_-(\Omega) \le p_+(\Omega) < \infty$. We say that a $\mu$-measurable function $f$ on $\Omega$ belongs to $L^{p(\cdot)}(\Omega)$ (or to $L^{p(x)}(\Omega)$) if
$$S_{p(\cdot),\Omega}(f) = \int_\Omega |f(x)|^{p(x)}\, d\mu(x) < \infty.$$
It is a Banach space with respect to the norm (see e.g. [11,16,24,25])
$$\|f\|_{L^{p(\cdot)}(\Omega)} = \inf\Bigl\{ \lambda > 0 : S_{p(\cdot),\Omega}\bigl(f/\lambda\bigr) \le 1 \Bigr\}.$$
For the following propositions we refer to [16,24,25].

Proposition 1 (Hölder's inequality). Let $\Omega$ be a $\mu$-measurable subset of $X$ and let $1 \le p_-(\Omega) \le p_+(\Omega) < \infty$. Then for every $f \in L^{p(\cdot)}(\Omega)$ and $g \in L^{p'(\cdot)}(\Omega)$ the following inequality holds:
$$\int_\Omega |f(x)\, g(x)|\, d\mu(x) \le C\, \|f\|_{L^{p(\cdot)}(\Omega)}\, \|g\|_{L^{p'(\cdot)}(\Omega)}.$$

The following lemma has been taken from [5, p. 27].

Lemma 1. Let $\Omega$ be a $\mu$-measurable subset of $X$ and let $1 \le p_-(\Omega) \le p_+(\Omega) < \infty$. Then the inequality
$$\|f\|_{L^{p(\cdot)}(\Omega)} \le S_{p(\cdot),\Omega}(f) + 1$$
holds.

Definition 1. We say that a $\mu$-measurable function $p : X \to [1,\infty)$ belongs to the class $P^{\log}_\mu(X)$ if for every $x, y \in X$ such that $\mu B(x, d(x,y)) \le 1/2$ the following inequality holds:
$$|p(x) - p(y)| \le \frac{-A}{\ln \mu\bigl(B(x, d(x,y))\bigr)}.$$

The following lemma can be found in [14,22].

Lemma 2. Let $(X, d, \mu)$ be a QMMS with $\mu(X) < +\infty$ and let $p \in P^{\log}_\mu(X)$. Then
$$\|\chi_{B(x,r)}\|_{L^{p(\cdot)}} \le C \bigl(\mu(B(x,r))\bigr)^{1/p(x)}.$$

Morrey spaces with variable exponent $L^{p(\cdot),\lambda(\cdot)}(\Omega)$, where $\Omega$ is an open subset of $\mathbb{R}^n$, were introduced simultaneously and in more or less similar manner by Almeida et al. [3], Kokilashvili et al. [12,13], Ohno [20] and X. Fan [6]. Let $1 \le p(\cdot) \le p_+(\Omega) < \infty$ and $0 \le \lambda(\cdot) \le 1$ be $\mu$-measurable functions. We say that a $\mu$-measurable function $f \in L^{p(\cdot)}(\Omega)$ belongs to $L^{p(\cdot),\lambda(\cdot)}(\Omega)$ if its Morrey norm, introduced below, is finite. The norm on variable exponent Morrey spaces can be introduced in the following ways (see e.g. [3,12,13,22]):
$$\|f\|_1 = \sup_{x,\,r} \bigl\| (\mu(B(x,r)))^{-\lambda(x)/p(\cdot)}\, f\, \chi_{B(x,r)} \bigr\|_{L^{p(\cdot)}(\Omega)}, \qquad \|f\|_2 = \sup_{x,\,r} \bigl(\mu(B(x,r))\bigr)^{-\lambda(x)/p(x)}\, \|f\|_{L^{p(\cdot)}(B(x,r))}.$$
It can be checked easily by means of simple computations (see e.g. [22]) that both norms $\|f\|_1$ and $\|f\|_2$ are equivalent to a third norm $\|f\|_3$ used in the literature; we take one of these equivalent expressions as the norm on the variable exponent Morrey space. It is easy to see that if the parameter $\lambda = 0$, then $L^{p(\cdot)}(X) = L^{p(\cdot),0}(X)$. When $p(x) \equiv \mathrm{const}$ and $\lambda(x) \equiv \mathrm{const}$, $L^{p(\cdot),\lambda(\cdot)}(X)$ reduces to the classical Morrey space $L^{p,\lambda}(X)$. The following lemma gives the embedding of variable exponent Morrey spaces into variable exponent Lebesgue spaces in the case $d_X < \infty$; we present its proof here for the sake of completeness.
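As a small worked check of the constant-exponent reduction noted above (my own specialization, not taken verbatim from the paper), take $p(x) \equiv p$ and $\lambda(x) \equiv \lambda$ in the second norm:

```latex
% Constant-exponent special case: p(x) \equiv p, \lambda(x) \equiv \lambda.
\[
  \|f\|_{L^{p,\lambda}(X)}
  = \sup_{x \in X,\; r > 0}
    \bigl(\mu(B(x,r))\bigr)^{-\lambda/p}
    \left( \int_{B(x,r)} |f|^{p} \, d\mu \right)^{1/p},
\]
% which is the classical Morrey norm; for \lambda = 0 the supremum over balls
% exhausts X (recall d_X < \infty), so it collapses to \|f\|_{L^p(X)}.
```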
Interpolation of an analytic family of operators in variable exponent Morrey spaces

In this section we prove the main result of this paper: a Stein-type interpolation theorem for an analytic family of operators. The next lemma is due to Hirschman and can be found in e.g. [10]; it concerns a function which exists, is continuous and bounded on the strip $S = \{z : 0 \le \operatorname{Re}(z) \le 1\}$ and analytic on $\operatorname{int}(S)$, where $a_k$ are positive real numbers and $m_k$, $b_k$ are measurable functions for $k = 1, 2$. We shall call $\{T_z\}_{z \in \mathbb{C}}$ of admissible growth if $F_{y,r}(z)$ is of admissible growth in the sense of Definition 2.

Remark 1. Although the definition of an analytic family of operators given in Definition 3 seems cumbersome at first sight, it should be noted that in the non-variable framework this definition coincides with the definition given by Stein in [26].

We now formulate and prove the Stein interpolation theorem in the variable exponent framework. The endpoint estimates are assumed to hold for all simple functions $f$. Also we assume that
$$\log |M_k(t)| \le C e^{|t|\lambda}, \quad \lambda < \pi, \quad k = 0, 1.$$
For $z \in S := \{z : 0 < \operatorname{Re}(z) < 1\}$, define $p_z$, $q_z$ and $\lambda_z$ by
$$\frac{1}{p_z(x)} = \frac{1-z}{p_0(x)} + \frac{z}{p_1(x)}, \qquad \frac{1}{q_z(y)} = \frac{1-z}{q_0(y)} + \frac{z}{q_1(y)}, \qquad \frac{\lambda_z(y)}{q_z(y)} = (1-z)\,\frac{\lambda_0(y)}{q_0(y)} + z\,\frac{\lambda_1(y)}{q_1(y)}.$$

Proof. Since $T_z$ is linear, we may assume that $f \neq 0$; otherwise the inequality holds trivially. By the homogeneity of the norm and a scaling argument we may assume that $\|f\|_{L^{p_\theta(\cdot)}(X)} \le 1$. Now we need to show that
$$\|T_\theta f\|_{L^{q_\theta(\cdot),\lambda_\theta(\cdot)}(Y)} \le c\, M_\theta.$$
We will show this for simple functions on $X$; since the span of simple functions is dense in $L^{p(\cdot)}(X)$, we will then have the estimate for all $f \in L^{p_\theta(\cdot)}(X)$. Let us assume $f$, $g$ are simple complex-valued functions defined on $X$ and $Y$ respectively, say
$$f = \sum_j a_j e^{i\alpha_j} \chi_{A_j}, \qquad g = \sum_k b_k e^{i\beta_k} \chi_{B_k},$$
where $a_j, b_k > 0$, $\alpha_j, \beta_k \in \mathbb{R}$, $\mu(A_j), \nu(B_k) < \infty$, and $\{A_j\}$ and $\{B_k\}$ are, respectively, pairwise disjoint. Now define the analytic deformations $f_z$ and $g_z$ of $f$ and $g$. Finally, for every $y \in Y$, $r > 0$ and $z \in \mathbb{C}$, put
$$F_{y,r}(z) = \int_{B(y,r)} T_z(f_z)(u)\, g_z(u)\, d\nu(u).$$
Substituting the values of $f_z$ and $g_z$ into the last expression, we see that for almost every $y \in Y$, $F_{y,r}(z)$ is analytic on $\operatorname{int}(S)$, continuous and bounded on $S$, and of admissible growth, since $T_z$ is an analytic family of linear operators of admissible growth. Since the $A_j$ are pairwise disjoint and $a_j > 0$, we have the endpoint bounds for $z = it$ ($t \in \mathbb{R}$). Hence for almost every $y \in Y$ and $r > 0$ we have
$$\bigl(\nu(B(y,r))\bigr)^{-\lambda_\theta(y)/q_\theta(y)}\, \|T_\theta f\|_{L^{q_\theta(\cdot)}(B(y,r))} \le c\, M_\theta,$$
which implies that
$$\|T_\theta f\|_{L^{q_\theta(\cdot),\lambda_\theta(\cdot)}(Y)} \le c\, M_\theta.$$
This completes the proof.
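For orientation, here is a small worked instance of the interpolation exponents in the constant-exponent Lebesgue case (my own illustration; the convex-combination formulas are the standard Stein exponents assumed in the reconstruction above):

```latex
% Constant exponents p_0 = 2, p_1 = 6 and \theta = 1/2:
\[
  \frac{1}{p_\theta} = \frac{1-\theta}{p_0} + \frac{\theta}{p_1}
                     = \frac{1/2}{2} + \frac{1/2}{6}
                     = \frac{1}{4} + \frac{1}{12}
                     = \frac{1}{3},
  \qquad p_\theta = 3,
\]
% and in the Lebesgue case (\lambda_0 \equiv \lambda_1 \equiv 0) the
% conclusion of the theorem reads
\[
  \|T_\theta f\|_{L^{q_\theta}(Y)} \le c\, M_\theta\, \|f\|_{L^{p_\theta}(X)} .
\]
```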
2019-04-22T13:13:09.841Z
2018-11-01T00:00:00.000
{ "year": 2018, "sha1": "84bfe8806bae6b8bec7c5dfda583cfcf58dd46d1", "oa_license": null, "oa_url": "https://projecteuclid.org/journals/hiroshima-mathematical-journal/volume-48/issue-3/Interpolation-of-an-analytic-family-of-operators-on-variable-exponent/10.32917/hmj/1544238031.pdf", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "9481506047a87d160d2a4f26cdc875b1a30cbe8c", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
55131161
pes2o/s2orc
v3-fos-license
Correlations Between Administration of Food Supplements with Marked Antioxidant Properties and Clinical Parameters in Patients with Prostate Carcinoma Several studies indicate that high oxidative stress is associated with various degenerative diseases, including tumors. The high levels of free radicals present in many patients derive from a chronic lack of antioxidants, caused by an increasingly poor and artificial diet. The study presented in this research, conducted on 50 male volunteers, carriers of Prostatic Carcinoma (PCa) at different stages of development and under current therapy, showed that the daily intake of the antioxidants contained in two food supplements, Citozym (CIZ) and Propulzym (PRZ), induces a significant increase in the free/total PSA ratio and also a reduction of various clinical parameters correlated with PCa. This result suggests a potential slowdown in the progression of the disease. This study was planned on the basis of a recent preliminary trial that highlighted the positive activity of CIZ in a model of benign prostatic hypertrophy. Studies are in progress to identify the components present in these food supplements affecting biochemical signals, elements that underlie the ability of PCa to progress. This does not mean that the intake of antioxidants is a cure for PCa, but that a high undiagnosed oxidative stress, seldom corrected by standard oncological approaches, contributes to the evolution of the tumor disease. In other words, current oncology focuses correctly on reducing the tumor mass, but does not intervene on the biological medium that produced it. This is why there is still a dramatic incidence of relapses. Introduction PCa is recognized as one of the main medical problems facing the male population. In Europe, it is by far the most frequent neoplasia, with an annual incidence of 214 cases per 1000 men for an estimated 2.6 million new cases each year, and the second leading cause of death by neoplasm. In the roughly 20 years since the introduction of PSA (prostate-specific antigen) into clinical practice there has indeed been a noticeable increase in the diagnosis of the neoplasm, to which, however, has corresponded only a slight increase in mortality, posing the problem of clinically non-significant PCa and the risk of over-treatment. Several studies indicate that high oxidative stress is associated with many degenerative diseases, including tumors. A chronic deficiency of antioxidants, caused by an increasingly poor and artificial diet, contributes heavily to the high levels of free radicals found in patients. A study carried out in 2015 highlighted that the daily intake of antioxidants by patients with PCa induces a significant reduction in PSA [1]. According to the researchers, this indicates a real slowing down of the progression of the disease. Recently, researchers have identified some antioxidant components of plants that seem able to inhibit the movement of cancer cells and weaken their chemical signals, elements that are at the base of the tumor's ability to generate metastases [2]. This is not to say that intake of antioxidants is a cure for PCa, but that a high undiagnosed oxidative stress, seldom corrected by standard oncological approaches, contributes to the evolution of tumor disease. In other words, current oncology focuses correctly on reducing the tumor mass but does not intervene on the biological medium that produced it. This is why there is still a dramatically high incidence of relapses.
To date, the main diagnostic tools for prostate tumors include digital rectal examination (DRE), trans-rectal ultrasound and PSA. The DRE must always be the first diagnostic approach; its purpose is to appreciate the shape, consistency, tenderness and approximate volume of the gland, as well as the presence of suspicious nodules. Trans-rectal ultrasound mainly serves to highlight ultrasonographically doubtful areas and to improve the accuracy of prostate biopsy, which is the only investigation that allows a definite diagnosis of neoplasia. The PSA is still today the most widely used serum marker. It is a glycoprotein produced almost exclusively by the epithelial cells of the prostate and, a fundamental point, constitutes a marker of the organ and not of cancer; therefore an elevation of its values can also occur in the presence of other prostatic pathologies (benign prostatic hyperplasia and prostatitis). It is a serine protease of the human tissue kallikrein family, encoded by a gene located on chromosome 19, which is produced by secretory columnar epithelial cells and is contained in the ejaculate. The function of PSA is to liquefy the seminal fluid after it coagulates, allowing the sperm to move. PSA is highly specific for prostate tissue, as no other tissue produces it. Free PSA is found unbound in the plasma, while the bound PSA is complexed to proteins such as alpha-1-antichymotrypsin and alpha-2-macroglobulin. The fraction linked to alpha-1-antichymotrypsin can be evaluated in serum. In the early years of the clinical use of PSA the main international guidelines had adopted a threshold value of 4 ng/mL, later lowered by some authors to 2.5 ng/mL. In fact, for some years it has been widely demonstrated that it is not possible to establish a minimum threshold that can safely rule out the presence of a prostatic carcinoma. A well-known US study [3], conducted on about 3000 men, showed a risk of neoplasm of about 25% for PSA values between 2 and 4 ng/mL and about 8% for values less than 1 ng/mL. Therefore, by lowering the PSA reference value there is an increase in the diagnosis of new cases of prostatic neoplasia, but with an increase in the incidence of non-significant tumors, whose natural history does not affect survival. Total PSA therefore shows a low diagnostic specificity and is poorly reliable for selecting clinically significant tumors. For this reason, over the years attempts have been made to improve the specificity of PSA with some of its variants, such as the ratio between free and total PSA, which finds its rationale in the fact that neoplastic prostate cells produce less PSA in free form than benign cells do. The evaluation of these two fractions of PSA showed that alterations affecting free PSA are found mainly in benign prostatic hypertrophy, while alterations in the levels of bound PSA are often due to malignant diseases. Therefore, the lower the ratio, the higher the risk of cancer. For example, when the value falls below 0.10 (<0.10) the risk of PCa is estimated at 56%. In recent months, among the various PSA isoforms, -2proPSA has aroused some interest. It is a precursor of free PSA and is typically associated with prostate cancer. The use of this marker significantly improves the diagnostic specificity of the free PSA/total PSA ratio in patients with PSA values between 2.5 and 10 ng/mL.
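As a concrete illustration of how the free/total PSA ratio described above is read, the short sketch below computes the ratio and applies the <0.10 cut-off mentioned in the text; the helper function and the sample values are hypothetical and are not taken from this study's data.

def free_total_psa_ratio(free_psa_ng_ml, total_psa_ng_ml):
    """Return the free/total PSA ratio; lower values suggest higher PCa risk."""
    if total_psa_ng_ml <= 0:
        raise ValueError("total PSA must be positive")
    return free_psa_ng_ml / total_psa_ng_ml

ratio = free_total_psa_ratio(0.8, 9.0)  # illustrative values only
if ratio < 0.10:
    # Per the text, a ratio below 0.10 corresponds to an estimated PCa risk of about 56%.
    print(f"free/total PSA = {ratio:.2f}: high estimated risk of PCa")
else:
    print(f"free/total PSA = {ratio:.2f}: lower estimated risk")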
In the past almost all doctors considered normal PSA values equal to or less than 4.0 ng/mL, so prostate biopsy for PCa detection would be indicated in a subject with values above 4.0 ng/mL. To complicate the situation, remember that there is no unanimity on the upper limit of normality: for example, the National Health Service identifies the value 3.0 ng/mL, while Harvard University sets normal limits proportional to age. Subjects and Treatments The patients observed in this survey were recruited, after having given their informed consent, at the AIMI oncological urology clinic. 50 male subjects aged 60 to 75 with a clear diagnosis of PCa, under official cancer therapy, were divided into two groups: the first group (25 patients, Group I) untreated, and the second group (25 patients, Group II) treated daily orally with a specific CIZ/PRZ protocol, and examined for a period of about four months. During the study the following clinical data, collected weekly, were analyzed in all patients: free and total PSA, PSA density, PHI, PCA3, testosterone, fibrinogen, prostatic acid phosphatase (PAP) and zinc. These clinical parameters were compared with the initial data, set at 100, obtained before treatment. CIZ and PRZ were obtained from CITOZEATEC, S.r.l. (Peschiera Borromeo, Milano, Italy). The main components of CIZ are as follows (units/100 g): 500 mg of vitamin C; 56 mg of vitamin B5; 56 µg of vitamin D; 3.3 mg of vitamin B9; 222 mg of pyruvic acid; 120 mg of citric acid; 250 mg of tartaric acid. PRZ is a mixture of glutamic acid (24.5 mg/mL), acetyl cysteine (19.3 mg/mL), calcium carbonate (18 µg/mL), citric acid (140 mg/mL), tartaric acid (122.5 mg/mL), sorbitol (28 mg/mL) and mannitol (14 mg/mL). Bioassays PSA: PSA is still today the most widely used marker. It is a glycoprotein produced almost exclusively by the epithelial cells of the prostate and, a fundamental point, constitutes a marker of the organ and not of cancer; therefore an elevation of its values can also occur in the presence of other prostatic pathologies (benign prostatic hyperplasia, prostatitis). It is a serine protease of the human tissue kallikrein family, encoded by a gene located on chromosome 19, which is produced by secretory columnar epithelial cells and is contained in the ejaculate. The physiological function of PSA is to keep the seminal fluid liquid after ejaculation, allowing the spermatozoa to move more easily through the cervix. PSA is highly specific for prostate tissue, as no other tissue produces it. Free PSA is found unbound in the plasma, while the bound PSA is complexed to proteins such as alpha-1-antichymotrypsin and alpha-2-macroglobulin. The fraction linked to alpha-1-antichymotrypsin can be evaluated in serum. In the early years of clinical use of PSA the main international guidelines had adopted a threshold value of 4 ng/mL, later lowered by some authors to 2.5 ng/mL. In fact, for some years it has been widely demonstrated that it is not possible to establish a minimum threshold that can safely rule out the presence of a prostatic carcinoma. For this reason the free/total PSA ratio is actually considered more reliable from the prognostic point of view: an increase in this value indicates a reduction of tumor progression. PCA3: PCA3 is a gene overexpressed in 95% of cases of prostate carcinoma. The value of the messenger RNA (mRNA) of PCA3, assayed in urinary cells after prostate massage, is increased 60-100 times in 95% of prostate tumors compared to what is observed in normal prostate cells.
In addition, its value is independent of the volume of the prostate, inflammation, the blood PSA level and the number of biopsies already performed. PSA density: PSA density expresses the relationship between total PSA and the gland volume measured by trans-rectal ultrasound, and is based on the fact that the amount of PSA produced per gram of tissue is much greater in cancer than in hypertrophy. PSA density would, in fact, increase the diagnostic accuracy of PSA in patients in whom the marker value is between 4 and 10 ng/mL. PHI index: the problems arising from the limited ability of PSA to accurately suggest the presence of a clinically significant prostate tumor led to the introduction of a marker called PHI (Prostate Health Index), which derives from a mathematical processing of the data related to three assays: total PSA, free PSA and -2proPSA. -2proPSA is a fraction of the PSA molecule that is measured in the blood after a normal sample; PHI is computed as (p2PSA / free PSA) × √(total PSA). Testosterone: a connection between the levels of testosterone and the onset of PCa has been established, starting from the fact that PCa cells are initially hormone-sensitive. Low testosterone levels are associated with more aggressive tumors in patients who have developed PCa [4]. However, this does not mean that having low testosterone levels constitutes a risk or that correcting testosterone levels in those subjects leads to a reduced risk of developing prostate cancer. This association can give us an important indication of how the tumor will evolve before subjecting a patient to surgery, and can thus help us to choose the best therapy for the individual case. The argument is still controversial: in fact, although testosterone has historically been considered an enemy of the prostate, recent clinical and preclinical evidence has reversed this vision. A very large prospective study clearly demonstrated that there is no association between the concentration of blood testosterone and the risk of prostate cancer [5]. Fibrinogen: a globulin that acts with other coagulation factors to promote blood coagulation, as it is converted into fibrin. Values of 200-400 mg/100 mL are considered normal. Values lower than normal can be caused by prostate cancer. PAP: acid phosphatase is an enzyme that can be measured in the blood, produced by the prostate, spleen, liver, red blood cells, platelets and bone marrow. The dosage of the prostatic fraction serves in particular to confirm or rule out the suspicion of prostate cancer. PAP values up to 4.2 mU/mL are considered normal. Values higher than normal can be caused by prostate hypertrophy and prostate cancer. Zinc: an essential element for normal growth, development of the genital organs, normal prostate activity, wound healing and protein production; it controls the activity of more than 100 enzymes and is involved in the functioning of insulin. Small amounts of this element are present in many foods such as lean meat, bread and whole grains, dried beans and marine foods. Normal values: 80-160 µg/100 mL. Results This study was planned on the basis of a preliminary experimental trial that highlighted the positive activity of CIZ in a model of benign prostatic hypertrophy [6]. Comparing the clinical parameters of patients who were not treated and of those treated with CIZ/PRZ for 60, 90 and 120 days, a clear improvement in the effect of the antitumor therapy emerged.
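Before turning to the detailed results, a minimal sketch of the PHI computation described in the Bioassays section may be useful, assuming the conventional formula PHI = (p2PSA / free PSA) × √(total PSA) and the usual unit convention (p2PSA in pg/mL, free and total PSA in ng/mL); the function name and the example values are illustrative, not study data.

import math

def phi_index(p2psa_pg_ml, free_psa_ng_ml, total_psa_ng_ml):
    """Prostate Health Index: (p2PSA / free PSA) * sqrt(total PSA)."""
    if free_psa_ng_ml <= 0 or total_psa_ng_ml <= 0:
        raise ValueError("free and total PSA must be positive")
    return (p2psa_pg_ml / free_psa_ng_ml) * math.sqrt(total_psa_ng_ml)

# Illustrative example: p2PSA = 15 pg/mL, free PSA = 0.9 ng/mL, total PSA = 6 ng/mL.
print(round(phi_index(15.0, 0.9, 6.0), 1))  # about 40.8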
Data processing was performed by comparing the parameters reported in the "Bioassays" section between the group of patients treated only with the anticancer therapeutic protocol (Group I) and the group of patients treated both with the antitumor protocol and with the two food supplements under examination (Group II). Figure 1 shows the values obtained after 60 days of treatment of the two experimental groups. The adjuvant function exerted on the antitumor therapy by the two antioxidants CIZ and PRZ is particularly evident for free and total PSA, whose values are reduced by 7%, while the free/total PSA ratio increased by about 4%. The PCA3 value was reduced by about 7%. The other parameters did not undergo statistically significant changes except for the marked increase in testosterone (30%) and fibrinogen (17%) levels. Figure 2 shows the clinical parameters collected after 90 days in patients of Group I and Group II. The reduction of the observed clinical values is already evident with the antitumor therapy alone, but becomes even more marked with the aid of the antioxidant treatment. In particular, both the free and the total PSA were reduced by about 20%, improving the value of the free/total PSA ratio. Remarkable is the increase of both the testosterone level (35%) and the fibrinogen (27%). The level of PAP appears reduced by 13%. Figure 3 shows the values obtained after 120 days of treatment. Both the free PSA and the total PSA are further reduced in Group II compared to the values of Group I. PHI is reduced by 12%, PCA3 by 10% and PAP by 15%. Considering the prognostic value of the free/total PSA ratio, the 10% increase of this parameter following 120 days of treatment appears to be a positive result. Values for zinc and PSA density remained unchanged. It is interesting to underline the reduction of testosterone levels exerted by anticancer therapy alone, in contrast with the evident overall increase of 36% following the simultaneous treatment with the two antioxidants (CIZ/PRZ). Discussion Cancer is one of the major heterogeneous diseases, with high morbidity and mortality. Despite extensive research and considerable efforts to develop targeted therapies, it is still an alarming condition with a poor prognosis and high mortality. Numerous studies have provided evidence that changes in redox balance and deregulation of redox signaling are common hallmarks of cancer progression and resistance to treatment. Recent studies have demonstrated that cancer cells are highly adapted to elevated levels of reactive oxygen species (ROS) by activating antioxidant pathways. Thus, targeting the ROS signaling pathways and redox mechanisms involved in cancer development is a new potential strategy to prevent cancer. Based on these considerations, therapy success may be conditioned by the antioxidants present in our own body, which can be synthesized de novo (endogenous) or incorporated through the diet and nutritional supplements (exogenous). Although cells possess a large repertoire of enzymes and antioxidants, sometimes these agents are insufficient to normalize the redox state produced by an intense oxidative stress [7]. In these cases, exogenous antioxidant supplements may be required to restore cell redox homeostasis [8]. It has been suggested that antioxidant supplementation may protect against oxidative stress associated with the development of certain diseases or that it may reverse the oxidative stress produced during their course.
In the area of our interest, that is, cancer, antioxidants are acquiring great importance. It is believed that antioxidants can prevent the development of cancer through their effects on cell cycle regulation, inflammation, the inhibition of tumor cell proliferation and invasiveness, the induction of apoptosis, and the stimulation of detoxifying enzyme activity [9]. It is interesting to note that it has been reported that daily intake of antioxidants by patients with PCa induces a significant reduction in PSA [1]. In the present study, two known antioxidant mixtures were administered as adjuvants during the antitumor therapeutic treatment of patients with PCa. The most interesting finding was a more effective reduction of some biochemical parameters correlated with the evolution of the tumor. In particular, the observed reduction of free and total PSA increased the free/total PSA ratio. This finding, besides representing a positive observation, could explain the reduction of PHI, of PSA density and of the activity of PAP, as well as the reduction of PCA3. It is also possible to draw from the values obtained the observation of an improvement of the general condition of the patients (data not shown), thanks to the increase of the levels of testosterone, a known protector of the health of the prostate gland [4]. Testosterone is essential for the normal growth, cyto-differentiation and maintenance of prostate tissue [10]. High testosterone levels were previously considered to lead to the potential development of PCa and more rapid growth of the tumor [11,12]. However, studies on the association between testosterone and PCa risk have produced conflicting results [10,11]. Numerous studies have demonstrated that low, rather than high, testosterone levels at diagnosis were associated with various markers of poor prognosis, including an advanced pathological stage, higher Gleason scores, higher PSA levels, seminal vesicle invasion and positive surgical margins [13,14]. Few studies have further investigated whether low levels of testosterone predict poor prognosis. Pending complete data on the 120 days of treatment, a preliminary deduction is already possible: the adjuvant therapy with CIZ and PRZ appears to improve the biochemical parameters of patients suffering from prostate cancer, probably due to the protective effect of increased testosterone levels. Therefore, further in vivo studies elucidating the mechanisms underlying the activity of these nutraceuticals may be useful for the treatment of prostate tumors and the reduction of the side effects of chemotherapy. In conclusion, the results of the present study may have important clinical implications for the management and treatment of patients with Prostatic Carcinoma.
2019-03-17T13:12:40.496Z
2018-05-10T00:00:00.000
{ "year": 2018, "sha1": "de2bc07f97ef77b74b52503bc653f1001c7ce10c", "oa_license": "CCBY", "oa_url": "http://article.sciencepublishinggroup.com/pdf/10.11648.j.ijcocr.20180302.11.pdf", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "ddcdd16c79edd86070c2b73271375f76d437342f", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
18018617
pes2o/s2orc
v3-fos-license
Unusual Clinical Case: Extraluminal Manifestation of a Tapeworm from the Eviscerated Midline Incision in a Post-surgery Patient Taenia saginata infestation is one of the most common cestode infestations in humans, and it may cause gastrointestinal tract related complications as a result of obstruction, perforation or anastomotic leakage. A 55-year-old male patient who was receiving palliative chemotherapy for stage IV gastric cancer was admitted to the emergency department for abdominal pain. A hollow viscus organ perforation was diagnosed and emergency surgery was performed. On postoperative day 5, the patient's midline incision eviscerated and a moving taenia emerged, with abundant particulated fluid, from the incision line. The patient was taken back for abdominal surgery due to suspected bowel perforation. During the abdominal exploration, a relaxed purse stitch around the feeding tube was observed and no other bowel perforations were seen. The patient underwent two planned surgeries for abdominal cavity lavage after the removal of the cestode. Unfortunately, the patient died sixteen days after his admission to the intensive care unit. This is the first case describing an extraluminal manifestation of a tapeworm in a midline incision from evisceration without intestinal perforation. Introduction Parasitic infestations of the gastrointestinal system are still important health issues in the twenty-first century. They are mostly encountered in underdeveloped or developing countries. Taenia saginata (4-12 m long) and Taenia solium (3-7 m long) are two common cestode species. T. saginata is the most frequently found genus in Turkey, and cases occur particularly in the southeastern region. For T. saginata, cattle are the intermediate hosts where larval development takes place, while humans are definitive hosts harboring the adult worm. T. saginata is transmitted to cattle through human faeces or contaminated fodder, and to humans through uncooked or improperly cooked beef. Previously reported cases have described a number of taenia-related complications that are usually identified during surgery. These include: acute appendicitis, Meckel's diverticulitis, pancreatitis, cholecystitis, liver abscess, obstruction and perforation of the intestine, and anastomotic leakage [1].
We report an interesting case of a vital 2.4 m long tapeworm which emerged from the evisceration of a midline incision, without any intestinal perforation, after emergency surgery in a patient with terminal gastric cancer. Case Report A 55-year-old male patient was admitted to our emergency department with a two-day history of abdominal pain. He had a previous history of stage IV gastric cancer and was receiving palliative chemotherapy. The patient's medical condition was critical and his personal hygiene was poor. On physical examination, generalized abdominal tenderness, guarding and rebound tenderness were detected. His hemoglobin was 9.4 g per 100 mL, total leucocyte count was 5200/mm3, with a differential count revealing 93.3% neutrophils, 0.2% eosinophils, 3.1% lymphocytes and 3.2% monocytes. The platelet count was 239,000/mm3. C-reactive protein was 21.3 mg per 100 mL and plasma albumin was 1.7 g per 100 mL. Other laboratory values were normal. A plain chest X-ray revealed subdiaphragmatic free air (Figure 1). The patient underwent emergency laparotomy with an impression of a hollow viscus perforation. Operative findings showed gastric cancer in the antrum with a 6-7 cm perforation of the anterior gastric wall. The tumor had invaded the adjacent organs. Multiple metastatic lesions and another mass leading to obstruction in the proximal rectum were identified. The abdomen was intensely contaminated by digested food residues. Given the size of the perforation, a total gastrectomy without anastomoses was performed for organ removal instead of a primary repair. The esophagus and duodenum were sutured and closed primarily. Damage control and diversion surgery, including a feeding jejunostomy, end sigmoid colostomy and peritoneal lavage, were done. The abdomen was closed primarily. Enteral tube feeding was started on postoperative day (POD) one. On the fifth POD, the patient's midline incision eviscerated and a vital, moving taenia emerged, with abundant particulated fluid, from the incision line (Figure 2). The cestode was removed carefully in a single piece and thereafter an urgent second-look laparotomy was performed. During exploration, there was no other perforation site in any intestinal loop, but a relaxation of the purse stitch was detected around the feeding tube's jejunal insertion. The patient received a single dose of niclosamide (4×500 mg) postoperatively. On postoperative inquiry, the surgical team who performed the first emergency operation declared that no mobile cestode had been seen in the abdominal cavity during the procedure. Parasitological evaluation confirmed T. saginata, and histopathological examination revealed synchronous gastric and rectal cancer. The patient was intubated during the postoperative period, and his septic parameters did not improve. He underwent surgery twice for lavage of the abdominal cavity and died on POD 16 due to the underlying disease. Discussion and Conclusions This case report is the first to describe an extraluminal manifestation of a tapeworm in a midline incision from evisceration without intestinal perforation. The cestode used an "open gate", namely the relaxed purse stitch of the feeding jejunostomy; this has not been described in the medical literature before.
T. saginata infection can be asymptomatic for a long period. Symptoms like weight loss, abdominal pain, vomiting, nausea, constipation or diarrhea and, rarely, mechanical intestinal obstruction can occur [5]. Taeniasis is usually treated with praziquantel (10-20 mg/kg, single administration) or niclosamide (2 g, single administration). Surgery is recommended only for the treatment of complications [1]. The mortality rate in small intestinal perforation due to infection is mostly related to the primary disease of the patient and may reach up to 42% [6]. On admission to the emergency unit, the patient was in a severe medical condition. He was in the terminal period of gastric cancer and in an immunosuppressed state as a result of the chemotherapy treatment he was receiving. A fast and palliative procedure was performed with standardized steps. Unfortunately, after the removal of the tapeworm, the patient's septic parameters did not resolve and his condition deteriorated. There are a few distinct case presentations describing tapeworm infestations requiring surgery. Hakeem et al. [7] presented a case of gall bladder perforation in 2012. A case of colonic anastomotic leakage related to T. saginata infestation following a right hemicolectomy procedure was reported by Sozutek et al. [1] in 2011. Another report describing a rare case of T. solium peritonitis with multiple ileal perforations was presented by Faheem et al. [8]. Perforation and anastomotic leakage are well-known, previously described complications of taeniasis. The special feature of our case is that the only probable route for extraluminal outflow was a "man-made orifice", the relaxation of the purse stitch around the feeding jejunostomy, since the second laparotomy revealed no other perforation site in the bowel. This is an unusual clinical case of an extraluminal manifestation of a tapeworm in a midline incision from evisceration related to T. saginata, without any intestinal perforation other than the relaxed purse stitch of the feeding tube. Although surgery is by any measure regarded as the definitive treatment for all kinds of complications, more effort focusing on preventive measures should be made.
2017-04-19T16:55:34.562Z
2015-04-15T00:00:00.000
{ "year": 2015, "sha1": "aef3d6c0f8d8114740ebf8c30aa6321e6662cbab", "oa_license": "CCBY", "oa_url": "https://jidc.org/index.php/journal/article/download/25881535/1292", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "aef3d6c0f8d8114740ebf8c30aa6321e6662cbab", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
59040499
pes2o/s2orc
v3-fos-license
Optimization of Vehicle Routing in Simultaneous Pickup and Delivery Service: A Case Study This case study considers the vehicle routing problem with pickup and delivery, a generalization of the capacitated vehicle routing problem (CVRP). The vehicle routing problem with pickup and delivery (VRPPD) arises whenever pickup demand and delivery demand are to be satisfied by the same vehicle. The problem is encountered in many real-life situations. The problem addressed in this paper arises from the distribution of beverages and the collection of recyclable containers. It can be modeled as a variant of the vehicle routing problem with a heterogeneous vehicle fleet, capacity and volume constraints, and an objective function minimizing the total travel distance required to serve customers at different locations. Three construction heuristics and an improvement procedure are developed for the problem, designing a set of routes with minimum cost to serve a delivery of beverages and a collection of recyclable material with a fleet of vehicles. The aim of this paper is to develop a vehicle routing problem (VRP) model that addresses simultaneous pickup and delivery in beverage distribution. To this effect, a mathematical model is adopted and fitted with real data collected from the MOHA Soft Drinks Summit Plant located in Ethiopia, and solved using the Clarke-Wright saving algorithm. The from-to distance is computed from data collected from Google Earth together with the customer data from MOHA. The findings of the study show that the model is feasible and shows an improvement compared to the current performance of the plant with respect to product distribution and collection; the total distance covered is reduced by about 27.97%. The average performance of the model shows that on average 5 routes are required to serve customers' demands. INTRODUCTION Distributors' service routes were designed to fulfill orders from stores, bars, and restaurants. The sequences of the routes vary with the seasons. A variety of routes is used in each season to meet changes in demand. Vehicles serving the routes are constrained by time windows during which they must deliver the orders originating from customers. The vehicles are limited by the load capacity as determined by packaging sizes. A distribution system (DS) refers to the movement of finished goods outward from the central finished goods store to the customer, frequently via intermediaries. It subsumes the delivery of finished products to customers through the distribution networks. The activities within the distribution system include warehousing, transportation (often undertaken by third-party logistics providers), customer service and administration. A product may pass through a number of intermediate warehouses before reaching the customer, or it can be sold directly to the customer from the central finished goods store. Therefore, a product can follow different distribution systems in order to arrive in the customer's hands. Product delivery is the ability to manage distribution channels to ensure the timely delivery of products (beverages) to the customers, and its activities refer to choosing the distribution channel used to deliver the products. Product response logistics has three primary activities: managing cycle (waiting) time, managing product capacity, and providing product delivery. Managing waiting time refers to methods used to reduce the time customers must wait to consume the products, i.e.
to reduce order cycle time in the flow of goods. Product capacity is defined as the managing, scheduling, and staffing of people and products so that the product response logistics network can meet customer demands. The main purpose of this study is to develop a daily truck routing and scheduling model which improves the delivery and pick-up efficiency of the company. The scheduling will be done in such a way that it can satisfy the orders (both pick-up and delivery) while, in the meantime, attempting to improve the quality of service with respect to distance and/or cost. This case study focuses on the routing optimization of a beverage company that needs reverse logistics for recyclable materials, which must be returned to the plant for the re-filling of the beverages, by applying a particular variant of the VRP known as the pickup and delivery problem. The research mainly gives attention to route optimization and vehicle assignment to respective destinations based on the minimum distance and cost of transportation under capacity constraints. LITERATURE REVIEW The vehicle routing problem (VRP) is often described as the problem in which vehicles based at a central depot are required to visit a set of geographically dispersed customers in order to fulfill known customer demands. The objective is to construct a low-cost, feasible set of routes, one for each vehicle. A route is a sequence of locations that a vehicle must visit along with an indication of the service it provides. The vehicle must start and finish its tour at the depot (Marinakis and Migdalas, 2002). The vehicle routing problem (VRP) plays a central role in distribution management and has for a long time attracted attention in the field of Operational Research. The classic problem can simply be described as the problem of designing a set of routes from a depot to a set of locations (Holborn, 2013). A vehicle routing problem (VRP) can be considered as an m-TSP where a demand is associated with each city. In a VRP a number of vehicles are stationed at one or several depots and these vehicles need to visit a number of geographically dispersed clients. A specific set of routes for each of the vehicles has to be determined in order to serve all the clients (Zhou, 2010). According to statistics, fuel cost can take up about 50% of the total operating costs for any model of vehicle, and this ratio will undoubtedly tend to increase as international oil prices climb (Yu and Wang, 2013). VRP aims at planning the routes of a fleet of vehicles on a given network to serve a set of clients under side constraints, and a reasonable route plan can reduce transportation cost and energy consumption effectively (Zhifeng Langa, 2014). Given the large amount of published research tackling the various types of VRPs, it becomes increasingly difficult to keep track of all problem types and underlying solution methods. However, a general classification of the basic routing and scheduling problems and their interconnections is shown in Figure 1. Two main problem categories are distinguished in this figure, based on the observation that routing and scheduling problems are usually represented as graphical networks, in which pickup and/or delivery points are represented as nodes connected with line segments called arcs. The first category is called node routing problems, in which the service demand is associated with nodes (locations).
The second category is called arc routing problems, in which the service demand is associated with the arc connecting two nodes (Gendreau, 2005). The Vehicle Routing Problem with Pickup and Delivery (VRPPD) is a variant in which the possibility that customers return some commodities is contemplated, so it is necessary to take into account that the goods that customers return to the delivering vehicle must actually fit into that vehicle (El-Sherbeny, 2010). METHODOLOGY The study addresses the existing product distribution and its performance based on secondary data. Related operational and financial data are examined to evaluate the nature of the problems affecting the existing network or routes and the number of vehicles assigned to each route. Based on the existing routes, and by considering the number of boxes of beverages in each route, an improved model is adopted to serve in an optimum way the customers in a given route. The improvement made was analyzed against the current performance of the enterprise. This approach provides an improved solution for MOHA soft drinks in the Summit plant to serve the customers' demand within the improved routes. Finally, based on considerations of simultaneous pickup and delivery and the distance between the origin and destination of routes, a new model is developed for one of the Summit distribution centers. The solution of the new VRP model may propose new routes which may be different from the existing ones. If the improvement is significant, any beverage company that operates by distributing products and collecting empty bottles can adopt it over time by implementing the findings of this model. In a real situation, the distances should be the actual mileage travelled by vehicles in the distribution operations. Since the distances can be measured on a road map, although this is very time-consuming and could not be undertaken for large applications, they are computed here by taking the approximate longitude and latitude of the origin and destination of each route in the Summit depot. This is because the adopted model is designed to minimize the distance traveled. The research framework of this paper is designed based on consideration of the concepts of different theories, VRP/SDPVRP models and the real-world situation in beverage industries. The research framework is shown in Figure 2 and the framework of the VRP software in Figure 3. The following assumptions were also considered in designing the research framework.
1. The customers' demands on each route (delivery and pick up) of the given depot were collected. This helps to model the demands at each node or customer destination.
2. The number of routes and their characteristics, the number of vehicles used to distribute products and collect recyclable materials, and the number of products per route are collected from the route performance report of the given company.
3. The current operational and financial performance of the case company was analyzed, and an LP model was developed for the existing system to determine the optimum number of vehicles for each route and time period or shift.
4. The capacity of the vehicles (during route selection and assigning of vehicles) has also been taken into consideration. Each vehicle can carry an optimum number of beverages with their cases.
5. By considering the nature of the customers, a new route design was produced with the SDPVRP approach that takes into consideration real simultaneous pickup and delivery of beverage products.
The methodology employed different approaches to achieve its objectives.
The following methodologies were used to secure both the quantitative and qualitative and/or primary and secondary data. Necessary data were then either collected or simulated depending on their availability. This may include the generation of simulation data from the already available data. The different methodologies used in this paper are explained below. In the literature review, different articles, proceedings, books, manuscripts, websites and other materials were surveyed in order to assess the state of the art of distribution problems together with vehicle scheduling models and different variants of VRPs. The literature review examined different working models and algorithms and modified them to suit the distribution nature of the beverage company considered. For the purpose of this study, the theory of VRP with some of its major variants was also surveyed so as to identify the appropriate model that can best serve the context of the case company, such as the nature of demand and vehicle types. Apart from the literature review, various secondary data which are relevant to the study were also collected from the case company to validate the model. The primary data are a real-life instance of VRPSDP originating from the case company used to validate the model, interviews with the responsible persons from the distribution department of the case company, and direct observation of the company's distribution activities. In order to test and simulate the models developed, secondary data were collected. The secondary data required are the current routes and the number of delivered beverages and picked-up recyclable materials in each route for a period of 6 months of year 2009 E.C. Simulation software such as MS-Excel and VRP software was used to simulate the demand distribution; given the simulated demand, alternative solutions were generated. This was done by changing the different parameters of the model as well as the demand for different routes at different times on a number of routes, taking a unique or single distribution center as the depot. MODEL DEVELOPMENT AND VALIDATION The model development in this paper considers a VRP with real simultaneous pickup and delivery, with each customer location considered as a node, and the pickup and delivery demand at each point determined. In this model formulation a generic simultaneous pick-up and delivery vehicle routing problem (VRPSPD) is considered; especially in the bottled drink routing problem, the delivery and pick-up system is based on a single depot so as to serve a set of customers with a homogeneous fleet of vehicles, where the customers also request the pick-up of the empty boxes or packs of the drinks. The vehicle has a limited capacity of Q = 120 cases. If the total number of cases (the cumulative difference between delivered and collected, that is, di - pi) exceeds the vehicle capacity, route failure is said to occur, but no penalty cost is assumed in this case. At each stop where di - pi > 0, which indicates that more products are delivered than collected (decreasing vehicle capacity), the vehicle capacity is updated to Q - (di - pi) and (di - pi) becomes the net demand at that node. Whereas when di - pi < 0, which means more products are collected than delivered (increasing vehicle capacity), the vehicle capacity is updated to Q + (di - pi) and the demand will be zero. Route: A route must start at the depot, visit a number of customer locations (nodes) and return to the depot.
This assumption is very important when searching for possible routes that can minimize the total distance. Distance to be minimized: let A = {(i, j) : i, j ∈ V, i ≠ j} be the set of arcs joining the nodes and let the nonnegative matrix C = {cij : i, j ∈ V, i ≠ j} denote the traveling distances between nodes i and j. Further, it is assumed that the distance matrix C is symmetric and satisfies the triangle inequality. The from-to distances of a sample of origin-destination pairs are computed from the digital latitude and longitude data using a great circle computation. Solving the model The LP model developed is solved based on the average daily demand for the last 6 months in a single operating shift. The daily customers' demand for the last 6 months was collected and then the average daily beverage demand of each month was computed. Model input parameters: to run and test the model, different input parameters that have to be substituted into the model are required. These inputs are either collected or generated/computed. Customers' demands: the customers' demands are collected from the distribution department of the Summit plant and the depot agents of the plant. The net customer demand is calculated by subtracting the number of collected recyclable materials in boxes from the total number of delivered beverage boxes at each customer location. From-to distance matrix: the from-to distance input parameters for each location are computed by taking the longitude and latitude of each point, using the great circle distance formula which accounts for the spherical shape of the Earth. Each cij is defined as the distance from i to j, which can be directly considered as the cost associated with transporting the beverages and recyclable materials, including depot 1. Further, it is assumed that the distance is symmetric, that is, cij = cji and cii = 0. The other inputs are the standard capacities of the trucks to be considered in the model. The model considers a truck capacity of 6 in unit volume, or 180 cases of beverages. This capacity is determined based on the internationally allowable capacity of loading trucks with their dimensions, although the Summit PEPSI depot currently considers the capacity of trucks as 120 boxes of beverages, or 4 in unit volume, which amounts to 66.66% utilization. The total number of operational trucks used for local product distribution inside Addis Ababa city around the Bole sub-city is 18. A heuristic procedure developed for the classical VRP has been extended to solve the VRPSPD developed above. The heuristic adopted is the Clarke-Wright algorithm. The Clarke-Wright algorithm is iteratively repeated for each node as a starting node with the objective of improving the quality of the solution. The following assumptions were considered when solving the model:
- All routes start and end at the node of origin, also known as the depot.
- Each node is visited exactly once by a vehicle.
- The cumulative demand along the route, and the demand at any node, shall never exceed the vehicle capacity Q.
- All vehicles have the same capacity and are stationed at the node of origin.
- Split delivery is not permitted.
- Each vehicle makes exactly one trip.
The task is to determine a route for each vehicle so as to serve a set of nodes such that the total distance traversed is minimal. The initial solution using the Clarke-Wright algorithm is obtained using the following steps (a minimal code sketch of this construction is given after the Conclusion): 1. Select the warehouse as the central city. 2. Calculate the savings sij = ci1 + cj1 - cij for all pairs of cities (customers) i, j (i = 1, 2, ..., n; j = 1, 2, ..., n; i ≠ j). 3.
Order the savings, sij, from largest to smallest. 4. Starting with the largest savings, proceed as follows. Using the Clarke-Wright savings algorithm, a route is constructed by incrementally selecting customers along the nodes until the cumulative customer demand reaches the vehicle capacity or all customers are visited. Initially each vehicle starts at the depot, Summit (v1), with full capacity (Q = 120 cases) and an empty set of customers included in the tour. The algorithm selects the next customer to visit from the list of feasible locations, and the capacity of the vehicle is updated before another location is selected. The vehicle returns to the depot (Summit) when the capacity constraint of the vehicle is met or when all the customers at each location are served. Finally, the total minimum distance is computed from the cij as the objective function value for the complete route of the vehicle. Model Output The Clarke-Wright savings algorithm constructs a complete tour for the first vehicle prior to the second starting its tour. The procedure continues until all the customer locations are included in the tours or until all the demands are served. The first run of the algorithm terminates with 6 possible routes. Since the number of vehicles is not limited, it is determined by the number of routes formed. The algorithm first solves the problem using the nearest-saving heuristic to create sub-cycles that are both delivery- and pick-up-feasible at each node with vehicle capacity Q = 120 cases. This gives six routes. It was observed that all six routes are load-feasible in terms of vehicle capacity and can be used as a feasible solution. If route one is considered, the tour is formed from depot 1, visits the last customer at location point 17 and returns back to the depot. The overall tour, or sequence of travel, for the route is: 1 - 2 - 12 - 13 - 10 - 19 - 17 - 1. Along the tour, the vehicle carried 113 beverage cases and traveled a total distance of 4.59 km. The total demand in each route is less than the vehicle capacity. The overall summary of the first run shows that the trucks travel an average distance of 21.15 km and carried 581 cases of beverages along the routes; the run was performed with 0.04 seconds of CPU time. Figures 4 and 5 show this run in comparison with the existing and improved solutions. Solution Improvement After obtaining an initial solution based on the Clarke and Wright method, local search improvement heuristics were deployed based on (i) intra-route and (ii) inter-route operations to improve the solution. While applying the improvement heuristics, instead of choosing the best pairing of routes at each step, a pair of routes is selected randomly so as not to be trapped in a local optimum, and the best overall solution is chosen. Improvement local search heuristics are known by their shorthand notation k-Opt (or r-Opt), with values of k (or r) = 2 or 3 popularly accepted. For the implementation of the improvement heuristics, the 2-Opt and Or-Opt heuristics are used. 2-Opt The 2-Opt operation is a way of improving an existing solution. It involves swapping a pair of edges between any four nodes to reduce the distance in the VRP and the objective function. The algorithm involves looping through all pairs of edges. The first switch that reduces the objective function is accepted and the loop is ended. The swaps can be between routes (inter-route) or within a route (intra-route).
If the swap is within a route, then the total drop-off amount in the route remains the same; however, the order in which the vehicle visits the customers is changed. If the swap is between routes, then the total drop-off amount in each route can change. The swaps with maximum gain are exchanged. The 2-opt algorithm basically removes two edges from the tour and reconnects the two new sub-tours created. This is often referred to as a 2-opt move. The 2-opt method returns local minima in polynomial time. It improves the tour by reconnecting and reversing the order of a sub-tour. Or-Opt As a modification of k-Opt, and being a subset of 3-Opt, the Or-Opt algorithm has an advantage over 3-Opt: in 3-Opt the direction of a route must be reversed, with the possibility of obtaining an infeasible solution, whereas the Or-Opt operator can keep the route direction and guarantee a feasible solution. Hence Or-Opt is a partial exchange of the 3-Opt operator, i.e. it moves a section of a route (one, two or three consecutive points) between two points. The best solution is selected and the improvement heuristics are run on it. Model Validation The outputs of the model are evaluated using different performance-measuring parameters. Distance Coverage The average total distance covered per day per depot for the existing system is 146.2 kilometers, while for the improved system it is about 105.3 kilometers per day for six trucks. This shows a reduction of 27.97% in the daily distance coverage to serve the same number of customers. The total kilometers covered for the given depot per day is the sum of the kilometers covered during the given period of time. Figure 6 shows the total distance coverage of the current and improved systems for a single depot. Operating Cost The improvements made were also validated using the different operating costs of the plant. The total daily operating cost for each route is the sum of the operating costs for the given time. From the comparison shown in Figures 4 and 5, the findings show that the average daily operating cost for the existing system is 27,301.6 ETB, while for the improved system it is 15,798.22 ETB. As shown in Figure 7, the improvements made by the new system are achieved in nearly all the operating costs of the enterprise. Truck Utilization The average daily truck utilization of the existing and improved systems is shown in Figure 8. The findings show that the improved system for the case depot has better truck utilization than the existing one. The average truck utilization for the improved system is about 96.8% with a smaller number of routes, while for the existing system it is 80.69%, an improvement of about 16.14 percentage points at the maximum. Table 1 presents the improved model. CONCLUSION This study solved a complex real-life problem arising in the distribution of beverages. A distinguishing feature of this problem is that vehicles must deliver goods and pick up recyclable materials at customer locations. A model for this problem was adopted as a mixed integer program, and three heuristics were proposed for its resolution. A construction heuristic and improvement heuristics were developed. These were tested on a real-life case from the MOHA soft drinks Summit plant with fifteen iterations and randomization. Results obtained on the real-life case show that a 27.97% distance reduction can be achieved.
The minimized total distance for product distribution operations translates into reduced operating costs and better utilization of resources. The findings of the study show that the current scheduling system of MOHA soft drinks performs poorly on truck utilization, operating cost and daily trips with respect to the standard. However, the adopted model shows a clear improvement over the current system in nearly all the operating cost parameters. The current scheduling system has an average utilization of 80.69%, while the improved system has 96.8% utilization.
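The following self-contained sketch, referred to in the construction steps above, shows the main computational ingredients described in this paper: great-circle (haversine) distances from latitude/longitude, a simplified Clarke-Wright savings construction with a capacity check on the net demand (delivery minus pickup), and a first-improvement 2-opt pass. All coordinates, demands and the capacity value are illustrative placeholders, not the MOHA Summit plant data, and the merge step checks only one route orientation for brevity.

import math
from itertools import combinations

def haversine_km(a, b):
    """Great-circle distance in km between two (lat, lon) points in degrees."""
    R = 6371.0
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * R * math.asin(math.sqrt(h))

def distance_matrix(coords):
    n = len(coords)
    return [[haversine_km(coords[i], coords[j]) for j in range(n)] for i in range(n)]

def clarke_wright(dist, net_demand, capacity):
    """Savings construction; node 0 is the depot, net_demand[i] = delivery - pickup."""
    n = len(dist)
    routes = {i: [i] for i in range(1, n)}      # initially one route per customer
    load = {i: net_demand[i] for i in range(1, n)}
    owner = {i: i for i in range(1, n)}         # customer -> key of its route
    savings = sorted(((dist[i][0] + dist[0][j] - dist[i][j], i, j)
                      for i, j in combinations(range(1, n), 2)), reverse=True)
    for s, i, j in savings:
        ri, rj = owner[i], owner[j]
        if ri == rj or s <= 0:
            continue
        # Merge end-to-end only, respecting the vehicle capacity on the merged route;
        # a full VRPSPD check would also track the load along the route as in the
        # Q - (di - pi) update rule described in the text.
        if routes[ri][-1] == i and routes[rj][0] == j and load[ri] + load[rj] <= capacity:
            routes[ri] += routes[rj]
            load[ri] += load[rj]
            for c in routes[rj]:
                owner[c] = ri
            del routes[rj], load[rj]
    return [[0] + r + [0] for r in routes.values()]

def two_opt(tour, dist):
    """First-improvement 2-opt: reverse a segment whenever it shortens the tour."""
    improved = True
    while improved:
        improved = False
        for a in range(1, len(tour) - 2):
            for b in range(a + 1, len(tour) - 1):
                delta = (dist[tour[a - 1]][tour[b]] + dist[tour[a]][tour[b + 1]]
                         - dist[tour[a - 1]][tour[a]] - dist[tour[b]][tour[b + 1]])
                if delta < -1e-9:
                    tour[a:b + 1] = reversed(tour[a:b + 1])
                    improved = True
    return tour

# Illustrative run: a depot and three customers around Addis Ababa (placeholder points).
coords = [(9.005, 38.800), (9.012, 38.812), (8.991, 38.790), (9.020, 38.781)]
net = [0, 40, 70, 50]                       # net demand in cases (delivery - pickup)
dist = distance_matrix(coords)
tours = [two_opt(t, dist) for t in clarke_wright(dist, net, capacity=120)]
total = sum(dist[t[k]][t[k + 1]] for t in tours for k in range(len(t) - 1))
print(tours, round(total, 2))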
2019-02-19T14:08:21.288Z
2018-01-01T00:00:00.000
{ "year": 2018, "sha1": "119ae1042ec47eb70c0c7d23dae9e063bc1b9872", "oa_license": null, "oa_url": "https://doi.org/10.31695/ijasre.2018.33007", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "19f2befefb5edace0ad5366a5d892303e19fddab", "s2fieldsofstudy": [ "Engineering", "Business" ], "extfieldsofstudy": [ "Computer Science" ] }
266215904
pes2o/s2orc
v3-fos-license
Effect of hypoalbuminemia on mortality in cirrhotic patients with spontaneous bacterial peritonitis Objectives: The impact of hypoalbuminemia on the short-term and long-term mortality of cirrhotic patients with spontaneous bacterial peritonitis (SBP), both with and without renal function impairment, remains insufficiently elucidated based on population-based data. Materials and Methods: We retrieved data from Taiwan's National Health Insurance Database encompassing 14,583 hospitalized patients diagnosed with both cirrhosis and SBP during the period from January 1, 2010, to December 31, 2013. Prognostic factors influencing 30-day and 3-year survival were computed. Furthermore, the impact of hypoalbuminemia on the mortality rate among SBP patients, with or without concurrent renal function impairment, was also assessed. Results: The 30-day mortality rates for patients with SBP, comparing those without hypoalbuminemia and those with hypoalbuminemia, were 18.3% and 29.4%, respectively (P < 0.001). Similarly, the 3-year mortality rates for SBP patients without hypoalbuminemia and with hypoalbuminemia were 73.7% and 85.8%, respectively (P < 0.001). Cox proportional hazard regression analysis, adjusted for patients' gender, age, and comorbid conditions, substantiated that individuals with hypoalbuminemia exhibit inferior 30-day survival (hazard ratio [HR]: 1.62, 95% confidence interval [CI]: 1.51-1.74, P < 0.001) and reduced 3-year survival (HR: 1.57, 95% CI: 1.50-1.63, P < 0.001) in comparison to those lacking hypoalbuminemia. Among SBP patients with renal function impairment, those presenting hypoalbuminemia also experienced diminished 30-day survival (HR: 1.81, 95% CI 1.57-2.07, P < 0.001) as well as reduced 3-year survival (HR: 1.70, 95% CI 1.54-1.87, P < 0.001). Likewise, in SBP patients without renal function impairment, the presence of hypoalbuminemia was associated with poorer 30-day survival (HR: 1.54, 95% CI 1.42-1.67, P < 0.001) and 3-year survival (HR: 1.53, 95% CI 1.46-1.60, P < 0.001). Conclusion: Among cirrhotic patients with SBP, the presence of hypoalbuminemia predicts inferior short-term and long-term outcomes, regardless of renal function.
Introduction
Liver cirrhosis (LC) typically arises as a consequence of chronic infection with hepatitis B, chronic hepatitis C, and/or alcoholism [1]. LC often progresses to conditions such as hepatocellular carcinoma (HCC) or severe complications including spontaneous bacterial peritonitis (SBP), esophageal variceal bleeding (EVB), and hepatic encephalopathy (HE) [2]. The incidence of SBP in cirrhotic patients is approximately 10% [3]. The diagnosis of SBP is established by the presence of an absolute polymorphonuclear leukocyte count of >250 cells/mm3 in the ascites fluid. Therefore, SBP predominantly manifests in cirrhotic patients exhibiting ascites. Little is known, however, about how hypoalbuminemia affects the prognosis of individuals with SBP. Furthermore, the influence of hypoalbuminemia on the mortality of SBP patients, both with and without concurrent renal function impairment, remains to be thoroughly assessed. As far as our knowledge extends, there is a lack of adequate population-based studies aimed at evaluating the prognosis of cirrhotic patients afflicted with both SBP and hypoalbuminemia, and the extent of hypoalbuminemia's effect on the mortality rate of SBP patients with or without renal function impairment remains uncharted territory. Utilizing an extensive nationwide population-based dataset from Taiwan, our objective is to ascertain the near-term and extended prognostic implications for cirrhotic patients affected by SBP and concurrent hypoalbuminemia. In addition, we seek to elucidate the impact of hypoalbuminemia on the mortality rate among SBP patients, regardless of the presence or absence of renal function impairment.

Database and ethical statement
Taiwan's National Health Insurance (NHI) system has been in operation for over two decades, offering coverage to over 99% of the population and ensuring equitable health care access for all. Comprehensive medical services for illnesses, injuries, and childbirth are available to individuals holding an NHI card when seeking treatment at medical facilities. The Taiwan NHI Research Database (NHIRD) meticulously archives all medical records, encompassing data such as International Classification of Diseases, 9th or 10th Revision, Clinical Modification (ICD-9-CM or ICD-10-CM) codes, prescribed medications, hospitalization expenses, and duration of hospital stays. The Taiwan NHIRD has emerged as a pivotal resource, facilitating a multitude of research endeavors [7,8].

The Buddhist Tzu Chi Medical Foundation (TCMF-A 109-01) and the Institutional Review Board of the Buddhist Dalin Tzu Chi Hospital (IRB B10403026) granted approval for the implementation of this study. Given that all data within the NHIRD have undergone de-identification, the review committee exempted us from the requirement of obtaining informed consent from the patients.
Study sample
The database was queried for patients discharged between January 1, 2010, and December 31, 2013, with a primary or secondary diagnosis of cirrhosis (ICD-9-CM code 571.5 or 571.2). These specific ICD-9 codes have been employed in previous studies to identify cirrhosis patients in Taiwan. Subsequently, within this cohort, individuals with SBP were identified using ICD-9-CM codes 567.2, 567.8, or 567.9. If a patient experienced multiple hospitalizations for SBP within the study period, only the initial episode was considered for analysis. In this study, the definition of hypoalbuminemia is based on the application for health insurance-covered albumin medication, rather than on ICD-9 coding. This choice was made because relying on ICD-9 coding to determine the presence of hypoalbuminemia can be relatively inaccurate, and it cannot be assumed that every physician uses the same definition of hypoalbuminemia. According to the NHI Administration's (NHIA) principles for providing albumin medication, one of the criteria is that patients with cirrhosis also have ascites and blood albumin levels <2.5 g/dL. Since all the patients in our study had SBP, by definition, they all had ascites. Therefore, when these patients applied for health insurance coverage for albumin medication, their blood albumin levels should all have been <2.5 g/dL. Hence, in our study, we defined hypoalbuminemia as a blood albumin level <2.5 g/dL on this basis. On the other hand, because albumin is a relatively expensive medication in Taiwan, most hospitals require clinical physicians to provide confirmation when prescribing albumin. Therefore, we believe that those classified as having hypoalbuminemia in our study would indeed have blood albumin levels below 2.5 g/dL. In clinical practice, administering albumin infusion to cirrhotic patients with hypoalbuminemia and ascites has been shown to alleviate edema and assist in ascites control. To ascertain the 30-day mortality rates of the enrolled patients, the Taiwanese National Mortality Database was consulted. The considered comorbid conditions encompassed alcoholism (ICD-9-CM codes 291, 303, 305.00-305.03, 571.0-571.3), HCC (ICD-9-CM code 155.0), HE (ICD-9-CM code 572.2), renal function impairment (ICD-9-CM codes 584, 585, 586, 572.4, or other procedure codes relevant to renal failure), ascites (ICD-9-CM code 789.5 or procedure code 54.91), and EVB (ICD-9-CM codes 456.0, 456.20). The individuals were categorized into three groups based on socioeconomic status (SES): low SES, medium SES, and high SES. In this study, low SES was defined as a monthly income of less than New Taiwan Dollar (NTD) $20,000 (approximately US$556), medium SES as a monthly income from NTD $20,001 to $40,000 (approximately US$556 to $1,111), and high SES as a monthly income exceeding NTD $40,001 (approximately US$1,111).

To assess the impact of hypoalbuminemia on the mortality rate among SBP patients, considering the presence or absence of renal function impairment, a subgroup analysis was conducted. The 30-day and 3-year mortality rates were computed for patients with hypoalbuminemia, both with and without concurrent renal function impairment. In addition, hazard ratios (HRs) were determined for these respective subgroup cohorts.
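As an illustration only, the cohort-selection logic described above can be summarized in a few lines of code. This is a hypothetical sketch against an assumed claims table (the column names patient_id, admission_date, and icd9_codes are invented for the example), not the authors' actual NHIRD workflow.

import pandas as pd

CIRRHOSIS = {"571.5", "571.2"}        # cirrhosis, primary or secondary diagnosis
SBP = {"567.2", "567.8", "567.9"}     # codes used above to flag SBP

def build_cohort(claims: pd.DataFrame) -> pd.DataFrame:
    """claims: one row per hospitalization, with a list of ICD-9 codes per row."""
    has_cirrhosis = claims["icd9_codes"].apply(lambda codes: bool(CIRRHOSIS & set(codes)))
    has_sbp = claims["icd9_codes"].apply(lambda codes: bool(SBP & set(codes)))
    cohort = claims[has_cirrhosis & has_sbp]
    # A patient hospitalized for SBP more than once contributes only the
    # initial episode, as specified above.
    return (cohort.sort_values("admission_date")
                  .drop_duplicates("patient_id", keep="first"))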
Statistical analyses
We performed statistical analyses using the SPSS Statistical Package version 22.0 for Windows, developed by IBM Corp (Armonk, NY, USA). Categorical variables were compared using the Chi-square test, while continuous variables were analyzed using Student's t-test. To assess the comorbid factors associated with LC, we employed a proportional hazards Cox regression model for survival analysis. The outcomes were presented as HRs along with their corresponding 95% confidence intervals (CIs). P < 0.05 was considered statistically significant for all analyses.

Results
We gathered data from Taiwan's NHIRD, encompassing 14,583 patients diagnosed with both cirrhosis and SBP who were admitted to hospitals between January 1, 2010, and December 31, 2013. The fundamental data are detailed in Table 1. In general, among cirrhotic patients with SBP, the mortality rates within 30 days and over a span of 3 years were 21.9% and 77.7%, respectively.

In the subgroup analysis of SBP patients, the 30-day mortality rates for patients with renal function impairment, with and without hypoalbuminemia, were 51.6% and 32.0%, respectively (P < 0.001). Correspondingly, the 3-year mortality rates for patients with renal function impairment, with and without hypoalbuminemia, were 94.1% and 85.2%, respectively (P < 0.001). Moreover, the 30-day mortality rates for patients without renal function impairment, with and without hypoalbuminemia, were 24.8% and 16.3%, respectively (P < 0.001). The corresponding 3-year mortality rates for patients without renal function impairment, with and without hypoalbuminemia, were 84.1% and 72.1%, respectively (P < 0.001).

Discussion
This study underscores the significance of hypoalbuminemia as a crucial prognostic indicator in cirrhotic patients afflicted with SBP, influencing both near-term and prolonged mortality outcomes. Remarkably, this association holds true irrespective of the degree of renal function. Consequently, in clinical settings, health-care practitioners must be vigilant to the heightened risk among SBP patients, particularly those with hypoalbuminemia, even when their renal function appears relatively sound.

Albumin is produced by the liver [4]. However, the concentration of albumin in the bloodstream is influenced by multiple factors, including the rates of catabolism, synthesis, clearance, and distribution [9-11]. Hypoalbuminemia may also serve as an indicator of diminished liver reserve. Impaired liver reserve is closely linked to elevated mortality rates in cirrhotic patients afflicted with SBP. Consequently, the association between hypoalbuminemia and unfavorable near-term and extended outcomes among SBP patients is to be expected. Moreover, instances of inflammation, stemming from factors such as infection, trauma, or surgical procedures, have been demonstrated to reduce the synthesis of albumin [12].
Table 4: Adjusted hazard ratios of the risk factors for 30-day and 3-year mortality in cirrhotic patients with spontaneous bacterial peritonitis but no renal function impairment

Several possible reasons help explain why hypoalbuminemia increases the risk of mortality in SBP patients. First, the quantity of albumin in the blood to some extent reflects liver function; in other words, low albumin levels partly indicate liver dysfunction. This can explain the increased mortality rate among such patients. Second, albumin plays a role in maintaining blood volume and fluid balance within the body. Hypoalbuminemia can lead to the accumulation of fluid in the abdominal cavity, causing ascites in patients with cirrhosis, which increases the risk of infection. These factors may exacerbate the symptoms of SBP, further increasing the mortality rate. Finally, studies support the role of albumin in the treatment of SBP, especially in patients with impaired kidney function [13]. In addition, in decompensated cirrhotic patients, albumin therapy has been shown to reduce systemic inflammation and cardiocirculatory dysfunction [14]. All of these factors indicate that higher blood albumin levels contribute to improved survival in such patients.

Hypoalbuminemia has been extensively documented as an adverse prognostic factor in various medical conditions [15-19]. For instance, Viasus et al. emphasized the significance of serum albumin levels within 24 h of admission as a crucial prognostic indicator for individuals diagnosed with community-acquired pneumonia [15]. Hypoalbuminemia is also recognized as an indicator of an unfavorable prognosis in patients with severe sepsis [20]. Hence, it comes as no surprise that the combination of SBP infection and hypoalbuminemia is associated with an unfavorable prognosis. The heightened mortality risk observed in SBP patients with hypoalbuminemia underscores the potential advantages of proactive albumin administration for such individuals. Prior research has demonstrated that supplementing with albumin can lead to reduced mortality rates and a decreased incidence of acute kidney injury (AKI) in SBP patients [21,22]. In a particular study, the implementation of a targeted albumin order set for high-risk SBP patients resulted in a significant reduction in both mortality rates and the occurrence of AKI [21]. Our study findings are consistent with these observations. Impaired renal function continues to be a pivotal prognostic factor in SBP patients. Our study's Cox regression analysis revealed that hypoalbuminemia, irrespective of renal function, emerged as a significant prognostic indicator for both short-term and long-term outcomes in SBP patients.
Hypoalbuminemia reveals several noteworthy clinical aspects in cirrhotic patients. Individuals with hypoalbuminemia not only exhibit reduced oncotic pressure, potentially leading to the development of ascites or edema, but also commonly experience malnutrition. It is unsurprising that these conditions can contribute to unfavorable prognoses in such patients. Another significant observation is that hypoalbuminemia often signals inadequate liver reserve. In clinical practice, while albumin infusion may be beneficial for high-risk SBP patients, it does not by itself enhance liver reserve. Our study's outcomes align with these established observations. In this study, the utilization of a national population-based dataset revealed that individuals with SBP and hypoalbuminemia experience inferior near-term and extended outcomes compared to those without hypoalbuminemia. While prior research has indicated that albumin infusion is advantageous for SBP patients with AKI, it is evident that the unfavorable short- and long-term outcomes of SBP patients with hypoalbuminemia persist owing to compromised liver reserve.

The primary limitation of this study pertains to the unavailability of precise serum albumin level data within the dataset. Nevertheless, under Taiwan's NHI, albumin is typically administered when the serum albumin level falls below 2.5 g/dL in cirrhotic patients with ascites. Owing to the absence of laboratory information in this claims-based dataset, pivotal indicators of LC severity, such as the Child-Pugh score or the Mayo Clinic model for end-stage liver disease, could not be ascertained. However, variables pertaining to liver reserve, such as variceal bleeding, HE, or HCC, were included for analysis. In addition, it is conceivable that transient and mild renal function impairment might not have been adequately documented in discharge records. Moreover, renal dysfunction was determined using ICD-9 coding, and we were unable to obtain precise laboratory data; this, of course, is a limitation of our study. Nonetheless, given the potentially reduced clinical significance of mild and transient renal function impairment, our study demonstrated consistently poor survival outcomes among all SBP patients with hypoalbuminemia, irrespective of renal function. Consequently, we maintain that the absence of laboratory data, such as creatinine levels, did not significantly impact the findings of our study. Even so, real-world data will still be needed in the future to validate our findings. Finally, due to the limitation of reporting a maximum of five diagnoses to the NHIA, certain systemic disorders may not have been among the top five diagnoses, which could potentially result in inaccuracies.

Conclusion
In summary, within the context of cirrhotic patients with SBP, the existence of hypoalbuminemia serves as an indicator of adverse near-term and extended prognoses, irrespective of renal function. Acknowledgment of this circumstance can facilitate improved clinical management of these patients and stimulate further investigation into the significance of hypoalbuminemia in the context of cirrhotic patients with SBP.
Affiliations: a) Division of Gastroenterology, Department of Medicine, Dalin Tzu Chi Hospital, Buddhist Tzu Chi Medical Foundation, Chiayi, Taiwan; b) School of Medicine, Tzu Chi University, Hualien, Taiwan; c) Department of Medical Research, Dalin Tzu Chi Hospital, Buddhist Tzu Chi Medical Foundation, Chiayi, Taiwan; d) Department of Mathematics, Tamkang University, New Taipei, Taiwan

Figure 1: Kaplan-Meier survival analysis curves of 3-year mortality in cirrhotic patients with spontaneous bacterial peritonitis, by the presence of hypoalbuminemia.

Figure 2: Kaplan-Meier survival analysis curves of 3-year mortality in cirrhotic patients with spontaneous bacterial peritonitis, by the presence of hypoalbuminemia, and (a) with or (b) without renal function impairment.

Table 2: Adjusted hazard ratios of the risk factors for 30-day and 3-year mortality in cirrhotic patients with spontaneous bacterial peritonitis. HR: Hazard ratio, CI: Confidence interval, EVB: Esophageal variceal bleeding, HCC: Hepatocellular carcinoma, HE: Hepatic encephalopathy, RFI: Renal function impairment

Table 3: Adjusted hazard ratios of the risk factors for 30-day and 3-year mortality in cirrhotic patients with renal function impairment and spontaneous bacterial peritonitis. HR: Hazard ratio, CI: Confidence interval, EVB: Esophageal variceal bleeding, HCC: Hepatocellular carcinoma, HE: Hepatic encephalopathy
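As a side note for readers reproducing this type of analysis: the authors used SPSS, but adjusted Cox proportional hazards models of the kind described above can be fit with standard survival libraries. A minimal, hypothetical sketch with a toy stand-in dataset (column names and values invented; not the authors' code or data):

import pandas as pd
from lifelines import CoxPHFitter

# Toy stand-in for the analytic dataset: one row per patient with follow-up
# time in days, a death indicator, and adjustment covariates as in the Methods.
df = pd.DataFrame({
    "followup_days":   [30, 400, 1095, 12, 250, 1095, 90, 700],
    "died":            [1, 1, 0, 1, 1, 0, 1, 0],
    "age":             [61, 55, 48, 70, 66, 52, 59, 63],
    "hypoalbuminemia": [1, 0, 0, 1, 1, 1, 1, 0],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="followup_days", event_col="died")
cph.print_summary()  # the exp(coef) column gives adjusted HRs with 95% CIs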
A replication-incompetent CD154/40L recombinant vaccinia virus induces direct and macrophage-mediated antitumor effects in vitro and in vivo

ABSTRACT
CD40 triggering may result in antitumor effects of potentially high clinical relevance. To gain insights important for patient selection and to identify adequate targeting techniques, we investigated CD40 expression in human cancer tissues and generated a replication-incompetent recombinant vaccinia virus expressing CD40 ligand (rVV40L). Its effects were explored in vitro and in vivo upon direct CD40 targeting on malignant cells or macrophage activation. CD40 expression was analyzed by immunohistochemistry in tumor and stromal cells in a multi-tumor array including 836 specimens from 27 different tumor types. Established tumor cell lines were used to explore the capacity of rVV40L to induce malignant cell apoptosis and modulate functional profiles of polarized macrophages. CD40 expression was detectable in significantly higher numbers of stromal as compared to malignant cells in lung and breast cancers. CD40 ligation following rVV40L infection induced apoptosis in CD40(+) cancer cells, but only in the presence of an intact specific signal transduction chain. Importantly, rVV40L infection promoted the induction of TNF-α-dependent antitumor activity of M1-like macrophages directed against CD40(-) targets. CD40-activated M1-like macrophages also displayed enhanced ability to CXCL10-dependently recruit CD8+ T cells and to efficiently present cancer cell intracellular antigens through cross-priming. Moreover, rVV-driven CD40L expression partially “re-educated” M2-like macrophages, as suggested by detectable CXCL10 and IL-12 production. Most importantly, we observed that intra-tumoral injection of rVV40L-infected human macrophages inhibits progression of human CD40(-) tumors in vivo. First evidences of anticancer activity of rVV40L strongly encourage further evaluations.

Introduction
CD40 receptor is a 48-kDa type I transmembrane protein of the TNF receptor (TNFR) superfamily, physiologically expressed by a broad range of different cell types, including endothelial, epithelial, and immune cells. Furthermore, cancer cells from nearly all B-cell malignancies and a variety of solid tumors have also been reported to express CD40. 1,2 CD40 ligand (CD40L; CD154), a member of the "TNF ligand family", is expressed on the cell surface of activated helper T-lymphocytes and is released in soluble form by platelets. Triggering of the CD40 receptor expressed by malignant cells has been shown to result in cell growth arrest and apoptosis. These effects have been associated with the recruitment of TNFR-associated factor (TRAF) adapter proteins to the cytoplasmic domain of the CD40 receptor, leading to activation of the pro-apoptotic JNK pathway and cleavage of effector caspases. 1,3 The recently identified NORE1A protein (RASSF5) has also been proposed as an additional critical mediator of apoptosis and cell growth arrest in CD40-triggered tumor cells. 4 Triggering of CD40 on different antigen-presenting cell (APC) types by activated CD4+ T cells expressing CD40L results in their "licensing", promoting highly efficient induction of cytotoxic T lymphocyte (CTL) responses. 5-7 Based on these characteristics, CD40/CD40L interaction has emerged in the past decade as a crucial target for the development of innovative cancer immunotherapy strategies. 2,6,7
Monoclonal antibodies (mAbs) targeting the CD40 receptor and viral vectors encoding CD40L have been tested in animal models and in cancer patients. 6,8-12 Responsiveness to reagents targeting the CD40/CD40L pathway is associated with direct effects on malignant cells and with the induction of adaptive antitumor immunity. 2,6,8-12 Furthermore, T-cell-independent mechanisms, prominently including the activation of tumor-associated macrophages, have also been proposed. 8,9,13

Vaccinia virus (VV) is a viral vector characterized by extranuclear replication and the capacity to accommodate large transgenes. Due to these characteristics and its safety, VV has been used in recombinant form (rVV) in a variety of cancer immunotherapy protocols. 14-18 In particular, rVVs have successfully been used to induce T-cell responses specific for tumor-associated antigens (TAAs) in patients bearing cancers of different histological origins. 15,18 Moreover, rVVs have also been used as oncolytic reagents upon systemic or intratumoral administration. 19 An oncolytic rVV expressing CD40L was recently shown to inhibit the growth of CD40+ human tumor cell xenografts in immune-deficient animals. 20

In previous investigations, we have shown that a replication-incompetent rVV encoding multiple TAA-derived epitopes and the CD80 and CD86 co-stimulatory molecules efficiently induced specific CTL responses in patients with advanced melanoma. 18,21 More recently, we constructed an rVV encoding CD40L (rVV40L) and reported its ability, in replication-incompetent form, to exquisitely promote the expansion of human "central memory" CD8+ T cells. 17 These studies prompted us to explore the possibility of using this non-lytic reagent to elicit the multiplicity of antitumor effects associated with CD40 triggering.

In the present study, we have analyzed CD40 expression in tumor and stromal cells in a large (n > 800) range of human cancers and identified malignancies potentially representing relevant clinical targets. More importantly, we here show that a replication-incompetent rVV40L is able to induce direct and macrophage-mediated apoptosis of tumor cells of different histological origins. Furthermore, we demonstrate the capacity of this reagent to promote macrophage-mediated recruitment of CD8+ T cells and cross-presentation of HLA class I-restricted epitopes from a human intracellular TAA. Most interestingly, we here report that rVV40L-triggered human macrophages are able to inhibit the progression of human CD40(-) tumors in vivo.

CD40 receptor expression in clinical tumor specimens
CD40 receptor has been reported to be expressed in nearly all B-cell malignancies and up to 70% of solid tumors. 2 Triggering of the CD40 receptor expressed on transformed cell surfaces has been associated with pro-survival effects or with inhibition of proliferation and/or apoptosis induction. 1,5 We analyzed CD40 receptor expression in a tumor microarray (TMA) including 836 specimens from 27 different cancer types. Examples of CD40-specific staining are reported in Figure 1a. In initial studies, overall CD40 expression, including both tumor and stromal cells, was evaluated. Samples showing moderate/strong CD40-specific staining in ≥10% of all cells were considered positive. According to these criteria, >15% of the specimens of breast tumors, non-small-cell lung cancer (NSCLC), hepatocellular carcinoma (HCC), as well as esophageal cancer were considered CD40+ (Figure 1a-b).
Lower percentages of CD40+ specimens, ranging between 5% and 10%, were observed in ovarian, gastrointestinal tract, head and neck, and small-cell lung cancers, and in mesotheliomas. In contrast, CD40 expression was barely detected in colorectal, gallbladder, and prostate cancers (<5% CD40+ specimens) and was undetectable in pancreatic adenocarcinoma (Figure 1a-b). Importantly, no CD40 expression could be observed in healthy tissues of corresponding histological origin (data not shown).

Differential analysis of CD40 expression in tumor or stromal cells in clinical specimens was then addressed. Moderate/strong CD40-specific staining in malignant cells was only observed in limited numbers of cancer types, including breast (8/121), ovarian (7/67), and lung cancers (1/32 small-cell lung cancer (SCLC) and 6/121 NSCLC) (Figure 1c). Most interestingly, however, in NSCLC and breast cancers, CD40 was expressed in significantly higher percentages of stromal as compared to tumor cells (P < 0.0001 and P = 0.02, respectively). In contrast, no significantly different expression in either cell type could be observed in ovarian cancers (P = 0.7) or small-cell lung carcinoma (P = 0.8) (Figure 1c). Taken together, the TMA data underline that the CD40 receptor is expressed in different cell types present in the tumor microenvironment (TME), but predominantly in stromal cells, in sizable percentages of cancers of high epidemiological relevance.

rVV40L infection induces apoptosis/necrosis in CD40+ tumor cell lines
The TMA data prompted us to evaluate biological responses induced by replication-incompetent rVV40L infection in a panel of CD40+ and CD40(-) established tumor cell lines. In particular, the CD40+ Na8 melanoma, HCT116 colorectal cancer, MDA-231 breast cancer, and PLC and HepG2 (CD40 dim) HCC cell lines were tested. As controls, we also investigated cell lines of the same histological origins (A375, BT-474, LS180, and HuH-7) with undetectable or negligible expression of the CD40 receptor as assessed at mRNA and protein levels (Supplementary Figure 1a, b). Infection of tumor cells by replication-incompetent rVV40L, irrespective of their histological origin and CD40 receptor status, resulted in surface expression of CD40L in >30% of the tumor cells 24 hours after their incubation with the virus (Supplementary Figure 1c), and in a significant decrease of CD40 expression in CD40+ infected cells (Supplementary Figure 1d). Importantly, rVV40L infection of CD40+ Na8 melanoma (Figure 2a) and MDA-231 breast cancer cells (Figure 2b) resulted in significant increases in percentages of apoptotic/necrotic cells as compared to control cultures. Surprisingly, however, rVV40L infection of CD40+ HCT116 colorectal cancer cells and HepG2 HCC cells did not impact their survival (Figure 2c,d), whereas PLC HCC cells underwent apoptosis (Figure 2d). Remarkably, CD40 cross-linking by s40L/enhancer treatment, infection by VV WT, or their combination were totally ineffective (Figure 2a, b, d). As expected, rVV40L infection, s40L/enhancer treatment, or their combination failed to induce apoptosis/necrosis in CD40(-) cell lines. Altogether, VV-mediated CD40L expression sensitized CD40+ tumor cell populations to cell death, with the exception of the HCT116 and HepG2 tumor cell lines, which appeared resistant.
Impaired CD40 signaling pathway is associated with tumor cell resistance to rVV40L-induced apoptosis/necrosis
CD40 ligation results in receptor clustering, inducing, in turn, recruitment to its cytoplasmic domain of TNF-receptor-associated factors (TRAFs) mediating intracellular signaling. 1 However, only TRAF-1 is regulated at the transcriptional level in response to CD40 ligation and initiates signaling cascades leading to cell death. 3 CD40 ligation has recently been reported to result in upregulation of the NORE1A (RASSF5) protein, mediating the pro-apoptotic JNK pathway and caspase activation, and inducing apoptosis of target cells. 4 Thus, we investigated CD40 signaling in tumor cells using TRAF-1 and NORE1A expression as downstream markers. In apoptosis-responsive CD40+ Na8 and MDA-231 cells, a significant upregulation of TRAF-1 gene expression was observed upon rVV40L infection, whereas s40L/enhancer, alone or in combination with VV WT, was ineffective (Figure 3a-b). In sharp contrast, triggering of the CD40 receptor expressed on the cellular surface of HCT116 cells by rVV40L infection failed to induce upregulation of TRAF-1 gene expression (Figure 3c). Instead, both rVV40L and s40L treatment appeared to downregulate CD40 expression in HCT116 CRC cells. Regarding the hepatocellular carcinoma (HCC) cell lines, in CD40+ PLC cells a trend (P = 0.0671) toward TRAF-1 upregulation was also observed following rVV40L infection (Figure 3d). This effect was undetectable in CD40+ HepG2 cells. Similarly, expression of the NORE1A gene was significantly increased following rVV40L infection, but not s40L or WT + s40L treatment, only in the cell-death-sensitive CD40+ PLC cells and not in the "insensitive" CD40+ HCT116 cells (Figure 3e). As expected, no TRAF-1 or NORE1A upregulation was observed in CD40(-) cell lines. These data indicate that rVV40L is able to induce apoptosis upon infection of CD40+ cells. However, they also underline that CD40 expression per se is not predictive of sensitivity to rVV40L-mediated cytotoxic effects, and induction of functional adapter proteins is required to elicit apoptosis. 3,4

Phenotypic and functional profiles of M1/M2-like in vitro-generated macrophages
The TMA data document the expression of CD40 in tumor-infiltrating stromal cells. Notably, macrophages have been shown to represent a majority of CD40+ tumor-infiltrating cells, 2,7,8 and a role of tumor-associated macrophages in the control of tumor progression has repeatedly been reported. 22 A key feature of macrophages is their high plasticity, with M1/M2 macrophages representing extremes of a continuum of differentiation states characterized by specific functional attributes. 23,24 Therefore, we investigated the possibility that targeting CD40 might condition the in vitro differentiation of CD14+ monocytes toward M1/M2 functional profiles. We generated M1- and M2-like CD14+ monocyte-derived macrophages by culturing peripheral blood CD14+ monocytes in the presence of GM-CSF (M1) or M-CSF (M2). 25 Phenotypic characterization of the CD14+ monocyte-derived macrophages confirmed a significantly higher expression of CD16 and reduced levels of CD163 and CD204 on M1- as compared to M2-like macrophages 26,27 (Supplementary Figure 2a, b). Accordingly, analysis of cytokine gene expression patterns revealed significant IL-6 gene expression in M1 macrophages, whereas IL-10 gene expression was significantly higher in M2-like macrophages (Supplementary Figure 2c).
Moreover, we observed a significantly higher expression of the CD40 receptor in M1- as compared to M2-like CD14-derived macrophages (Figure 4a).

Modulation of M1/M2 functional profiles by rVV40L infection
We evaluated the effects of the treatments under investigation on CD14+ peripheral blood monocytes undergoing cytokine-driven polarization. Infection by rVV40L induced IL-12 production by macrophages undergoing M1-like polarization, whereas s40L/enhancer treatment, alone or in combination with WT infection, was completely ineffective. No IL-10 release could be observed under these conditions. Unexpectedly, however, rVV40L infection also induced minor but detectable IL-12p70 release by macrophages undergoing M2-like polarization, with a delayed kinetic, following 6 d of culture. In contrast, IL-10 production during M2-like polarization was induced by s40L/enhancer treatment alone or in combination with WT infection, but not by rVV40L (Figure 4b).

Co-culture with rVV40L-infected tumor cells promotes cytokine release and antitumor effects in polarized macrophages
We then explored the potential relevance of CD40 receptor ligation in the modulation of effector functions of fully in vitro-differentiated M1- or M2-like macrophages. To focus on effects on macrophages, excluding confounding effects possibly related to cancer cell apoptosis and damage-associated molecular pattern expression, we analyzed the cytokine production pattern of polarized macrophages upon co-culture with untreated, VV WT-infected, or rVV40L-infected CD40(-) HuH-7 HCC and LS180 CRC cells. TNF-α release was only detectable upon co-culture of M1-like differentiated macrophages in the presence of rVV40L-infected LS180 cells. CD40 ligation by infected tumor cells also induced IL-12p70 and IL-6 production in these cells. Interestingly, IL-12p70 and IL-6 were released, albeit to significantly lower extents, also by M2-like cells following exposure to rVV40L-infected LS180 cells. Notably, IL-10 release was observed upon co-culture of M2-like macrophages with LS180 cells, irrespective of viral infection (Figure 5a). Similar results were also observed upon M1- or M2-like macrophage stimulation with rVV40L-infected HuH-7 cells; however, these infected cells induced IL-10 release by M2-like macrophages (Supplementary Figure 3a). Taken together, these data indicate that CD40 is functional in both M1- and M2-like fully polarized macrophages and suggest that its triggering might partially steer M2 cells toward an M1-like functional profile.

Consistent with their TNF-α production, M1-like macrophages significantly inhibited the proliferation of rVV40L-infected CD40(-) LS180 (Figure 5b) and HuH-7 established tumor cell lines (Supplementary Figure 3b). In contrast, M2-like macrophages were unable to inhibit the proliferation of infected cells (data not shown). Although macrophages can release a variety of cytotoxic/cytostatic mediators, the pivotal role of TNF-α in the elicitation of these effects was confirmed by its neutralization, which resulted in significant inhibition of the antiproliferative effects of CD40-stimulated M1-like cells on rVV40L-infected tumor cells (Figure 5c and Supplementary Figure 3c).

We tested CXCL10 release in supernatants of co-cultures of M1- or M2-like macrophages with the LS180 CRC and HuH-7 HCC cell lines. As depicted in Figure 6a, rVV40L but not VV WT infection of tumor cells resulted in significant increases of CXCL10 production by both M1- and M2-like macrophages.
Migration assays were performed to formally demonstrate the ability of CD40-stimulated M1- and M2-like macrophages to promote CXCL10-mediated CD8+ T-cell recruitment. Supernatants from macrophages CD40L-stimulated as described above induced the migration of CD8+ T cells to significantly higher extents as compared to control supernatants. Importantly, antibody-mediated CXCL10 neutralization in these supernatants abrogated CD8+ T-cell migration (Figure 6b), thus confirming its critical relevance in lymphocyte recruitment.

The ability of macrophages to cross-present the MART-1 27-35 HLA-A0201-restricted epitope was evaluated by measuring IFN-γ release by a specific CD8+ T-cell clone (Supplementary Figure 4b). This clone efficiently responded to antibody-mediated CD3 triggering and to presentation of the target peptide by HLA-A0201+ M1- but not HLA-A0201+ M2-like APCs, despite similar HLA class I expression (Supplementary Figure 4c, left panel). HLA-A0201+ M1-like macrophages exposed to HLA-A0201(-) tumor cells infected by rVVMART-1 FG were able to cross-present the MART-1 27-35 HLA-A0201-restricted epitope to specific T cells, thereby inducing IFN-γ release. However, M1-like macrophage infection by rVV40L significantly enhanced activation of MART-1 27-35-reactive CD8+ T cells as compared to control culture conditions (Figure 6c). As expected, T cells were not activated in the presence of HLA-A0201+ M1-like cells previously exposed to tumor cells that were not infected by rVVMART-1 FG, thus confirming the integrity of the experimental design. On the other hand, HLA-A0201(-) tumor cells infected by the reagents under investigation were unable per se to induce IFN-γ production in macrophages alone or in CTLs alone (Supplementary Figure 4c, right panel).

CD40L-expressing recombinant vaccinia virus (rVV40L) promotes in vivo tumor regression through macrophage activation
We then addressed the antitumor efficacy of rVV40L in vivo. The TMA data indicated that in a large majority of the tumors tested, CD40 expression was absent or present in very limited numbers of tumor cells (see above). Therefore, in order to exclude effects mediated by triggering of CD40 on the model human tumor cells, while focusing on infiltrating immune cells, in these experiments NSG immunodeficient mice were subcutaneously inoculated with CD40(-) LS180 CRC and HuH-7 HCC cells. Upon in vivo growth, tumors were first injected with replication-incompetent VV WT or rVV40L, followed after 48 hours by human M1- or M2-like macrophages. Control and VV WT-infected tumors rapidly and similarly progressed in the presence or absence of M1- or M2-like cells (Figure 7a-f). In sharp contrast, consistent with the in vitro observations, rVV40L infection of HuH-7 HCC and LS180 CRC tumors resulted in a complete inhibition of tumor progression upon intratumoral injection of M1-like cells (Figure 7b,e). Similar effects were also observed upon injection of positive-control lipopolysaccharide (LPS)-stimulated M1-like cells (data not shown). Injection of M2-like cells also resulted in a modest, nonsignificant inhibition of LS180 CRC tumor progression (Figure 7f). Remarkably, histological analysis of excised tumors revealed that rVV40L-mediated activation of M1-like macrophages resulted in a significant disruption of tumor tissues, comparable to LPS-stimulated M1-like macrophages (Figure 7g-h). Consistently, immunofluorescence studies provided evidence of significant caspase 3 cleavage in these tumors (Figure 7g-h).
Discussion
Due to its peculiar expression pattern and multifunctional potential, CD40 represents an important target for the development of innovative cancer immunotherapy protocols. 2,11 Administration of agonistic anti-CD40 mAbs and of CD154 (CD40L)-expressing recombinant adenoviruses has been shown to result in tumor regression in experimental models. 6,13 These effects were initially largely attributed to CD40-mediated APC licensing and activation of T-cell-mediated antitumor immune responses. 36 However, data from clinical studies in human pancreatic cancers, typically lacking T-cell infiltration, suggest that the elicitation of antitumor effects by therapeutic agonistic anti-CD40 mAbs might rather rely on macrophage activation. 8,9,13

The effectiveness of the CD40-targeting strategies developed so far appears to be limited. In particular, CD40 expression by tumor-infiltrating immune cells does not effectively predict responsiveness to agonistic anti-CD40 mAbs. 8,9 Furthermore, epitope specificity and isotype might critically affect the activity of therapeutic reagents. 7,37 Regarding CD40L-recombinant adenoviruses, while safety has been reported, convincing evidence of clinical efficacy is still missing. 10-12 Potential limitations of the adenovirus-mediated anticancer biological approach might be associated with the requirement of specific cell entry pathways and the administration of high viral titers. 10-12

In order to develop a multipotent anticancer biological, we generated a replication-incompetent CD40L recombinant vaccinia virus (rVV40L) targeting multiple functional aspects of tumor-immune system interaction and tested its antitumor effects in vitro and in vivo. In particular, we addressed the ability of rVV40L to directly promote CD40+ tumor cell apoptosis, to modulate the functional profiles of differentially polarized macrophages, to recruit immune cell populations of proven anticancer relevance, and to inhibit tumor progression in vivo.

Evaluation of CD40 expression in a large cohort of clinical specimens (n = 836) derived from different tumor types (n = 27) indicates that, with some exceptions, CD40 is expressed on the cell surface of limited percentages of transformed cells in the tumors included in the TMA under investigation. Instead, CD40 expression was frequently detectable in significantly higher percentages of stromal cells. This observation underlines that the effectiveness of CD40-based therapeutic strategies should rely on infiltrating immune cells in addition to direct cytostatic/cytotoxic effects on CD40+ malignant cells. While the use of a TMA might represent a limitation of our study, underestimating tumor heterogeneity, the large number of cases investigated powerfully corroborates the clinical relevance of our data.

rVV40L infection effectively induced apoptosis/necrosis in several CD40+ melanoma, HCC, and breast cancer tumor cell lines. Instead, in line with previous reports, 3,4 mere CD40 triggering by cross-linked s40L failed to induce cytotoxic effects in tumor cells, although it was able to rapidly promote IL-10 release by M2-like macrophages (Figure 4b).
Most importantly, however, CD40 expression by tumor cells cannot be considered per se a sufficiently predictive signature of the effectiveness of "direct" strategies, including the use of rVV40L, since impaired intracellular signaling due to TRAF adapter molecule alterations and defective activation of effector proteins might prevent the induction of apoptosis/necrosis following receptor ligation on malignant cells. 3,4

In addition to malignant cells, the TME includes a variety of different non-transformed cell types with a major impact on clinical course. 28-30 For instance, macrophages have been suggested to be able to eliminate transformed cells and to promote adaptive tumor-specific immune responses in situ. 38 However, pro-tumorigenic functions, such as supporting tumor-associated angiogenesis, proliferation of malignant cells, and suppression of antitumor T-cell responses, have been convincingly demonstrated in experimental models. 39-45 Furthermore, clinical evidence underlines that in a majority of human tumors, high macrophage infiltration is frequently associated with severe prognosis. 39-45 Therefore, macrophage infiltration emerges as a critical target of antitumor therapeutic strategies, and the clinical potential of mAbs neutralizing CCL2 46 and the M-CSF receptor 47 is currently being evaluated in specific clinical trials. These strategies point to the prevention of recruitment and differentiation of macrophages. 45-47 Alternative approaches focus on the reconditioning of tumor-infiltrating macrophages through the modulation of their functional activity 8,9,13 by exogenous signals enhancing their antitumor activity. 27,48,49

Co-culture with rVV40L-infected tumor cells induced significant TNF-α release by M1- but not M2-like macrophages. This cytokine appeared to represent a critical mediator of the cytostatic/cytotoxic activity exerted by CD40-activated M1-like macrophages on malignant cells. Macrophages do orchestrate anticancer adaptive immune responses in situ. 49 Several studies have underlined their capacity to mediate, through chemokine production, the selective recruitment into malignant tissues of defined immune cell subsets, such as CD8+ T cells, with critical antitumor activity. 28-30,45,50 CXCL10 is the main mediator of CD8+ T lymphocyte recruitment, 32 and in our study, rVV40L-driven CD40 stimulation resulted in significant CXCL10 release by both M1- and M2-like polarized macrophages, leading to effective migration of CD8+ T cells.

Figure 7. rVV40L promotes the antitumor activity of M1 macrophages in vivo. 10^5 LS180 CRC or HuH-7 HCC cells were inoculated subcutaneously (s.c.) into the flanks of NSG mice. Established tumors were then injected with PBS or 10^7 pfu of replication-incompetent VV WT or rVV40L; 48 hours later, 5 × 10^5 M1- or M2-like macrophages were injected into the tumor masses. Tumor volumes of LS180 (a-c) and HuH-7 (d-f) are expressed as fold increases relative to tumor volumes measured at the time of macrophage inoculation. Data refer to two independent experiments with three mice in each condition. On d 8, mice were sacrificed and histology (H&E) and cleaved caspase 3 expression (green) were evaluated in LS180 (g) and HuH-7 (h) tumors. Data refer to representative stainings out of two tumors per condition displaying similar results (magnification 20×; scale bar 100 µm) (*P < 0.05; Mann-Whitney nonparametric test).
The elicitation of CD8+ T-cell immune responses is conditioned by the cytokines produced by APCs. 17,36,51 Production of IL-12 or IL-10 has been associated with the ability of tumor-associated macrophages to promote or, respectively, inhibit adaptive anticancer T-cell-mediated immune responses. 42,49,50 rVV40L infection during macrophage polarization, or co-culture with infected tumor cells, significantly modulated their cytokine release profile. In particular, it promoted robust production of IL-12 by M1-like macrophages and, more surprisingly, although to significantly lower extents, also by M2-like macrophages. Most importantly, we observed that infection of tumor cells by rVV40L significantly enhanced M1-like-mediated cross-presentation of TAAs to CD8+ T cells.

Importantly, oligomerized sCD40L was ineffective in a large majority of our in vitro experiments. Due to its effectiveness in B-cell activation, 52-55 the original CD40-mediated functional assay, oligomerized s40L, is usually considered the reference standard for CD40-triggering tests. However, the extent of CD40 polymerization has been shown to critically affect the elicitation of the multiplicity of functional effects induced by its triggering. 56-59 Our data suggest that rVV-mediated CD40L expression on infected cells is likely to result in a highly polymerized interaction with CD40, thereby allowing a full display of the effects associated with CD40 stimulation.

In vivo studies critically reinforce our in vitro results. Indeed, we observed a significant inhibition of the progression of CD40(-) tumors upon rVV40L infection in the presence of adoptively transferred macrophages. In particular, exogenous administration of human M1-like macrophages induced massive destruction of tumor tissue mediated by cleaved caspase 3 activation. Intriguingly, detectable, albeit not significant, effects were also elicited by M2-like macrophages. Considering the absence of overt toxic effects of macrophage administration in animals bearing rVV40L-infected tumors, and the replication-incompetent nature of our reagent, these data may suggest innovative adoptive cancer immunotherapy protocols. Tumor tissues could be treated with rVV, followed by intra-tumor injection of polarized, patient-derived macrophages, which could easily be obtained from peripheral blood cells. Under these conditions, T cells might also be attracted to tumor sites, and the generation of adaptive immune responses might ensue. Further studies, requiring human autologous sets of cellular reagents, including cancerous cells and T lymphocytes, are warranted to validate this working hypothesis.

Taken together, our data underline the major antitumor potential of rVV40L. Indeed, transduction by this replication-incompetent reagent might lead to direct inhibitory effects on CD40(+) malignant cells. In addition, inhibition of tumor progression could also result from the marked ability of rVV40L to promote M1 activation and, possibly, from a partial repolarization of M2 macrophages. Most importantly, CD40-activated macrophages are able to induce direct TNF-α-mediated antitumor cytotoxic activity, lymphocyte recruitment, and cross-presentation of cellular antigens to CD8+ T cells.

Immunohistochemistry
The multi-tumor tissue microarray used in this work has been extensively described in previous studies. 60,61
Briefly, tissue cylinders with a diameter of 0.6 mm were punched from morphologically representative areas of >800 blocks from individual tumors of 27 different histological origins and brought into recipient paraffin blocks (30 × 25 mm) by using a semi-automated tissue arrayer. To partially overcome limitations inherent in tumor heterogeneity, punches were derived from the center of each donor tumor tissue block so that each TMA spot consisted of at least 50% tumor cells. Corresponding healthy tissues were also included in the TMA. TMA slides were pretreated with CC1 reagent (Ventana) for 16 minutes at room temperature (RT). Thereafter, they were incubated for 32 minutes at RT with a CD40-specific rabbit polyclonal primary antibody (Abcam, ab58612) at 1:50 dilution. Specific binding was revealed by using the optiView kit with DAB chromogen (Ventana) according to the producer's instructions. Percentages of total, malignant, and tumor-infiltrating CD40+ stromal cells in each punch were evaluated by experienced pathologists blinded to any prior information. The intensity of staining was also analyzed, and samples were classified as negative (0), weakly positive (1), moderately positive (2), or strongly positive (3). Cells were scored positive if they displayed at least moderate-intensity staining.

CD40 ligand-expressing recombinant vaccinia virus (rVV40L)
The CD40 ligand-expressing recombinant vaccinia virus (rVV40L) was generated as previously described. 14 In order to exclude the cytopathic/lytic effects of a replicating vector and focus on the effects of the expressed transgene, viral replication was inactivated by DNA cross-linking using psoralen (1 µg/ml) and long-wave UV (365 nm) irradiation. 14 A similarly inactivated wild-type vaccinia virus (VV WT) was used as a control. Soluble FLAG-tagged CD40 ligand (s40L) and enhancer (Enzo Life Sciences), ensuring receptor multimerization, 62,63 were also used as additional controls.

Established tumor cell lines
Established melanoma (A375), HCC (PLC, HepG2, and HuH-7), colorectal (HCT116, LS180, and HT29), and breast (MDA-231 and BT-474) cancer cell lines were purchased from the European Collection of Cell Cultures (ECACC). The Na-8 melanoma cell line was a gift from Dr Jotereau (Nantes, France). Breast and colorectal cancer cell lines were cultured in RPMI-1640 medium supplemented with glutamine, non-essential amino acids, sodium pyruvate, HEPES buffer, and kanamycin sulfate (Gibco-Life Technologies), thereafter referred to as complete medium (CM), with 10% Fetal Bovine Serum (FBS). Melanoma cells and the HepG2 and HuH-7 HCC cells were cultured in D-MEM CM, 10% FBS. PLC cells were cultured in ALPHA-MEM CM supplemented with 10% FBS. When specific established tumor cell lines were required for the indicated experiments, early-passage cells were thawed and maintained in culture for less than 2 months.

In vitro generation of CD14+-derived macrophages
CD14+ monocytes were isolated from peripheral blood mononuclear cells of healthy donors to >95% purity by using anti-CD14 magnetic beads (Miltenyi Biotech). Magnetically isolated cells were cultured in RPMI-CM 10% FBS in the presence of 12.5 ng/ml GM-CSF (R&D Systems) to generate polarized M1-like macrophages, or M-CSF (R&D Systems) for M2-like macrophages. On d 6, cells were collected and stained with anti-CD16, anti-CD163, anti-CD204, and anti-CD40 receptor-specific, fluorochrome-conjugated mAbs, and specific fluorescence was evaluated by flow cytometry (see below).
CD40 triggering in vitro
Tumor cell lines were infected with rVV40L or VV WT at a multiplicity of infection (MOI) of 10 for 1 hour at 37°C in 500 µl of their specific culture medium. Cells were then washed and cultured in their specific media (see above). When indicated, cultures were supplemented with s40L recombinant protein and oligomerizing enhancer (0.5 and 1 µg/ml, respectively; Enzo Life Sciences, see above), alone or following VV WT infection (WT + s40L). Cells from the different cultures were collected after 4 d, and their viability was assessed with an Annexin V apoptosis detection kit (Becton Dickinson). CD14+ monocytes from peripheral blood of healthy donors were infected in 500 µl RPMI-1640 CM for 1 hour at 37°C with rVV40L or with VV WT at an MOI of 5. In addition, s40L and enhancer were also used alone or following VV WT infection (WT + s40L), as described above. Cells were then cultured in RPMI-1640 CM 10% FBS in the presence of either GM-CSF or M-CSF (see above). At the indicated time points, supernatants from the different culture conditions were collected, and cytokine release was assessed by ELISA.

Co-culture of M1- and M2-like macrophages with established tumor cell lines
LS180 and HuH-7 established tumor cell lines were left untreated or infected with rVV40L or VV WT at an MOI of 10. After 24 hours, 3,000 VV WT- or rVV40L-infected or untreated tumor cells were co-cultured with M1- or M2-like macrophages at different effector:target ratios in 96-well plates. On d 4, cytokine and chemokine release in supernatants from the different culture conditions was evaluated by ELISA. The cytostatic activity of macrophages on tumor cells was evaluated by adding 1 µCi of 3H-thymidine for the last 18 hours of culture. Cells were then harvested, and tracer incorporation was measured by scintillation counting. TNF-α neutralization was achieved by adding anti-human TNF-α mAb (BioLegend) to the cultures at a final concentration of 10 µg/ml.

Cross-presentation assays
MelanA/MART-1 27-35-specific, HLA-A0201-restricted CD8+ T-cell clones were generated as described previously. 64 Overall, 3,000 cells of the CD40(-), HLA-A0201(-) HT29 cell line were left untreated or infected with VV WT or rVV40L, and co-infected with a recombinant VV encoding the MART-1 full gene (rVVMART-1 FG), inducing the production of the entire protein in infected cells, at an MOI of 10. After 24 hours, the differentially infected HT29 cells were cultured alone or in the presence of HLA-A0201+ M1- or M2-like polarized macrophages at a 1:1 ratio. Two days later, 30,000 cells of a MelanA/MART-1 27-35-specific, HLA-A0201-restricted CD8+ T-cell clone were added to the different cultures. After 48 hours, the presence of IFN-γ in the supernatants was assessed by ELISA.

Migration assays
Migration of CD8+ T cells toward supernatants from co-cultures of polarized macrophages and tumor cells was assessed in 96-well trans-well plates (5 µm pore size; Corning Costar) after 60 min of culture at 37°C. When indicated, anti-CXCL10 neutralizing mAb (10 µg/ml; R&D Systems) was also added to culture supernatants. Cell migration was quantified by flow cytometry.

In vivo experiments
In vivo experiments were approved by the Basel Cantonal Veterinary Office (License Number 2266). NSG mice from Charles River Laboratories were bred and maintained under specific pathogen-free conditions in the animal facility of the Department of Biomedicine of the University of Basel. Eight- to 10-week-old mice were injected subcutaneously (s.c.)
in the flank with tumor cells (10^5 cells/mouse) resuspended in a 1:1 growth-factor-reduced Matrigel (BD Biosciences)/phosphate-buffered saline (PBS) solution. Tumor formation was monitored twice weekly by palpation and caliper measurements. Once tumor masses reached an approximate diameter of 5 mm (1 × 10^6 tumor cells), 20 µl of virus solution (10^7 eq. pfu of VV WT or rVV40L) or PBS was injected into the tumor tissues. After 48 hours, PBS or 5 × 10^5 M1- or M2-like macrophages were also injected intratumorally. Tumor size changes were followed every day by caliper measurements. One week later, all mice were sacrificed and the tumors were harvested. Tumor volumes (in mm^3) were determined according to the formula (length × width^2)/2. 65 Samples from all tissues were harvested for subsequent histological examination.

Gene expression analysis
Total cellular RNA was extracted by using the RNeasy Mini Kit (Qiagen) and reverse transcribed according to the manufacturer's instructions (Invitrogen-Life Technologies). Human TRAF-1 and NORE1A (RASSF5) gene expression was evaluated by quantitative RT-PCR (RT-qPCR) using specific primer sets (TaqMan® Assays, Applied Biosystems-Life Technologies) and normalized to human glyceraldehyde-3-phosphate dehydrogenase (GAPDH) housekeeping gene expression.

Histological evaluation
Cryo-sections (10 µm) embedded in Optimal Cutting Temperature (OCT) compound (Leica) were cut from each tumor and fixed in formalin. Sections were either stained for hematoxylin and eosin using a Continuous Linear Stainer COT 20 (Medite) or incubated with a rabbit anti-cleaved caspase 3 reagent (Cell Signalling), followed by a secondary species-specific Alexa Fluor 488-conjugated antibody (Invitrogen) and DAPI for nuclear counterstaining. Sections were examined by using a Nikon TI fluorescence microscope (Nikon Switzerland), and images were captured at 20× magnification using a digital camera and NIS-Elements software.

Statistical analysis
The statistical analysis software SPSS (Version 14.0, SPSS Inc.) was used for statistical analyses. Skewness, kurtosis, distribution parameters, and respective standard errors were used to test the normality of the concerned populations. The Mann-Whitney test was used for the analysis of non-parametric data with a non-Gaussian distribution of the test population. All reported P-values were considered statistically significant at P ≤ 0.05.

Disclosure of Potential Conflicts of Interest
No conflict of interest to be disclosed.

Funding
This work was funded by grants from the Swiss National Science Foundation (no. PP00P3-133699 and PP00P3-159262 to G.I.) and a grant from "Oncosuisse" to Paul Zajac.
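To make the caliper arithmetic above concrete, here is a minimal sketch of the volume formula and the fold-increase normalization used for Figure 7. The caliper readings in the example are hypothetical, for illustration only.

def tumor_volume_mm3(length_mm: float, width_mm: float) -> float:
    # Ellipsoid approximation used above: V = (length x width^2) / 2.
    return (length_mm * width_mm ** 2) / 2

# Fold increase relative to the volume at the time of macrophage inoculation
# (hypothetical measurements).
v_start = tumor_volume_mm3(5.0, 5.0)   # ~62.5 mm^3, ~5 mm diameter at treatment
v_day8 = tumor_volume_mm3(9.0, 7.5)    # follow-up measurement on day 8
print(round(v_day8 / v_start, 2))      # fold change as plotted in Figure 7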
2019-03-28T13:33:07.470Z
2019-02-14T00:00:00.000
{ "year": 2019, "sha1": "c77706ae835aaf2a8d6d6b5b03b38d95d3d1b332", "oa_license": "CCBYNCND", "oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/2162402X.2019.1568162?needAccess=true", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "c77706ae835aaf2a8d6d6b5b03b38d95d3d1b332", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Chemistry", "Medicine" ] }
245837242
pes2o/s2orc
v3-fos-license
Biexciton-like quartet condensates in an electron-hole liquid

We theoretically study the ground-state properties and the condensations of exciton-like Cooper pairs and biexciton-like Cooper quartets in an electron-hole system. Applying the variational approach based on the quartet Bardeen-Cooper-Schrieffer (BCS) model to the four-component fermionic system consisting of spin-1/2 electrons and spin-1/2 holes, we show how Cooper pair and quartet correlations appear in the equation of state at the thermodynamic limit. The biexciton-like four-body correlations survive even in the high-density regime as a many-body BCS-like state of Cooper quartets. Our results are useful for further understanding of exotic matter in the interdisciplinary context of quantum many-body physics with multiple degrees of freedom.

I. INTRODUCTION

Quantum many-body systems exhibit nontrivial states which are absent in classical ones. The interplay between quantum degeneracy and interactions leads to exotic condensation phenomena such as superfluidity and superconductivity [1]. The common states of matter surrounding us, such as liquid droplets and crystalline solids, are also deeply related to the interactions and quantum statistics of the constituent particles from the microscopic viewpoint. While it is known that superconductors and fermionic superfluids are triggered by the formation of two-body loosely bound states called Cooper pairs, as a result of the Fermi-surface instability in the presence of two-body attractions [2], it is an interesting problem to explore condensation phenomena accompanying more than two-body bound states. While spin-1/2 fermions with s-wave interaction tend to form two-body Cooper pairs because of their spin degree of freedom and the Pauli exclusion principle, multibody counterparts such as Cooper triples [3][4][5][6] and quartets [7][8][9][10][11][12][13][14][15] can be formed in the presence of larger degrees of freedom for fermions (e.g., isospin, color, and atomic hyperfine states). To study the nontrivial superfluid state associated with the Cooper instability leading to multibody bound states, semiconductor systems consisting of spin-1/2 electrons and holes are promising candidates, since these can be regarded as four-component fermionic systems with strong interactions. In such systems, two- and four-body bound states called excitons and biexcitons are formed due to the attractive Coulomb electron-hole interaction [16]. Moreover, the formation of polyexcitons consisting of more than two excitonic bound states has been reported [17]. While the system is dominated by these bound states, e.g., excitons and biexcitons (or an electron-hole plasma at finite temperature), in the low-carrier-density regime, a quantum droplet appears as a many-body bound state in the higher-density regime (before the semiconductor-metal transition) at low temperature [18][19][20][21]. The Bardeen-Cooper-Schrieffer (BCS)-to-Bose-Einstein condensation (BEC) crossover associated with excitonic pairs with increasing carrier density has been discussed extensively in previous theoretical works [22][23][24][25][26][27][28][29][30]. In highly excited CuCl, the condensation of biexcitons was observed [31][32][33]. In the past years, the formation of biexcitons was observed also in transition metal dichalcogenide crystals [34][35][36][37].
Recently, it was reported that biexcitons play a key role in the formation of quantum droplets in photoexcited semiconductors [38]. Moreover, biexciton condensation has been found in an electron-hole Hubbard model at positive chemical potentials via a sign-problem-free quantum Monte Carlo simulation [39]. Also, two-dimensional semiconductor systems in the biexciton-dominated regime have been investigated at finite temperature [40]. These studies suggest that it is important to clarify the physical properties of the exciton and biexciton condensates for understanding many-body states at sufficiently low temperature. Quartet condensation phenomena associated with four-body bound states have also attracted much attention in nuclear systems [41]. Nuclear equations of state and their droplet properties are associated with strongly attractive nuclear forces, leading to the formation of bound states such as deuterons, alpha particles, and heavier nuclei in the low-density region [42], and with the Fermi degenerate pressure of nucleons and multibody forces in the high-density region [43]. Since the alpha particle, consisting of two neutrons and two protons, is a stable cluster state with a large binding energy, the so-called alpha-particle condensation has been extensively studied in the context of Cooper quartets [7][8][9][10][11][12][13][14][15]. Note that fluctuation-driven quartet formation has also been investigated in unconventional superconductors [44,45]. Moreover, the quantum droplet state has been realized in ultracold Bose-Bose mixtures [46][47][48][49]. The stabilization of the dilute quantum droplet is achieved by the competition between the mean-field attraction and the repulsive quantum fluctuations [50]. While the Lee-Huang-Yang energy density functional can explain such saturation properties, it exhibits a complex value in the region where the mean-field collapse occurs; it has been reported that this complexity of the energy density functional can be avoided by considering bosonic pairing [51,52]. This fact implies that a biexciton, which can be regarded as a two-exciton pairing state, plays a crucial role in the formation of self-bound quantum droplets in electron-hole systems. Moreover, similar self-bound quantum droplets have been realized in dipolar Bose gases [53], which are analogous to an exciton gas with an electric dipole moment. In this paper, we theoretically investigate thermodynamic properties of an electron-hole system at zero temperature within the quartet BCS framework, which uses the extended BCS variational wave function involving Cooper pairing and quarteting in momentum space at the thermodynamic limit [15]. Special attention is paid to the biexciton-like condensates, that is, the Cooper quartets consisting of two electrons and two holes formed as a result of the Cooper instability of the Fermi seas. (Note that we call it "biexciton-like" since a Cooper quartet considered here is a loosely bound quantum state, unlike usual point-like bound states.) Recently, such a framework has been employed to study pair and quartet correlations in nuclear systems [10,[13][14][15]. Effects of the Fermi degenerate pressure are automatically included in this framework, as in the usual BCS theory. The interplay among the Fermi degenerate pressure of electrons and holes and the formation of exciton-like Cooper pairs and biexciton-like Cooper quartets is examined microscopically. This paper is organized as follows. In Sec.
II, we show a theoretical model for an electron-hole system and the detailed formalism of the quartet BCS theory. In Sec. III, the numerical results and the corresponding discussions of the ground-state properties are presented. Finally, we summarize this paper with future perspectives in Sec. IV.

A. Hamiltonian

In this paper, we consider a three-dimensional electron-hole system with electron-electron, hole-hole, and electron-hole interactions. The corresponding Hamiltonian consists of a single-particle part and low-energy interaction terms. In the single-particle part, the creation operators $e^\dagger$ and $h^\dagger$ create an electron and a hole, respectively; $\mathbf{p}$ is the single-particle momentum, $\mathbf{q} = \frac{1}{2}(\mathbf{p}_1 - \mathbf{p}_2)$ is the relative momentum, $s$ is the single-particle spin ($s_z$ is its third component), and $\mathbf{P} = \mathbf{p}_1 + \mathbf{p}_2$ is the center-of-mass momentum. In addition, the single-particle energy reads $\varepsilon_{i,\mathbf{p}} = p^2/(2M_i) - \mu_i$ ($i = e, h$), where $\mu_i$ is the chemical potential and $M_i$ is the effective mass. Note that the particle-hole transformation is taken for the hole band such that a hole has the positive-curvature energy dispersion $\varepsilon_{h,\mathbf{p}}$. The low-energy interactions are written in terms of the two-electron and two-hole pair operators and the exciton creation operators. Here, $S_z$ is the $z$ component of the total spin $S$ of an exciton. The corresponding annihilation operators are their conjugates. Also, $U_{e\text{-}e}$, $U_{h\text{-}h}$, and $U_{e\text{-}h}$ are the interaction strengths for the electron-electron, hole-hole, and electron-hole channels. In general, the most relevant interaction is $U_{e\text{-}h}$, which is an attractive Coulomb force and induces the formation of excitons. $U_{e\text{-}e}$ and $U_{h\text{-}h}$ can be attractive when a phonon-mediated interaction is present, as in conventional BCS superconductors. At high density, the Coulomb repulsion and the screening effect may also become important. In this paper, we assume attractive $U_{e\text{-}e}$ and $U_{h\text{-}h}$ for simplicity, but eventually these interaction effects are ignored, since the attractive $U_{e\text{-}h}$ is expected to be stronger than $U_{e\text{-}e}$ and $U_{h\text{-}h}$ [29]. We briefly note that the present electron-hole system is like symmetric nuclear matter, where the attractive electron-hole interaction can be regarded as a counterpart of the isospin-singlet neutron-proton interaction, which induces a two-body bound state (i.e., the deuteron). Indeed, both systems are composed of four-component fermions, and similar multibody bound states appear in a certain density regime. A simplified model enables us to discuss similarities and differences between the two systems from an interdisciplinary viewpoint of many-body physics, although their energy scales are largely different from each other.

B. Quartet BCS theory

With the consideration of the coherent state for the four-body sector, the trial wave function is adopted following Refs. [10,15], where the biexciton creation operator at zero center-of-mass momentum is built from the exciton operators. The contribution of excited excitons with finite center-of-mass momenta is neglected, since the low-energy cluster states can dominate the system at sufficiently low temperatures. We note that a similar approximation has been employed in studies of nuclear systems [10]. The normalization condition involves the norms of the variational parameters, defined as $|v_q|^2 = \sum_{S,S_z} |v_{q,S,S_z}|^2$ and $|x_q|^2 = \sum_{i} |x_{q,i}|^2$ for convenience.
We note that while more sophisticated variational wave functions, constructed with the use of the Hubbard-Stratonovich transformation, have been proposed in studies of finite nuclei [13,14], the present wave function has an advantage in the practical numerical calculation of physical quantities at the thermodynamic limit, because it is a natural extension of the BCS wave function. In the variational equations, the BCS-type energy gaps can be expressed in terms of the variational parameters; the detailed derivations of the variational equations are shown in Appendix A. In addition, we note that the well-known BCS results can be obtained by taking $w_q = 0$ [15]. To obtain the ground-state energy $E = \langle\Psi|H + \mu_e n_e + \mu_h n_h|\Psi\rangle$, where $n_e$ and $n_h$ are the carrier density operators of electrons and holes, respectively, we need to calculate the expectation values of $n_e$ and $n_h$, i.e., $\rho_{e,h} = \langle\Psi|n_{e,h}|\Psi\rangle$. In the numerical calculations, we solve Eqs. (10a), (10b), (10c), (9), (8a), (8b), and (8c) self-consistently with respect to $\Delta^{e\text{-}e}_q$, $\Delta^{h\text{-}h}_q$, $\Delta^{e\text{-}h}_q$, $\Omega_q$, $v_q$, $x_q$, and $w_q$. Then, $u_q$ is determined by the normalization condition in Eq. (7). Substituting these variational parameters into Eqs. (A1), (12a), and (12b), we can numerically evaluate the ground-state energy $E = \langle H\rangle + \mu_e\rho_e + \mu_h\rho_h$ and the fermion number density $\rho = \rho_e + \rho_h$. Practically, in this paper we consider only the short-range attractive electron-hole interaction described by a contact-type coupling, $U_{e\text{-}h}(\mathbf{q}-\mathbf{q}') = -U_C$ [29]. A similar contact-type coupling has also been employed in the study of monolayer MoSe₂ [54]. Also, we consider equal effective masses, $M_e = M_h \equiv M$. Although this is rather simplified compared with realistic cases, such a model is sufficient for our purpose, since we are interested in qualitative features of BCS-like pair and quartet correlations in an electron-hole system. Indeed, the long-range Coulomb attraction needs to be considered for the description of the droplet state [18][19][20][21]. Nevertheless, our approach is useful for understanding the effect of Cooper pair and quartet correlations on the ground-state energy.

III. RESULTS AND DISCUSSION

To figure out the differences between the results with and without the biexciton-like Cooper quartet correlations, we take the electron mass $M_e$ and the hole mass $M_h$ to be equal to 0.511 MeV, the four-body (biexciton) energy $B_{XX}$ to be 500 meV (which characterizes the electron-hole interaction strength $U_C$), and the momentum cutoff $\Lambda = 100k_F$, where $k_F = (3\pi^2\rho/2)^{1/3}$ is the Fermi momentum; a small numerical sketch of these density-dependent scales is given below. It should be noted that $B_{XX} = 500$ meV is close to the value of 434 meV employed in Ref. [40].

A. Ground-state properties of biexciton-like quartet condensates in an electron-hole system

In the low-density limit, the ground-state energy density $E$ is proportional to the cluster energy [55]. Since the fermion chemical potential $\mu_e = \mu_h \equiv \mu$ for the balanced system ($\rho_e = \rho_h$ and $M_e = M_h$) is given by $\mu = \partial E/\partial\rho$ based on the thermodynamic relation, one can obtain $-B_{XX} = 4\mu$ ($\rho \to 0$). Figure 1 shows the total fermion number density $\rho$ as a function of $\mu$ for the electron-hole interaction strength $U_C$ that corresponds to $B_{XX} = 500$ meV. Note that $B_{XX}$ is associated with $U_C$ through the variational equations (8a) and (8c) and the electron-hole pairing gap given by Eq. (10c), so that the value of $B_{XX}$ varies if $U_C$ changes, and vice versa.
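The following is a minimal numerical sketch, not the authors' code, with illustrative density values: it evaluates the Fermi momentum $k_F = (3\pi^2\rho/2)^{1/3}$, the cutoff $\Lambda = 100k_F$, and the dilute-limit threshold chemical potential $\mu = -B_{XX}/4$ that follows from $-B_{XX} = 4\mu$.

```python
# Minimal numerical sketch, not the authors' code: the density-dependent
# scales used above, k_F = (3 pi^2 rho / 2)^(1/3) and Lambda = 100 k_F,
# plus the dilute-limit threshold chemical potential mu = -B_XX/4 that
# follows from -B_XX = 4 mu. Density values are illustrative.
import numpy as np

def fermi_momentum(rho_nm3: float) -> float:
    """Fermi momentum k_F (nm^-1) for a total fermion density rho (nm^-3)."""
    return (3.0 * np.pi**2 * rho_nm3 / 2.0) ** (1.0 / 3.0)

B_XX = 500.0                          # meV, four-body (biexciton) energy
print(f"dilute-limit threshold: mu = {-B_XX / 4.0:.0f} meV")
for rho in (1e-6, 0.5):               # dilute limit and the high-density case
    k_F = fermi_momentum(rho)
    print(f"rho = {rho:g} nm^-3: k_F = {k_F:.4f} nm^-1, "
          f"Lambda = 100*k_F = {100.0 * k_F:.2f} nm^-1")
```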
In Fig. 1, it is clearly seen that $\rho$ starts to be finite at $\mu = -B_{XX}/4$. For the two-body sector, because the two-body (exciton) energy $B_X$ cannot be determined from $E$, we evaluate $B_X$ by solving the two-body problem with the same $U_C$. The relation between $U_C$ and $B_X$ is summarized in Appendix B. We numerically confirmed that $B_{XX}$ is larger than $2B_X$ in the region explored in the present model. While it is difficult to prove this relation between $B_{XX}$ and $B_X$ for arbitrary coupling strengths, our trial wave function can describe both pair and quartet states in a common variational parameter space. Therefore, based on the variational principle, this indicates that the biexciton state is stable against breakup into two exciton states in the dilute limit. It is known that for a contact-type interaction a cutoff dependence appears in the numerical calculations, and a density-dependent cutoff is adopted here. However, we calculate $B_X$ according to Appendix B in the low-density limit ($\rho \sim 10^{-6}$ nm$^{-3}$) and obtain $B_X \simeq 225$ meV. Consequently, we regard the two-body energy as $B_X = 225$ meV in vacuum. This is close to the value of the exciton energy, 193 meV, adopted in Ref. [40]. Note that if we measure the biexciton binding energy $E_{XX}$ from the threshold for two exciton states, given by $2E_X = -2B_X$, we obtain $E_{XX} = -B_{XX} - 2E_X = -50$ meV, which is also close to the value of $-43$ meV in Ref. [40]. In addition, although the calculations performed in this paper are basically for the three-dimensional system, the present theoretical framework can be further applied to two-dimensional systems by taking $D = 2$ in the momentum summation, $\sum_{\mathbf{q}} \to \int d^D q/(2\pi)^D$. For instance, our framework is closely related to the model for CdSe nanoplatelets in Ref. [40], with a different dimension. Another relevant study [56] was performed on two-dimensional van der Waals materials with a long-range (momentum-dependent) interaction, where coupled MoSe₂-WSe₂ monolayers were taken as the objects of research. Nevertheless, the biexciton-like quartet correlation was not taken into account in those works. Therefore, the present theoretical framework can be applied to more realistic systems.

Figure 2 shows the ground-state energy density $E = \langle H\rangle + \mu\rho$ as a function of $\rho$. To see the role of quartet correlations, the energy density $E$ without biexciton correlations is also plotted. Because the bound-state formation reduces the total energy, the equation of state becomes softer (i.e., the ground-state energy becomes smaller) compared with the result without biexciton correlations. As shown in Eq. (13), $E$ decreases with increasing $\rho$ in the low-density regime, indicating that the system obtains the energy gain associated with the bound-state formation (i.e., excitons and biexcitons). In turn, the absolute value of the quartet correction, indicated by the difference between the results with and without biexciton correlations, becomes larger with increasing $\rho$. This result indicates that the Cooper instability associated with the Fermi surface and the attractive electron-hole interaction assists the formation of Cooper quartets in the high-density regime. In this sense, the in-medium biexciton correlations in such a dense system are not the usual four-body bound states in vacuum, but rather BCS-like many-body states of biexciton-like Cooper quartets, which are also different from polyexcitons. In the quartet BCS framework, the low-energy excitation is dominated by the quartet correlations.
In the high-density regime, such a low-energy sector relatively increases with the increase of the Fermi energy. However, the quartet correlations themselves are negligible compared with the Fermi energy in such a regime. Although we do not show it explicitly here, the increase of $E$ in the high-density regime can be understood from the behavior of the energy density $E_{\rm FG}$ of an ideal Fermi gas, which is a monotonically increasing function of $\rho$. We note that a triexciton, a six-body bound state consisting of three electrons and three holes, is not considered in this paper, because the Pauli-blocking effect tends to suppress bound states involving more than two fermions with the same spin for s-wave short-range interactions. While the disappearance of quartet correlations with increasing density was reported in nuclear matter [7,9], it is deeply related to the form of the two-body interaction, such as the effective-range corrections and higher partial waves, as well as to three- and four-body interactions. Since we employ a contact-type two-body coupling with a large momentum cutoff $\Lambda = 100k_F$, pair and quartet correlations are not suppressed in the high-density regime explored in this study. This result is also associated with the fact that the high-density regime in our model with a contact coupling does not correspond to the usual weak-coupling case, as in conventional BCS superconductors, but rather to the unitary (or crossover) regime from the viewpoint of the BCS-BEC crossover, because $U_C$ involves a two-body bound state (i.e., a positive scattering length) in free space [57]. On the other hand, at finite temperature, the phase transition from Cooper-quartet condensates to an electron-hole plasma may occur even in the present model. More detailed investigations with realistic interactions in the high-density regime and of the semiconductor-metal transition are out of the scope of this paper and will be addressed elsewhere. Moreover, we do not find a minimum of $E/\rho$ (i.e., the energy per fermion) with respect to $\rho$, implying the absence of the droplet phase due to the artifact of the contact-type interaction in the present model. To overcome this, we need to consider a finite-range attractive interaction giving a finite Hartree-Fock contribution, which is approximately proportional to $-\rho^2$ [58]. Nevertheless, the present results, showing how the quartet correlations affect the energy density, could be useful for future detailed investigations of the droplet phase with more realistic interactions.

B. Energy dispersion and excitation gap

In this subsection, we discuss how the quartet correlations affect the excitation energy of the system. First, in the absence of quartet correlations ($w_q = 0$), one recovers the usual BCS dispersion $E_q = \sqrt{(q^2/(2M) - \mu)^2 + \Delta_q^2}$ with $\Delta_q^2 = \sum_{S,S_z} |\Delta^{e\text{-}h}_q|^2$. One can obtain the excitation gap $E_{\rm gap} = \inf_q [2E_q] \equiv 2|\Delta_{q=q_{\rm min}}|$, where $q_{\rm min}$ is the momentum at the bottom of $E_q$. Note that $|q_{\rm min}| = \sqrt{2M\mu}$ in the present case with the contact coupling. In the presence of quartet correlations (i.e., $w_q \neq 0$), one obtains, in analogy with the usual BCS dispersion (16), the quartet BCS dispersion $E^{w}_q$ [15]. Solving Eq. (18) combined with Eq. (17), one can evaluate the excitation gap $E_{\rm gap} = \inf_q [2E^{w}_q]$ in the quartet BCS framework. The energy dispersions with and without the biexciton correlations (i.e., $E^{w}_q$ and $E_q$) as a function of the relative momentum $q = |\mathbf{q}|$ are shown in Fig. 3, where we take $B_{XX} = 500$ meV; a minimal numerical sketch of the pair-only dispersion is given below.
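The sketch below is not the authors' solver: it scans the pair-only dispersion on a grid of single-pair kinetic energies, with $\mu = 100$ meV taken from the text and an illustrative constant gap $\Delta$, and confirms that the minimum sits at $q^2/(2M) = \mu$ with $E_{\rm gap} = 2|\Delta|$.

```python
# Minimal sketch, not the authors' code: pair-only BCS dispersion
# E_q = sqrt(xi_q^2 + Delta^2), xi_q = q^2/(2M) - mu, scanned over the
# kinetic energy eps = q^2/(2M). mu is taken from the text (100 meV at
# rho = 0.5 nm^-3); the constant Delta value is illustrative.
import numpy as np

mu, Delta = 100.0, 40.0                 # meV
eps = np.linspace(0.0, 500.0, 5001)     # q^2/(2M) in meV
E_q = np.sqrt((eps - mu) ** 2 + Delta ** 2)

i_min = np.argmin(E_q)                  # bottom of the dispersion
print(f"minimum at eps = {eps[i_min]:.1f} meV (i.e. |q_min| = sqrt(2*M*mu)),"
      f" E_gap = {2.0 * E_q[i_min]:.1f} meV")
```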
Because we are interested in the quartet BCS regime, where $\mu$ becomes positive and the Fermi-surface effect is important, the high-density case with $\rho = 0.5$ nm$^{-3}$ is examined here. As shown in Fig. 1, $\mu$ reaches 100 meV at $\rho = 0.5$ nm$^{-3}$. With the consideration of biexciton correlations, the excitation gap $E_{\rm gap}$, namely the minimum of the energy dispersion, becomes larger by around 5.5%, and the relative momentum which gives the minimum of the energy dispersion also becomes larger by around 55.6%. While the quartet corrections are significant in the low-momentum regime, $E^{w}_q$ becomes closer to $E_q$ in the high-momentum regime. Thus, one can see that $E^{w}_q$ increases at low $q$ compared with $E_q$ because of the quartet corrections, as found in Eq. (18). This result indicates that excitons with lower relative momenta tend to form biexciton-like Cooper quartets, and that such quartets are energetically broken into two exciton-like Cooper pairs at larger $q$. Finally, in Fig. 4, we plot $E_{\rm gap}$ with quartet correlations, estimated from the minimum of $E^{w}_q$ shown in Fig. 3. For comparison, we also show the result for the excitation gap without quartet correlations. In general, $E_{\rm gap}$ with quartet correlations becomes larger than in the case without them. This behavior is natural, since a larger energy is needed to excite a single carrier accompanying the breakup of quartets, compared with the case with only two-body pairing. Also, one can find that the difference between the cases with and without quartet correlations becomes smaller with increasing $\rho$. At first glance, this tendency seems opposite to the effect of the quartet correlations on the ground-state energy $E$ shown in Fig. 2, but these results are in fact consistent once one considers how these quantities are associated with quartet correlations in a relative-momentum-resolved way. While the lower relative-momentum sector plays a significant role for the quartet corrections to $E$, which involves the $q$ summation, $E_{\rm gap}$ reflects the quartet correlations at $q = q_{\rm min}$, which is relatively large compared with the low relative momenta dominated by the quartet formation. Indeed, the difference between $E^{w}_q$ and $E_q$ near $q = q_{\rm min}$ is smaller than that at $q \simeq 0$. In this regard, spectroscopic measurements of the in-medium biexciton energy, which are not momentum-resolved, would give a similar tendency of the $\rho$ dependence to that shown in Fig. 4.

IV. SUMMARY AND PERSPECTIVES

In this paper, we investigated the microscopic properties of biexciton-like quartet condensates in an electron-hole system within the quartet BCS theory at the thermodynamic limit. The variational approach was applied to a three-dimensional electron-hole system, described as four-component fermions with short-range attractive interactions (corresponding to the Coulomb electron-hole attraction). Numerically solving the variational equations, we obtained the ground-state energy density as a function of the fermion number density. On the one hand, the ground-state energy density decreases with increasing number density in the dilute region because of the energy gains associated with biexciton formation. On the other hand, this tendency of the ground-state energy density turns into an increase in the high-density regime due to the Fermi degenerate pressure. To see the role of quartet correlations, we compared the results with and without quartet correlations and pointed out that the quartet condensation leads to a lower ground-state energy.
Moreover, we showed the density dependence of the excitation gap, defined through the minimum of the dispersion in analogy with the usual BCS theory. While the quartet correlations induce a larger excitation gap over the whole density regime, the difference from the result with only pairing correlations can become smaller in the high-density regime, because the dispersion minimum itself does not involve the quartet correlations associated with lower momenta. In this paper, we employed a simplified model to explore qualitative features of the condensation energy of the biexciton-like quartet state. For further quantitative investigations of the electron-hole droplet phase, more realistic models with long-range interactions (e.g., Coulomb interactions and their screening) and multi-body forces need to be applied. For semiconductor systems such as layered transition metal dichalcogenides, the two-dimensional model is relevant. While a quadratic dispersion is adopted in this paper, the band structure of each material should be considered. Nevertheless, these extensions can easily be made within our quartet BCS theory at the thermodynamic limit, by replacing the three-dimensional momentum summation with a two-dimensional one and the fermionic dispersion $\varepsilon_{i,\mathbf{p}}$ with realistic bands, respectively. Also, quantum fluctuations associated with excited two- and four-body states can be important. An energy density functional involving these corrections would be useful for further developments not only in condensed matter but also in nuclear and cold-atomic physics. Moreover, since actual electron-hole systems are realized as non-equilibrium steady states, the interactions with environments, as in an open quantum system, would also be an interesting topic. These are left for future works.

Appendix A: Variational equations

In this appendix, we derive the variational equations for biexciton-like quartet condensates. The condition $\delta\langle\Psi|H|\Psi\rangle = 0$ leads to the variational equations for $v_{q,S,S_z}$, $x_{q,e(h)}$, and $w_q$ shown in the main text.

Appendix B: Exciton energy

Here, we derive the exciton energy in the present model with the contact-type electron-hole interaction. The two-body wave function $|\psi_2\rangle$ for an $S_z = +1$ exciton is expanded with amplitudes $\phi_{\mathbf{q}}$, where $|0\rangle$ is the vacuum state. The variational equation with respect to $\phi^*_{\mathbf{q}}$, given by $\frac{\partial}{\partial \phi^*_{\mathbf{q}}}\langle\psi_2|H^0_e + H^0_h + V_{e\text{-}h} + B_X|\psi_2\rangle = 0$, leads to $\phi_{\mathbf{q}}(\varepsilon_{e,\mathbf{q}} + \varepsilon_{h,-\mathbf{q}} - W) = U_C \sum_{\mathbf{q}'} \phi_{\mathbf{q}'}$, where $\Lambda$ is the momentum cutoff. In the limit $\Lambda \gg \sqrt{2M_r B_X}$, we obtain the closed-form exciton energy of Eq. (B5).
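The bound-state condition above can be checked numerically. The sketch below is not the authors' code and uses illustrative dimensionless parameters (units with $2M_r = 1$, so the relative kinetic energy is $q^2$); it solves $1 = U_C \sum_{\mathbf{q}} 1/(q^2 + B_X)$ with a sharp cutoff $\Lambda$ for the exciton energy $B_X$. With the coupling tuned as in the main text, the corresponding physical values are $B_X \simeq 225$ meV at $B_{XX} = 500$ meV.

```python
# Minimal sketch, not the authors' code: the exciton (two-body) binding
# energy B_X for a contact coupling -U_C with a sharp momentum cutoff
# Lambda, from 1 = U_C * int d^3q/(2 pi)^3 1/(q^2 + B_X), in units with
# 2*M_r = 1 so that the relative kinetic energy is q^2. Parameter
# values are illustrative, not those used in the paper.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

Lam = 100.0   # momentum cutoff (illustrative, mirroring Lambda = 100 k_F)
U_C = 0.8     # contact coupling strength (illustrative)

def bubble(B: float) -> float:
    """Momentum integral int_0^Lambda q^2/(2 pi^2) / (q^2 + B) dq."""
    val, _ = quad(lambda q: q**2 / (2.0 * np.pi**2 * (q**2 + B)), 0.0, Lam)
    return val

# The bound state solves f(B_X) = U_C * bubble(B_X) - 1 = 0; the bracket
# covers the monotonic decrease of bubble(B) from its B -> 0 limit.
B_X = brentq(lambda B: U_C * bubble(B) - 1.0, 1e-9, 1e7)
print(f"B_X = {B_X:.4f} (in units of the squared momentum scale)")
```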
2022-01-11T02:16:02.058Z
2022-01-09T00:00:00.000
{ "year": 2022, "sha1": "40b1a6bb78678ec7b1a82b4707fce144b274b796", "oa_license": "CCBY", "oa_url": "http://link.aps.org/pdf/10.1103/PhysRevResearch.4.023152", "oa_status": "GOLD", "pdf_src": "Arxiv", "pdf_hash": "3bd50b3187e721f9391a0c9b5ea610268d5a2b91", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
231961330
pes2o/s2orc
v3-fos-license
Pollen morphology and variability of Polish native species from genus Salix L.

The pollen morphology of 24 Salix species native to Poland, representing two subgenera, 17 sections and five subsections occurring in Poland, was studied. The aim of this study was to discover the taxonomical usefulness of the pollen features under analysis, and to investigate the ranges of their interspecific variability. In total, 720 pollen grains were studied. They were analysed with respect to seven quantitative features (length of the polar axis, P; equatorial diameter, E; length of the ectoaperture, Le; exine thickness, Ex; and the P/E, Ex/P and Le/P ratios) and the following qualitative ones: pollen outline and exine ornamentation. The most important features were the exine ornamentation (muri, lumina and margo) characters. The pollen features should be treated as auxiliary, because they allowed us to distinguish eight individual Salix species and five groups of species. Statistical analysis of the studied traits indicated a high variability among the tested species. The most variable biometric features were P, E and Le, while lower variability occurred in P/E, Le/P and d/E.

Introduction

The genus Salix L. (Salicaceae) consists of deciduous (and, rarely, semi-evergreen) trees and shrubs, including dwarf forms with decumbent shoots, mainly distributed across the cold and moderate climate zones of the Northern Hemisphere. The number of Salix species is estimated to be from 330 up to even 530 worldwide, with the highest species concentrations in northern Eurasia, northern North America, and the mountains of China [1][2][3][4][5]. There are 65 willow species described in Europe, including 27 which are native to Poland [6,7]. Following the generic and subgeneric treatments of Eurasian Salix taxa proposed by Skvortsov [3], the Polish species represent three subgenera, 17 sections and five subsections. Salix is considered one of the most taxonomically difficult plant genera, and its infrageneric taxonomy is still in progress. This is a result of, among others, a very simplified and undifferentiated flower structure, which limits the use of generative traits in Salix systematics. All willow species are dioecious, and their flowers and leaves usually develop at different times. Therefore, field observations of an individual plant are rather inconvenient. Many Salix species exhibit significant morphological variation, correlated with high infraspecific genotypic polymorphism. This is often reflected in intra-species division into numerous subspecies, varieties and forms. At the same time, the differences between some Salix species are difficult to define [3,5,8]. Willows are also highly cross-compatible, and numerous hybrids have been recognised, both natural and artificial. It is difficult to estimate the number of natural hybrids in Europe, or even in certain regions of the continent. Meikle [9] and Rechinger [10] believed that the formation of spontaneous hybrids between Salix species within Great Britain was a frequent phenomenon. Field observations in a transect from Greece to arctic Norway recorded a total of 20 willow species, along with 12 hybrids [11]. Similarly, field and herbarium analyses of the occurrence of willow species across the whole of Latvia revealed as many as 68 hybrids and 20 "pure" species of Salix [12]. In turn, Oberprieler et al. [13] provided molecular and phytochemical evidence that S. ×rubens is a natural hybrid between Salix alba and S.
fragilis. The long history of Salix cultivation has resulted in the selection of over 850 cultivars, of which 734 are accepted by the International Poplar Commission [14]. The factors mentioned above contribute to the fact that not many researchers have undertaken extensive palynological investigations of this genus. Most of the palynological studies of Salix have focused on a specified geographical region with a limited number of species, up to 10 (compare [15][16][17][18][19][20][21][22][23][24][25][26] and others). One of the exceptions was Kim and Zsuffa's elaboration [27], describing the pollen morphology of 15 Korean species, two varieties and one form, representing six sections of the subgenus Salix L. Later, Sohma [28] examined the pollen grains of 72 Asian Salix taxa. He noted certain differences in the exine patterns and, based on these differences, described eight types of exine ornamentation. The "Pal dat" database, established by Diethart and Halbritter [29][30][31], contains brief descriptions of 12 Salix species. Knowledge of the morphological structure of Salix pollen grains is incomplete not only due to the limited number of species analysed. Researchers usually limit their analyses to individual and/or the most important pollen grain features (mainly pollen size and shape, or exine ornamentation). As yet, save the study by Kim et al. [22], no other research on the interspecific variability of willow species has been undertaken. It was assumed that the examined pollen grain characteristics could help to identify the individual Salix taxa under analysis. Therefore, the main aim of this study was to discover the taxonomical usefulness of the quantitative and qualitative morphological features of pollen. The second goal was to describe, for the first time, the interspecific morphological variability of the pollen grains from the studied species of the genus Salix. It was assumed that the research results would be representative thanks to a complex comparative analysis of the diagnostic, morphological features of pollen from suitably selected plant material, representing all the intrageneric taxa distinguished at the present time (24 species from all subgenera, sections and subsections of willows found in Poland). Due to SEM observations detailed pollen morphology of the 6 studied species (S. dasyclados, S. myrsinifolia, S. myrtilloides, S. rosmarinifolia, S. silesiaca, S. starkeana) has not been described in the palynological literature so far. Palynological analysis The study was conducted on 24 of the 26 species of Salix native to Poland. The taxa under examination represented all three subgenera, all sections (17) and subsections (5) of willows found in Poland, including two legally protected species in Poland-S. lapponum and S. myrtilloides (collected from a specific herbarium). A list of the species analysed with their affiliation to particular taxa is shown in Table 1. The presented classification system of analysed Salix species was partly based on the current phylogenetic studies [32][33][34]. According to the molecular data, subgenus Chamaetia and Vetrix are merged and subgenus Chamaetia/Vetrix is accepted. Traditional division into the sections and subsections proposed by Skvortsov [3] is given, because it is practically the only infrageneric system that includes all 24 currently studied species of willows. Verification of the taxa was performed by taxonomist Prof. Jerzy Zieliński (Institute of Dendrology, Polish Academy of Sciences in Kórnik) [6]. 
Several randomly selected inflorescences (flowers) were collected from the Dendrological Garden of UPP in Poznań (permission granted by the director, dr. T. Maliński), from the Botanical Garden of Adam Mickiewicz University in Poznań (permission granted by the director, professor J. Wiland-Szymańska), from the field within the Wielkopolska region (no permission required), and from a herbarium belonging to the Institute of Botany at the Polish Academy of Sciences in Kraków (permission granted by the herbarium curator, dr. hab. Beata Paszko). All the plants grew on natural sites in Poland, Finland and Russia (Table 1). Five Salix species were collected outside Poland due to their limited occurrence in the country. The plant material was stored in the herbarium of the Department of Forest Botany at the Poznań University of Life Sciences (POZNF). The authors unequivocally state that the material for the current research was collected in accordance with the principles of ethics, without threatening the future existence of the populations of the studied species. In accordance with the study by Wrońska-Pilarek et al. [35], each sample consisted of 30 randomly selected, mature and correctly formed pollen grains derived from a single individual (shrub or tree). In total, 720 pollen grains were studied. The pollen grains were prepared for light (LM) and scanning electron microscopy (SEM) using the standard methods described by Erdtman [36]. The prepared material was divided into two parts: one part was immersed in an alcohol solution of glycerine (for LM) and the other in 96% ethyl alcohol (for SEM). Morphological observations were carried out using both a digital light microscope (Levenhuk D320L) and a scanning electron microscope (Jeol 7001TTLS). Eight quantitative features of the pollen grains were analysed, i.e. the length of the polar axis (P), the equatorial diameter (E), the length of the ectoaperture (Le), the exine thickness (Ex), and the P/E, Le/P, Ex/P and Ex/E ratios. The pollen shape classes (P/E ratio) were adopted according to the classification proposed by Erdtman [15]: oblate-spheroidal (0.89-0.99), spheroidal (1.00), prolate-spheroidal (1.01-1.14), subprolate (1.15-1.33) and prolate (1.34-2.00). The following qualitative features were also analysed: the outline, shape and exine ornamentation.

Statistical analysis

Firstly, the normality of the distributions of the studied traits (P, E, P/E, Le, Ex, Le/P, Ex/P and Ex/E) was tested using the Shapiro-Wilk normality test [39]. A multivariate analysis of variance (MANOVA) was performed using the MANOVA procedure in GenStat 18, based on the model Y = XT + E, where Y is the (n×p)-dimensional matrix of observations, n is the total number of observations, p is the number of traits (in this study p = 8), X is the (n×k)-dimensional design matrix, k is the number of species (in this study k = 24), T is the (k×p)-dimensional matrix of unknown effects, and E is the (n×p)-dimensional matrix of residuals. A minimal sketch of this testing workflow is given below.
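The sketch below is a minimal illustration of this workflow, not the authors' GenStat code: it maps P/E ratios onto the Erdtman shape classes quoted above and runs the Shapiro-Wilk normality test and a one-way species-effect ANOVA with SciPy. The measurement arrays are hypothetical placeholders, not the study's data.

```python
# Minimal sketch, not the authors' GenStat code: Erdtman shape classes
# from the P/E ratio (class limits exactly as quoted in the text), the
# Shapiro-Wilk normality test, and a one-way species-effect ANOVA.
from scipy.stats import shapiro, f_oneway

def erdtman_shape(p_over_e: float) -> str:
    """Map a P/E ratio to Erdtman's pollen shape class."""
    if 0.89 <= p_over_e <= 0.99:
        return "oblate-spheroidal"
    if abs(p_over_e - 1.00) < 0.005:  # the spheroidal class is the single value 1.00
        return "spheroidal"
    if 1.01 <= p_over_e <= 1.14:
        return "prolate-spheroidal"
    if 1.15 <= p_over_e <= 1.33:
        return "subprolate"
    if 1.34 <= p_over_e <= 2.00:
        return "prolate"
    return "out of range"

print(erdtman_shape(1.19))  # overall mean P/E reported below -> "subprolate"

# Hypothetical polar-axis lengths P (micrometres); 30 grains per sample
# in the real design, shortened here for readability.
P = {
    "S. reticulata": [17.1, 17.6, 16.9, 17.5, 17.2],
    "S. retusa":     [26.8, 27.1, 26.5, 27.3, 26.9],
    "S. caprea":     [22.0, 22.9, 21.7, 22.4, 22.6],
}
for species, values in P.items():
    _, p_norm = shapiro(values)       # Shapiro-Wilk test of normality
    print(f"{species}: Shapiro-Wilk P = {p_norm:.3f}")

F, p_val = f_oneway(*P.values())      # one-way ANOVA for the species effect
print(f"ANOVA: F = {F:.1f}, P = {p_val:.2e}")
```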
Following this, one-way analyses of variance (ANOVA) were performed in order to verify the null hypothesis of a lack of species effect, as opposed to the alternative hypothesis of significant differences among the species in terms of the values of the observed traits, independently for each trait, based on the model y_ij = μ + τ_i + ε_ij, where y_ij is the jth observation of the ith species, μ is the general mean, τ_i is the effect of the ith species, and ε_ij is the random error. The minimal and maximal values of the traits, as well as the arithmetic means and coefficients of variation (cv, in %), were calculated. Moreover, Fisher's least significant differences (LSDs) were estimated at a significance level of α = 0.001. The relationships between the observed traits were assessed based on Pearson's correlation coefficients using the FCORRELATION procedure in GenStat 18. The results were also analysed using multivariate methods. The Mahalanobis [40] distance was adopted as a measure of "polytrait" species similarity, the significance of which was verified by means of the critical value D_α, called "the least significant distance" [41]. The differences among the analysed species were verified by cluster analysis using the nearest neighbour method and Euclidean distances. All the analyses were conducted using the GenStat 18 statistical software package.

General morphological description of pollen

A description of the pollen grain morphology of the Salix taxa under analysis is given below and illustrated in the SEM photographs (Figs 1A-1H to 4A-4H). The morphological observations for the quantitative features are summarized in Table 2. Overall, the average length of the polar axis (P) was 22.27 μm, with extreme values of 15.75 μm in S. reticulata and 28.71 μm in S. retusa (Table 2, Fig 5). At the same time, these two species were characterised by, on average, the smallest P (17.34 μm in S. reticulata) and the largest (26.94 μm in S. retusa). On average, relatively small P axes were also observed in S. pentandra, S. phylicifolia and S. triandra (less than 19.00 μm), and relatively high values of P (ca 25 μm) were noticed in S. cinerea, S. dasyclados, S. lapponum and S. starkeana. Considering all the studied species, the mean length of the equatorial diameter (E) was 18.94 μm, with the smallest value of this feature totaling 13.58 μm (in S. reticulata) and the largest 27.40 μm (in S. retusa) (Table 2). The shortest mean equatorial diameter occurred in S. phylicifolia (15.51 μm), while the longest was in S. retusa (20.40 μm). On average, relatively small E axes (less than 16.00 μm) were also observed in S. reticulata and S. triandra, and relatively high values of E (above 20 μm) in S. cinerea, S. dasyclados, S. fragilis, S. myrsinifolia, S. myrtilloides and S. starkeana. The outline in polar view was mostly trilobate, less frequently circular or elliptic, whereas in equatorial view the outline was mostly elliptic and only sporadically circular (Fig 1A-1E). Generally, the mean P/E ratio was 1.19, and it ranged from 0.82 in S. retusa to 1.57 in S. caprea (Table 2). On average, the smallest value of the P/E ratio was found in S. pentandra and S. retusa (1.08) and the largest in S. caprea (1.33). The largest range of the P/E ratio was found in S. caprea and the smallest in S. pentandra (Table 2, Fig 6). In all the investigated Salix species, the subprolate (62.7%) and prolate-spheroidal (30.8%) pollen shape classes predominated. Sporadically, prolate pollen shapes (approx.
5%) were noted, whereas the total share of the oblate-spheroidal and spheroidal pollen classes did not exceed 1%. The prolate-spheroidal class was most frequent in S. pentandra. The exine consisted of two layers: the sexine was usually slightly thicker than the nexine. Overall, the mean exine thickness was 1.63 μm (with a range from 1.17 μm in S. viminalis up to 2.20 μm in S. retusa) (Table 2). On average, the exine was the thinnest in S. fragilis (1.41 μm) and S. purpurea (1.42 μm), while the thickest occurred in S. retusa (1.90 μm). Overall, the relative thickness of the exine (Ex/P ratio) averaged 0.07 (ranging from 0.05 to 0.11) and the Ex/E ratio was 0.09 (0.06-0.13) (Table 2). The above results indicate that the exine was characterised by an almost identical thickness along the entire pollen grain. The pollen grains under analysis usually had three apertures (colpori or colpi). The ectoapertures were arranged meridionally and regularly, and they were quite evenly spaced and long, with a mean length of 18.03 (11.54-24.67) μm (Table 2). On average, the length of the ectoapertures (colpi) constituted 81% (from 65 to 97%) of the polar axis length, with the shortest colpi found in S. reticulata (11.54 μm) and the longest in S. myrtilloides (24.67 μm). The colpi were wide and elliptic in outline. The ectoaperture membrane was usually ornamented (granulate or microgranulate) and sometimes partly ornamented and partly psilate (Fig 1F-1H). In the ectoapertures of the majority of the studied species, a margo was observed (e.g. in S. alba, S. caprea, S. cinerea, S. daphnoides, S. repens, S. reticulata, S. retusa, S. serpyllifolia, and S. triandra). The margines were quite wide, darker than the rest of the ectoaperture, psilate on the ectoaperture side and reticulate, with a few diffuse lumina of small but varying diameters, on the mesocolpium side. Exine ornamentation was reticulate, created by the lumina and muri, which varied in shape and size (Figs 1-4). The studied pollen grains were heterobrochate, which means that the pollen had a reticulate wall with lumina of different sizes and often irregular outlines. Their diameters ranged from 0.4 to 2.0 μm. The lumina were at their maximum size in the center of the mesocolpium area and then gradually or suddenly decreased in size towards the poles and colpi. In many of the studied species, single to numerous free-standing columellae of different heights were present within the lumina (e.g. in S. cinerea, S. daphnoides, S. repens, and S. retusa) (Figs 2D, 2E, 4A and 4C). Such columellae were single or absent in, for example, S. alba, S. caprea, S. purpurea, and S. reticulata (Figs 2A, 2C, 3H and 4B). The columella shape varied from spheroid to elliptical and polygonal, with rounded or triangular angles. The features of the muri were also very variable. The differences were in the height, the width, and the rounded or angled margins of the muri. The borders of the muri were undulate or erect (see: Pollen key). The pollen of the individual Salix species was classified into two types (1 and 2), based on the exine ornamentation classification proposed by Sohma [28]. In that study, eight exine ornamentation types were distinguished.
The greatest number of the studied species (17) belonged to type 2, which was characterized by wedge-shaped muri, considerably variable in width. The differences in the shape and dimensions of the lumina were also considerable. Type 1 was represented by three species (S. alba, S. eleagnos, and S. reticulata; Figs 2A, 2G and 4B). According to Sohma [28], type 1 was very specific and consisted of pollen grains with conspicuous keeled muri. Indeed, the muri were often acutely pointed at the trifurcate points where the immediately neighbouring meshes joined together. The side walls of the muri delimiting the meshes were relatively straight, curved, or sinuous. The lumina were almost isodiametric and ellipsoidal to roundly polygonal in outline, and separated by relatively narrow muri. Among the examined species, four had both types of exine ornamentation (S. phylicifolia, S. rosmarinifolia, S. triandra, and S. viminalis; Figs 3G, 4G, 4F and 4H).

Pollen key

Pollen grain variability

The results of the MANOVA indicated that all the samples were significantly different with regard to all eight quantitative traits (Wilks' λ = 0.01053; F(23;696) = 23.44; P < 0.0001). The results of the ANOVA indicated that the main effects of the species were significant for all eight observed traits (Table 2). The mean values, ranges and coefficients of variation (cv) for the observed traits indicated a high variability among the tested samples, and significant differences were found in terms of all the analysed morphological traits (Table 2). The intraspecific and inter-individual variability of the Salix pollen grains was studied based on eight selected quantitative features. Statistical analysis of the studied traits indicated a high variability among the tested species. The most variable biometric traits were P, E and Le, while lower variability occurred in P/E, Le/P and d/E (Table 2). In the dendrogram obtained by agglomerative grouping using the Euclidean distance method, all the examined Salix species were divided into four groups (Fig 8). The first group (I) comprised one species, S. retusa, while the second one (II) consisted of five species. The third group (III) included eight species and the final one (IV) comprised ten species. The most distinct species was S. retusa from group I, while from the other groups S. caprea and S. fragilis (group III) and S. myrtilloides (group IV) were also distinct. Based on the dendrogram, taxonomic relationships between the studied species were analysed at the subgenus and section levels. The species belonged to two subgenera (Salix and Chamaetia/Vetrix) and formed four groups (Table 2, Fig 8). In general, the willows did not form groups consistent with the currently used taxonomic division of the genus Salix into subgenera and sections. They fell into four clades with different numbers of species, including a separate group with only one species, S. retusa. Species from the subgenus Salix belonged to three different groups. Similarly, representatives of the most numerous sections, such as Chamaetia and Vetrix, were scattered. Multi-trait distances between the studied species, determined by Mahalanobis distances, showed that the most similar were S. eleagnos and S. myrsinifolia (0.81), while S. viminalis-S. aurita and S. lapponum-S. dasyclados were also similar (1.08) (Table 3). A minimal sketch of these distance and clustering computations is given below.
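The sketch below is a minimal illustration, not the authors' GenStat analysis: it computes Mahalanobis distances between species trait-mean vectors (with a covariance estimate standing in for whatever pooled covariance the original procedure used) and performs nearest-neighbour (single-linkage) clustering on Euclidean distances, as in Fig 8. The 24 × 8 data matrix is a random placeholder for the species-by-trait table.

```python
# Minimal sketch, not the authors' GenStat analysis: Mahalanobis
# distances between species trait-mean vectors and nearest-neighbour
# (single-linkage) clustering on Euclidean distances. The data matrix
# is a random placeholder for the 24 species x 8 traits table.
import numpy as np
from scipy.spatial.distance import mahalanobis, pdist
from scipy.cluster.hierarchy import fcluster, linkage

rng = np.random.default_rng(0)
means = rng.normal(size=(24, 8))           # species-by-trait means (placeholder)
VI = np.linalg.inv(np.cov(means, rowvar=False))  # inverse covariance estimate

d01 = mahalanobis(means[0], means[1], VI)  # distance between two species
print(f"D(species 0, species 1) = {d01:.2f}")

Z = linkage(pdist(means, metric="euclidean"), method="single")
groups = fcluster(Z, t=4, criterion="maxclust")  # cut dendrogram into 4 groups
print(groups)
```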
The most distinct species were S. retusa and three willows: S. reticulata (11.20), S. phylicifolia (9.86) and S. pentandra (9.57) (Table 3). Fig 9 shows the biplot of the variability of the pollen grain traits of the 24 studied Salix species in terms of the first two principal components. In the graph, the coordinates of the point for a particular species are the values of the first and second principal components, respectively. The first two principal components accounted for 97.68% of the total multivariate variability between the individual species. The goal of the study was to establish whether pollen grains collected from various Salix species growing in different habitat conditions (soil and climate) would differ from one another. Four groups of species were distinguished. The majority of the examined species were found in the first, large group (I). Just one or two willows (II: S. retusa; III: S. reticulata and S. pentandra; IV: S. caprea and S. lapponum) fell into the other three groups (Fig 9). The first group of species (I) was positively correlated with Ex, P/E, Le/P, Ex/E and Ex/P. Two species, S. retusa and S. fragilis, were positively correlated with E (Fig 9).

Discussion

Based on the palynological literature, it can be concluded that the genus Salix has relatively uniform pollen grains, as pollen shape and size are roughly similar and the exine ornamentation is reticulate [21,22,24,25,27,28]. The results of this study confirmed all these conclusions. However, many researchers have shown that a detailed analysis of the exine sculpture can be used to distinguish particular species [16,19,22,25,[27][28][29][30][31]. The most precise ornamentation division in Salix was developed by Sohma [28]. Straka [16] concluded that the pollen grains of 30 European Salix species could be classified into six types, based mainly on the exine ornamentation and the character of the ectocolpus margin. He also noted that S. silesiaca, S. herbacea, S. daphnoides, S. pentandra and S. alba were likely to be distinguished from each other, whereas most of the other taxa could not be distinguished. Based on pollen morphology, and mainly on exine ornamentation traits, Faegri and Iversen [19] identified four pollen types (S. herbacea, S. polaris, S. pentandra and S. glauca) among Salix species. Several palynologists have used pollen traits for taxonomic studies of the genus Salix. Kim and Zsuffa [27] studied the pollen morphology of 15 Salix species belonging to six sections of the subgenus Salix and stated that, from a pollen-morphological point of view, the subgenus Salix is stenopalynous. The cited authors also examined the taxonomic relations and determined that S. jessoensis (section Subalbae) was the most distinct of the species studied. The species of the section Humboldtianae were the most evolved in this subgenus, with a closer relationship to the section Amygdalinae than to any other section of this subgenus. Sohma [28] examined the pollen grains of 72 taxa of the genus Salix and noted certain differences in the exine patterns. Based on these differences, he described eight major types of reticulate exine ornamentation. In that study, the height, width and course of the muri and the diameter of the lumina were considered. Some of Sohma's types were related to the genus Salix sections.
Some sections were heterogeneous ˗ Humboltdianae (type 2), Amygdalinae (type 2), Pentandrae (type 1), Salix (type 6) and Helix (type 2a) had distinct types, other sections Subalbae (types 3, 5 and 8), Longifoliae (types 3 and 4), Hastatae (types 2-4, and 7), Vetrix (types 2 and 7) and Daphnella (types 1 and 2), while sections Glabrella, Vimen and Subviminalis were homogeneous (types 2). Babayi et al. [25] distinguished six pollen types (S. alba, S. issatissensis, S. elbursensis, S. excelsa and a type with three species ˗ S. acmophylla, S. zygostemon and S. cinerea). According to Sohma [28], section Salix conformed in its reticulation pattern with type 6, but the results from the study of Babayi et al. [25] did not support this. The species showed heterogeneous patterns, e.g. S. alba had type 6, S. excelsa ˗ types 3 and 6, and S. issatissensis ˗ type 1. Babayi et al. [25] reported that the exine was reticulate and the characteristics of the muri, such as the shape and size of the lumen, varied in different species. The lumen was isodiametric or heteromorphic and the patterns were orbicular, elliptic or polygonal with rounded angles. According to Babayi et al. [25], the exine features were different among the Salix species and could be used as diagnostic characteristics. In the current examination, the Salix species were divided into two pollen exine ornamentation types, based on the Sohma [28] classification. Within the studied species from the subgenera Salix and Chamaetia/Vetrix, two types of exine ornamentation (1 and 2) were found. This partly confirmed the thesis of Kim and Zsuffa [27], who claimed that, for example, the subgenus Salix had pollen of a similar structure. In contrast, Sohma [28] and Babayi et al. [25], who studied more species from these subgenera, distinguished five exine ornamentation types (1,2,4,6,7). According to the present research, the species from individual sections also usually had a similar exine ornamentation type. Only a detailed study of the exine ornamentation, contained in the attached pollen key, made it possible to distinguish eight of the 24 willow species studied, while the other species formed small groups of two to four species with very similar pollen characteristics. Most of the authors cited above described the pollen grains of genus Salix as small (10-25 μm) [15,24,25,[28][29][30][31], and rarely as medium (25.1-50 μm) [15,24,28]. The researchers agreed that pollen size was not a useful feature to distinguish the individual Salix species. Only according to Babayi et al. [25] was the size of the pollen grains of particular Salix species very variable. The measurements made in this study yielded different results, confirmed by other authors. The range of the length of the polar axis (P) was narrow (15.75-28.71 μm). Moreover, most of the tested pollen grains were small, while others usually only exceeded 25.1 μm slightly, which was the maximum value for medium-sized grains. In addition, many species had similar ranges of the trait P (Fig 5), therefore pollen size is considered a poor criterion to distinguish the individual Salix species. According to other authors, the pollen shape was various, oblate to subprolate [24], spherical to subprolate [25] or spherical [29][30][31]. The results from the study presented here were similar but much more accurate, because five pollen shape classes were distinguished, of which subprolate and prolate spheroidal pollen shapes dominated. Kim et al. 
[22] studied an inter-and intra-specific variation of pollen grains in S. discolor, S. eriocephala, S. lucida, and S. petiolaris. In their opinion, the pollen grains demonstrated significant interspecific variation, unequal distances between the species, and various degrees of intraspecific variation. Current statistical analyses give similar results. The presented classification system of analysed Salix species was partly based on the current phylogenetic studies, but division into the sections and subsections proposed by Skvortsov [3] was used. Skvortsov's classification is still the most comprehensive systematic survey for the willow species of Central and Eastern Europe. Clear and well-founded infrageneric grouping and broad species concept, based on extensive field observations and herbarium studies are strong advantages of this traditional system [10]. However, modern studies on the molecular phylogenies partly dispute Skvortsov's infrageneric classification [32][33][34][42][43][44]. Among others, Wu and co-workers [32] considered the section Triandrae should be excluding from subgenus Salix s.l. and treated as separate subgenus with solitary species S. triandra. Taxonomical distinctiveness of S. triandra was also proved in other studies [42,43]. The current research did not support so unequivocal evidence. The micromorphological patterns of exine (mainly the character of edges of muri) allowed to separate S. triandra pollen grains from the other tested species of subgenus Salix. At the same time pollen grains of this species were morphologically similar to S. myrtilloides, S. silesiaca and S. rosmarinifolia (subgenus Chamaetia/ Vetrix). The differences among the pollen grains of all these species were only in visibility of colpus margo. Current analysis of taxonomic relationships also did not confirm a separateness of pollen of S. triandra. Modern molecular studies undermined Skvortsov's division into subgenus Chamaetia and Vetrix. Wagner and colleagues [33] focusing on European species of both subgenera confirmed the monophyly of the Chamaetia/Vetrix clade by genomic RAD sequencing markers. They postulated merging subgenus Chamaetia and Vetrix. The same conclusion was drawn from other molecular studies [32,43,44]. Generally, these results were in accordance with current findings. The analysed species did not form groups with the taxonomy. It can be seen when analysing both the qualitative and quantitative characteristics of the pollen grains of the currently studied species. For example, although the pollen of all (five) species of the section Vetrix represented the same morphological type, only S. cinerea and S. aurita were practically indistinguishable. The morphological similarity of these species was also noted by Sohma [28]. On the other hand, a very close resemblance to S. cinerea and S. aurita was observed in pollen grains of S. lapponum (section Villosae) and S. purpurea (section Helix). Instead, the current analysis of quantitative features of pollen grains revealed the distinctiveness of S. cinerea and S. aurita. In this regard S. cinerea was closest to S. starkeana (Section Villosae), S. lapponum and S. dasyclados (section Vimen) and S. aurita to S. viminalis (section Vimen), S. purpurea and S. repens (section Incubaceae). 
In conclusion, the presented study showed that, according to all the analysed biometric pollen features of the 24 Salix species, and mainly the exine ornamentation, there was no clear relationship between the pollen of species representing the same subgenera and sections. The pollen traits were most important at the species level.
Expanding beyond endoscopy: A review of non-invasive modalities in Barrett's esophagus screening and surveillance

Barrett's esophagus (BE) is a condition that results from replacement of the damaged normal squamous esophageal mucosa by intestinal columnar mucosa and is the most significant predisposing factor for the development of esophageal adenocarcinoma. Current guidelines recommend endoscopic evaluation for screening and surveillance based on various risk factors, an approach with limitations such as invasiveness, availability of a trained specialist, patient logistics, and cost. Trans-nasal endoscopy is a less invasive modality but still has similar limitations, such as limited availability of trained specialists and cost. Non-endoscopic modalities, in comparison, require minimal intervention, can be performed during an office visit, and have the potential to be a more ideal choice for mass public screening and surveillance, particularly in patients at low risk for BE. These include newer generations of esophageal capsule endoscopy, which provides direct visualization of BE, and tethered capsule endomicroscopy, which can obtain high-resolution images of the esophagus. Various cell collection devices coupled with biomarkers have been used for BE screening. Cytosponge, in combination with TFF3, as well as EsophaCap and EsoCheck, have shown promising results in various studies when used with various biomarkers. Other modalities, including circulatory microRNAs and volatile organic compounds, have demonstrated favorable outcomes. Use of these cell collection methods for BE surveillance is a potential area of future research.

INTRODUCTION

The esophagus is normally lined by stratified squamous epithelium. The mucosa of the esophagus is regularly exposed to gastric acid and bile through reflux from the stomach, which can result in mucosal damage. The injury is usually repaired by regeneration of the squamous mucosa. However, in some patients, the mucosal damage is repaired with a metaplastic columnar epithelium with gastric and intestinal features. This condition is termed Barrett's esophagus (BE) and is recognized as the major risk factor for the development of esophageal adenocarcinoma (EAC) [1]. It is estimated that as many as 5.6% of United States adults have BE, based on modeling using EAC rates [2]. Of note, endoscopic prevalence is widely variable based on the geographic location of the population studied [3-6]. Current estimates are likely an underrepresentation given that upper endoscopy is required for diagnosis. Mean age at BE identification is approximately 55 years, and BE is two to threefold more common in men than in women [7,8]. Among patients undergoing upper endoscopy, African Americans have a significantly lower BE prevalence than Caucasians [9,10]. Accepted BE risk factors besides Caucasian race include age > 50 years, male sex, chronic (> 5 years) or frequent (> once weekly) gastroesophageal reflux disease (GERD), smoking, obesity (body mass index > 35), and family history of BE or EAC in a first-degree relative [11]. Endoscopy is currently the mainstay for BE diagnosis and management. An endoscopic approach is inappropriate for mass screening, as it is not cost effective, resulting in missed opportunities to discover patients with undiagnosed BE. Current screening guidelines target individuals with GERD and multiple risk factors for endoscopy to detect BE, and then enroll those with it into a surveillance program.
However, this approach does not take into consideration that most EAC cases do not have a prior diagnosis of BE [12]. This provides an excellent opportunity for non-endoscopic techniques to be used as a public health tool for screening (identification of disease) and surveillance. Individuals in the low-risk BE strata are best suited for non-endoscopic modalities, as these require minimal intervention, whereas the high-risk BE group would require more precise but also more invasive endoscopy-based technologies [13]. Non-endoscopic techniques provide cost-effective, less invasive screening tools and, more importantly, increase the ease of access to screening opportunities.

WHO DO WE SCREEN

Screening for BE is recommended in patients with multiple known risk factors, including chronic or frequent GERD, age greater than 50 years, male sex, Caucasian race, smoking, and obesity, along with a family history of a 1st-degree relative with BE or EAC. Currently, screening the general population with only GERD symptoms is not recommended per society guidelines [14,15].

ESOPHAGEAL IMAGING DEVICES

Currently, high-definition white light endoscopy is the mainstay of BE screening and surveillance. Despite being safe and well-tolerated, it is invasive and associated with higher costs and side effects [16]. Trans-nasal endoscopy (TNE, Figure 1) has been proposed as a less invasive alternative to screen for BE. TNE uses an ultra-thin endoscope (diameter < 6 mm) in the outpatient setting to directly visualize the distal esophagus. This has occurred in primary care offices [17,18] and mobile vans for community BE screening [19]. Drawbacks of TNE are limited availability, cost, the need for a decontamination facility for reuse, and the need for trained operators.

ESOPHAGEAL CAPSULE ENDOSCOPY

Esophageal capsule endoscopy (ECE), similar to small bowel capsule endoscopy, consists of a wireless capsule containing a camera, battery, and radio transmitter. Images are transmitted to a digital receiver and transferred to a computer for analysis [20]. A meta-analysis of 9 studies comprising 618 patients showed a pooled sensitivity of 78% and specificity of 86% (EGD as the reference had sensitivity and specificity of 78% and 90%, respectively) [21]. The suboptimal diagnostic accuracy is attributed to rapid esophageal transit time. Newer versions of ECE have been developed to overcome this issue and allow prolonged imaging. PillCam ESO (Medtronic Inc, Minneapolis, MN) was initially approved by the Food and Drug Administration in 2004, followed by a second-generation device (PillCam ESO2, Figure 2A) with cameras at both ends of the capsule. The second-generation device captures images (PillCam ESO2, Figure 2B) at a rate of 18 frames per second (fps) [22]. A third-generation capsule (PillCam UGI) with a wider angle of view (174°) and a higher recording rate (35 fps) is under investigation, with pilot data suggesting inferiority to standard endoscopy regarding BE detection [23]. Another solution to the rapid esophageal transit issue is the detachable string magnetically controlled capsule endoscopy (also known as wireless magnetically controlled capsule endoscopy, or WMCCE), which has been shown to be feasible and well tolerated in various studies [24-26]. A recent prospective multicenter study showed a sensitivity of 92% and specificity of 80% for high-risk esophageal varices [27]. With regard to the cost-effectiveness of ECE compared to traditional endoscopy, the results have been equivocal [28-30]. More studies are needed to determine if this modality is viable and cost-effective for BE screening.
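For readers following the accuracy figures quoted throughout this review, a minimal sketch of how sensitivity and specificity are derived from a 2x2 confusion matrix is shown below; the counts are hypothetical and chosen only to reproduce the pooled 78%/86% figures quoted above.

```python
# Minimal sketch: sensitivity and specificity from a 2x2 confusion matrix.
# Counts are hypothetical, chosen only to reproduce the pooled 78%/86%
# figures quoted above for esophageal capsule endoscopy.

def sens_spec(tp: int, fn: int, tn: int, fp: int) -> tuple[float, float]:
    sensitivity = tp / (tp + fn)  # true positives among all with disease
    specificity = tn / (tn + fp)  # true negatives among all without disease
    return sensitivity, specificity

sens, spec = sens_spec(tp=78, fn=22, tn=86, fp=14)
print(f"sensitivity = {sens:.0%}, specificity = {spec:.0%}")  # 78%, 86%
```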
TETHERED CAPSULE ENDOMICROSCOPY

Another variant of ECE is optical coherence tomography tethered capsule endomicroscopy (OCT-TCE, Figure 3), which uses optical-frequency domain imaging technology to rapidly acquire high-resolution, 3-dimensional, cross-sectional images of the entire esophagus [31,32]. In a proof-of-concept study of 13 subjects (7 normal volunteers and 6 with known BE), no complications were reported, and 12/13 patients reported a preference for this method over conventional endoscopy [33]. The feasibility and safety of this method were further demonstrated in a recent multi-center study of 147 patients with known BE, and a blinded comparison of the maximum extent of BE measured by OCT-TCE and EGD showed a strong correlation (r = 0.77-0.79, P < 0.05) [34]. In this study, high-quality microscopic images of the entire esophageal wall were obtained in the majority of cases (93.7%). Larger prospective studies are needed to assess diagnostic yield and cost-effectiveness in the general population setting.

CELL COLLECTION DEVICES WITH BIOMARKERS

Various devices have been designed for esophageal cell collection. These samples can be analyzed cytologically and coupled with various biomarkers.

Cytosponge - Trefoil Factor Family Protein 3

The Cytosponge (Medtronic, Minneapolis, MN, Figure 4) is a 30 mm polyurethane sponge, compressed within a gelatin capsule and attached to a string [13]. Once the patient swallows the capsule and it reaches the stomach, the capsule opens up and reveals the sponge. As the string is pulled back, the Cytosponge collects cells from the lining of the entire esophagus and oropharynx. Although several biomarkers have been used with Cytosponge, including a multi-gene next-generation sequencing panel, differentially methylated genes, and microRNAs, the most well-established biomarker used with Cytosponge has been trefoil factor family protein 3 (TFF3) immunohistochemical staining [35-37]. In a multicenter case-control study in 11 United Kingdom hospitals (total subjects = 1,110; 463 dyspepsia controls and 647 BE patients), Cytosponge-TFF3 was performed prior to endoscopy [38]. BE was diagnosed with a specificity of 92.4% and a sensitivity of 79.9% (87.2% in patients with circumferential BE larger than 3 cm). Fitzgerald et al [39], in the BEST3 trial, a multicenter RCT in 109 United Kingdom general practice clinics, demonstrated that in patients with GERD symptoms, Cytosponge-TFF3 results in improved detection of BE, treatable dysplasia, and early cancer. In a systematic review of 13 studies, this method was shown to be cost-effective and well-tolerated across multiple patient populations [40]. Another patient-level review of 5 prospective trials assessing Cytosponge performance in patients with reflux disease, BE, and eosinophilic esophagitis also showed the tolerability and safety of this device [41]. The advantage of Cytosponge is that it is not operator-dependent, it is quick, and it does not require specialized equipment or extensive training, so it could easily be applied in a primary care setting [40].

EsophaCap

EsophaCap (CapNostics, Concord, NC, Figure 5) is a sponge-on-string device similar to Cytosponge, albeit smaller and softer [42].
EsophaCap has been used in a pilot trial using a panel of 2 methylated DNA markers (MDMs), VAV2 and zinc finger protein 682 (ZNF682), on whole esophageal brushings (49 BE cases and 36 controls), and in 40 subjects (20 BE cases and 20 controls) randomly assigned to swallow EsophaCap [42]. Overall, 80% of MDM candidates showed high accuracy for BE (AUCs 0.84-0.94), with sensitivity and specificity of 100%. EsophaCap was swallowed and withdrawn in 98% of subjects with no reported major complications, and 32% had minimal abrasions. More recently, the same group conducted a multi-center case-cohort study in which 268 subjects swallowed the capsule, of whom 201 met the inclusion criteria (112 cases and 89 controls), using the two previously mentioned MDMs and 3 additional markers (NDRG4, FER1L4, and ZNF568) [43]. Cross-validated sensitivity and specificity were 92% and 94%, respectively. EsophaCap was well tolerated in most patients (administered mostly by non-physicians), and 95% preferred the device over endoscopy. Currently, a case-control trial is ongoing to identify potential biomarkers for the early detection of BE and esophageal carcinoma (both adenocarcinoma and squamous cell carcinoma) using EsophaCap (ClinicalTrial.gov ID: NCT04214119).

EsoCheck

EsoCheck (Lucid Diagnostics, New York, NY, Figure 6) is a balloon-based sampling device which consists of a collapsible balloon attached to a thin silicone catheter connected to a syringe [13]. Once EsoCheck is swallowed and is in the stomach, the balloon is inflated by injecting air into the catheter and is withdrawn through the distal 3-6 cm of the esophagus, collecting epithelial cells. After sampling the area described above, the balloon is deflated, which leads to its retraction into the capsule, thereby protecting the sample from bio-contamination from the mid or proximal esophagus as well as the oropharynx. Several biomarkers, including MDMs, have been used with EsoCheck. In a pilot study, Moinova et al [44] performed a genome-wide screen and identified high-frequency methylation within the CCNA1 DNA locus. They tested CCNA1 and VIM DNA methylation (the latter already an established BE biomarker) using EsoCheck in 173 individuals with or without BE and showed an AUC of 0.95 for discriminating metaplasia and neoplasia vs normal individuals for each marker; with both biomarkers combined, the panel had a sensitivity of 95% and a specificity of 91%. The results were replicated in an independent validation cohort of 149 subjects. The device was generally well tolerated, but 28 (18%) of subjects could not swallow the pill and 9% had poor DNA yield. A new multi-center, single-arm trial is underway to study the screening efficacy of a new-generation EsoCheck device in combination with EsoGuard (a 2-marker MDM panel) in an at-risk population (ClinicalTrial.gov ID: NCT042293458).

Circulatory MicroRNAs

MicroRNAs (miRNAs) are short (approximately 18-25 nucleotides in length) non-coding RNAs which regulate gene expression by binding to mRNAs to inhibit their translation or facilitate their degradation [45]. miRNAs play a role in cell growth, differentiation, and migration and can be dysregulated in malignancy [46]. Several miRNAs have been shown to be differentially expressed in patients with BE.
In a quantitative real-time PCR analysis of 60 diseased/normal paired tissues from 30 patients with esophagitis or BE, miR-143, miR-145, miR-194, and miR-215 were significantly higher, while miR-203 and miR-205 were lower, in BE tissues [47]. Analysis of circulating miRNA levels confirmed that miR-194 and miR-215 were significantly upregulated in BE patients. Additionally, serum miR-130a was also shown to be elevated in BE and EAC patients [48]. Pavlov et al [49], in a study of 69 patients, showed serum miR-320e and miR-199a-3p to be significantly lower in BE compared to patients with normal epithelium. Investigators have recently reported miR-4485-5p as a novel biomarker of esophageal dysplasia worthy of continued investigation [50]. These markers provide a non-invasive method, requiring only a patient's peripheral blood sample and likely increasing acceptability and tolerability, but validation studies are needed in larger cohorts to demonstrate adequate sensitivity as well as specificity for widespread use.

Volatile Organic Compounds

Detection of cancer through exhaled breath using volatile organic compounds (VOCs) has shown promising results for various cancers [51,52]. Two techniques have been used in patients with BE or esophageal cancer: gas chromatography-mass spectrometry and the electronic nose (E-nose) apparatus [53-56]. The gas chromatography method was used to analyze exhaled breath samples from 81 patients (including 48 esophageal cancer patients) and 129 controls (including 16 patients with BE), and although this method was able to discriminate esophageal cancer from controls (AUC = 0.97), it was not able to identify patients with BE [56]. The drawback of this method is that it is costly and labor-intensive. The electronic nose apparatus, which consists of an array of 3 metal oxide sensors, uses a chemical-to-electrical interface to measure VOC profiles associated with various diseases and can be combined with machine learning [57]. In a cross-sectional study of 122 patients with dysplastic BE, breath samples taken in the fasting state were analyzed in real time using the E-nose device [54]. Subjects were at various stages of treatment or surveillance. The data were fed into an artificial neural network to discriminate differences among subjects stratified by the presence or absence of BE on biopsies. The test showed the ability to detect BE with a sensitivity of 82%, a specificity of 80%, and an accuracy of 81% (AUC = 0.79). In another proof-of-concept study of 402 patients (129 patients with BE and 141 patients with GERD symptoms), 5-minute breath samples were collected, and the test was able to identify BE patients with a sensitivity of 91% and a specificity of 74% [53]. Other advantages of the E-nose compared to gas chromatography-mass spectrometry are cost and portability. More validation studies at the general population level are needed for use as a potential BE screening method.

FUTURE TRENDS

Circulating tumor DNA (ctDNA), minuscule amounts of fragmented DNA originating from tumor cells, has been proposed as part of the multi-cancer early detection (MCED) project. The Circulating Cell-free Genome Atlas trial, a prospective, case-controlled observational study, showed that a blood-based MCED test using ctDNA in combination with machine learning could detect cancer signals for multiple cancer types and predict the cancer signal origin with promising accuracy [58].
Most recently, a large multicenter MCED trial called the PATHFINDER study began recruiting participants across 31 United States sites to examine DNA methylation patterns in blood samples to detect various cancers, including esophageal cancers [59]. Since BE is a pre-malignant condition, more specific studies need to be conducted in the future to assess feasibility.

SURVEILLANCE

Patients with BE are currently recommended to enter an endoscopic surveillance program based on histological findings and degree of dysplasia [14,60]. This has been associated with a significant mental burden on these patients, presenting as various forms of anxiety and stress related to thoughts of disease progression as well as the potential implications of the test, with many finding the program physically burdensome and intrusive [61]. This opens up an opportunity for non-invasive surveillance options, which could even be performed in the setting of regular outpatient clinic visits. These modalities can potentially facilitate access to care for a larger population of patients and increase compliance, similar to non-invasive tests in colorectal cancer screening programs [62]. As such, non-endoscopic cell collection methods have the potential for use in BE surveillance, but so far, no studies have compared their use with conventional endoscopy for the purpose of BE surveillance, leaving this important area ripe for future investigation.

CONCLUSION

Despite advances in screening and surveillance programs for BE, their lack of efficiency is demonstrated by the fact that only one in ten cases of EAC is diagnosed within a surveillance program. Upper endoscopy remains the gold standard, but barriers exist to its functioning as an effective screening tool, which include high cost, invasiveness, possible complications, need for a trained specialist, and patient desirability. Screening guidelines have been published by several societies that are based on risk factors and do not include the general population. Less invasive modalities have been proposed for BE screening. ctDNA is now being used as part of a "multi-cancer early detection" campaign, which may play a role in the detection of pre-cancer states such as BE. These non-invasive methods could also play a role as a surveillance tool once patients are identified with BE. Large future studies are desired to demonstrate the efficacy and feasibility for the BE surveillance population.

FOOTNOTES

Author contributions: Shahsavari D performed the majority of the writing and prepared the tables; Kudaravalli P contributed to the writing and prepared the figures; Yap JEL provided input in writing the paper; Vega KJ designed the outline, coordinated the writing of the paper, edited the paper for intellectual content, and is the guarantor.
Effects of erythromycin on pressure in the pyloric antrum and plasma motilin and somatostatin content in dogs

Subject headings: erythromycin/pharmacology; somatostatin/blood; motilin/blood; pyloric antrum/drug effects

INTRODUCTION

Erythromycin (EM) is a potent agonist of motilin (MTL) receptors [1]. EM may enhance gastroenteric motility by binding to MTL receptors [2]. However, the effects of EM on the pyloric antrum and their mechanism are not clear. The purpose of this study was to investigate the relation between EM, plasma MTL and somatostatin in the regulation of pyloric sphincter muscle function in dogs.

MATERIALS AND METHODS

A randomized study was performed using male and female dogs weighing between 11 kg and 19 kg. Before the operation, the dogs were fasted for 24 h. Animals were anaesthetised with an i.v. injection of pentobarbital (2.5%, 1 mg/kg). An upper midline abdominal incision was performed. The anterior wall of the gastric antrum was incised for about 0.5 cm, and the tube of the gastric pressure meter (WYY-1 type, Spaceflight Medicine Engineering Research Institute) was inserted and fixed. The pressure trace was recorded by the pressure transducer. Erythromycin lactate was dissolved in 5% glucose solution and transfused i.v. (5 mg/kg per hour). Isoptin (1 mg/kg) was injected i.v. at 60 min. At 90 min, i.v. atropine sulfate (0.1 mg/kg) was given. During the 120 min of pre- and post-infusion, pressure was measured for 2 min and a 1 ml blood sample was collected from the dog's femoral vein every 15 min. Nine blood samples (1 ml each) were collected from each dog. EDTA-Na2 (3 ml) and 200 KIU aprotinin were added to the samples. Blood samples were immediately centrifuged at 3 500 r/min for 15 min at 4 °C. The plasma was stored at -70 °C. MTL and SS were assayed by radioimmunoassay (MTL and SS radioimmunoassay reagent kits, East-Asia Immune Technique Research Institute). The concentrations of MTL and SS in blood were counted with an FJ-2003-50G counter.

Statistical analysis

Statistical analysis was carried out using the Statistical Analysis System. When a significant analysis of variance was found, Student's t test between two samples was performed.

RESULTS

The changes of pyloric pressure and the concentrations of MTL and SS in plasma before and after i.v. transfusion are shown in Table 1. Our results showed that the dog pyloric antrum basic pressure, total pressure and wave amplitude significantly increased after administration of EM. The interval time of high pressure wave amplitude was reduced and the frequency increased. After i.v. injection of the antagonists isoptin and atropine, the pyloric pressure was inhibited rapidly. The level of MTL in the plasma of dogs and the change of the pyloric pressure induced by EM were significantly related, and both were also influenced by atropine and isoptin. The concentration of SS in the plasma of dogs was increased after EM administration and was not inhibited by atropine or isoptin.
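A minimal sketch of the two-sample Student's t test named under Statistical analysis is shown below, treated here as a paired pre-/post-infusion comparison (an assumption); the pressure values are hypothetical placeholders, not the study's data.

```python
# Minimal sketch of the Student's t test named under "Statistical analysis",
# treated here as a paired pre-/post-infusion comparison (an assumption).
# Pressure values are hypothetical placeholders, not the study's data.

from scipy import stats

pre_infusion = [4.1, 3.8, 4.5, 4.0, 3.9, 4.2]    # kPa, hypothetical
post_infusion = [6.3, 5.9, 6.8, 6.1, 6.0, 6.5]   # kPa, hypothetical

t_stat, p_value = stats.ttest_rel(pre_infusion, post_infusion)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # small p -> significant change
```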
DISCUSSION

EM is one of the most common antibiotics. Investigating the gastroenteric side effects of EM, Pilot and Itol et al [3] found that i.v. infusion of EM might mimic the MTL-induced migrating synthetical electric current (or contraction) of muscles during dog digestion. The effect of EM was similar to that of MTL in vivo and in vitro; EM competitively inhibited the binding of MTL to its receptors, and therefore EM is considered one of the agonists of MTL receptors. Sarna et al [4] found that i.v. EM at 1-3 mg/kg per hour (far below the antibiotic dose) might induce the phase III migrating synthetical electric current of muscles, beginning at the stomach and migrating downward, which was related to MTL release. The main physiological action of MTL is to enhance gastroenteric motility and increase the gastric and pyloric pressure. With i.v. EM at 5 mg/kg per hour, the pyloric pressure increased immediately, suggesting that EM could increase the pressure in the pyloric antrum and that the response was highly sensitive; the effect of EM was related to MTL. We believe that EM might be used to treat gastroduodenal reflux diseases in the future. It has been found that the effect of EM on the dog stomach, duodenum and gall duct might be inhibited by atropine [5], indicating that EM acts on prejunctional cholinergic receptors. Some studies have suggested that EM and MTL have a similar gastroenteral action and species specificity. EM could induce the contraction of rabbit gastric smooth muscles, and this was not inhibited by atropine. There have been many investigations on the effects of gastroenteric hormones. Pyloric pressure was increased by duodenal infusion of HCl or Florence oil in humans or dogs, suggesting that gastroenteric hormones could regulate pyloric motion. We observed for the first time the plasma SS changes, and found that the plasma SS level was not blocked by isoptin or atropine after i.v. EM, and that the plasma SS was higher in the late stage. This phenomenon may be related to autoregulation in vivo in order to maintain the balance among gastroenteral hormones like MTL and normal pyloric pressure. In summary, this study suggested that EM may increase the pressure in the pyloric antrum. The effect may be related to the plasma motilin and somatostatin levels.
Construction of the miRNA-mRNA Regulatory Networks and Explore Their Role in the Development of Lung Squamous Cell Carcinoma

Purpose: MicroRNA (miRNA) binds to target mRNAs and inhibits gene expression post-transcriptionally. It plays an essential role in regulating gene expression, the cell cycle, and biological development. This study aims to identify potential miRNA-mRNA regulatory networks that contribute to the pathogenesis of lung squamous cell carcinoma (LUSC).

Patients and Methods: miRNA microarray and RNA-Seq datasets were obtained from the Gene Expression Omnibus (GEO), The Cancer Genome Atlas (TCGA), miRCancer, and dbDEMC databases. The GEO2R tool and the "limma" and "DESeq" R packages were used to perform differential expression analysis. Gene enrichment analysis was conducted using the DAVID, DIANA, and Hiplot tools. The miRNA-mRNA regulatory networks were screened from the experimentally validated miRNA-target interaction databases (miRTarBase and TarBase). External validation was carried out in 30 pairs of LUSC tissues by quantitative reverse transcription PCR (qRT-PCR). Receiver operating characteristic (ROC) curve and decision curve analysis (DCA) were conducted to evaluate diagnostic value. Clinical, survival, and phenotypic analyses of the miRNA-mRNA regulatory networks were further explored.

Results: We screened 5 miRNA and 10 mRNA expression datasets from GEO and identified 7 DE-miRNAs and 270 DE-mRNAs. After database screening and correlation analysis, four pairs of miRNA-mRNA regulatory networks were screened out. The miRNA-mRNA network of miR-205-5p (up) and PTPRM (down) was validated in 30 pairs of LUSC tissues. miR-205-5p and PTPRM have good diagnostic efficacy, are expressed differently across clinical features, and are related to tumor immunity.

Conclusion: This research identified a potential miRNA-mRNA regulatory network, providing a new way to explore the genesis and development of LUSC.

INTRODUCTION

Lung cancer is the leading cause of cancer death worldwide, accounting for approximately 18% of all cancer deaths (Bray et al., 2018). Lung cancer has the highest morbidity and mortality rates in China, accounting for 24.7% and 29.4%, respectively. There are four main histological types of lung cancer: adenocarcinoma, squamous cell carcinoma, small cell carcinoma, and large cell carcinoma, of which lung squamous cell carcinoma (LUSC) is one of the major histological subtypes (Travis et al., 2015). Survival rates for lung squamous cell carcinoma remain low, with 5-year survival rates of only 10-20% in most parts of the world. The diagnostic stage is a significant determinant of the prognosis of lung squamous cell carcinoma, and the detection of molecular markers for early diagnosis, prognosis, and therapeutic targets of LUSC has become very urgent (Allemani et al., 2015). MicroRNA (miRNA) is a kind of single-stranded, non-coding small RNA molecule (containing about 22 nucleotides) found in plants, animals, and viruses. miRNA binds target mRNAs to inhibit gene expression post-transcriptionally and plays an essential role in regulating gene expression, the cell cycle, and the timing of biological development (Bartel, 2009). The expression patterns of miRNAs have been reported to have the potential to identify various types of cancers (Slattery et al., 2016; Liu et al., 2018; Zhang et al., 2019).
Even though numerous studies have been conducted on miRNA expression and function in LUSC, systematic and comprehensive analyses of the miRNA-mRNA regulatory network based on clinical samples from LUSC are still lacking. The construction of a potential miRNA-mRNA regulatory network will help reveal a relatively comprehensive molecular mechanism of LUSC. Here, we downloaded data from the GEO, TCGA-LUSC, miRCancer, and dbDEMC databases to screen for differential miRNAs (DE-miRNAs) and mRNAs (DE-mRNAs) between LUSC and normal tissue. The interactions between DE-miRNAs and DE-mRNAs were determined using the TarBase and miRTarBase databases, which are experimentally validated miRNA-target interaction databases. We further validated the DE-miRNAs and DE-mRNAs in 30 pairs of LUSC tissues by qRT-PCR. In summary, the interactions of the miRNA-mRNA regulatory networks have been researched in detail, providing new ideas and strategies for further development and application in the clinical setting for early diagnosis and better treatment.

Data Acquisition and Processing of miRNA and Gene Expression Profiles

We downloaded the miRNA and mRNA sequencing expression profiles and associated clinicopathological data of TCGA-LUSC from the GDC data portal at the National Cancer Institute (https://portal.gdc.cancer.gov/). We searched for lung squamous cell carcinoma-relevant gene microarray expression datasets in the Gene Expression Omnibus (GEO) database (http://www.ncbi.nlm.nih.gov/geo/) with the keyword "lung squamous cell carcinoma". Filters were set to "series", "Expression profiling by array", "Non-coding RNA profiling by array" and "Homo sapiens". The inclusion criterion was that a dataset contain miRNA or mRNA expression values for lung squamous cell carcinoma tumors and normal tissues. We thereby screened 5 eligible miRNA datasets and 10 eligible mRNA datasets. First, we screened for differentially expressed miRNAs (DE-miRNAs) or mRNAs (DE-mRNAs) in each dataset. The DE-miRNAs and DE-mRNAs were obtained from the microarray expression profiles using the web analysis tool GEO2R, which compares groups of samples using the GEOquery and limma R packages from the Bioconductor project (http://www.ncbi.nlm.nih.gov/geo/geo2r/). The cut-off criteria were a p-value < 0.05 and |log2(fold change)| ≥ 1. Then, we summarized the differential miRNAs and mRNAs screened out from each dataset. We also collected differentially expressed miRNAs from miRCancer and the Database of Differentially Expressed MiRNAs in human Cancers (dbDEMC) (Xie et al., 2013; Yang et al., 2017). An overview of the workflow steps is shown in Figure 1.
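As a minimal sketch (not the authors' exact pipeline) of applying the stated cut-offs, p-value < 0.05 and |log2(fold change)| ≥ 1, to a GEO2R/limma-style result table; the column names and values are assumptions for illustration.

```python
# Minimal sketch (not the authors' exact pipeline): applying the stated
# cut-offs, p < 0.05 and |log2(fold change)| >= 1, to a GEO2R/limma-style
# result table. Column names and values are illustrative assumptions.

import pandas as pd

results = pd.DataFrame({
    "gene":    ["miR-205-5p", "miR-140-3p", "miR-21-5p"],
    "logFC":   [2.8, -1.6, 0.4],      # hypothetical log2 fold changes
    "P.Value": [1e-6, 3e-4, 0.20],    # hypothetical p-values
})

de = results[(results["P.Value"] < 0.05) & (results["logFC"].abs() >= 1)]
up = de.loc[de["logFC"] > 0, "gene"].tolist()    # upregulated in LUSC
down = de.loc[de["logFC"] < 0, "gene"].tolist()  # downregulated in LUSC
print(up, down)  # ['miR-205-5p'] ['miR-140-3p']
```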
Identification and Function Analysis of miRNA-mRNA Networks

TarBase (Karagkouni et al., 2018) (http://www.microrna.gr/tarbase) is a reference database specifically indexing experimentally supported miRNA targets. It integrates information on cell-type-specific miRNA-gene regulation, and hundreds of thousands of miRNA-binding locations are reported. miRTarBase (Chou et al., 2018) is a comprehensively annotated, experimentally validated miRNA-target interaction database for miRNA-related research. TarBase and miRTarBase were used to construct the miRNA-mRNA regulatory networks. We analyzed the correlation between miRNA and mRNA expression in TCGA-LUSC, and screened the miRNA-mRNA regulatory networks according to the results. The online program DAVID (http://david.abcc.ncifcrf.gov/), a database for annotation, visualization and integrated discovery, is a comprehensive tool for researchers and scholars to understand the biological meaning behind sets of genes. We used DAVID, DIANA-mirPath (Vlachos et al., 2015), a miRNA pathway analysis web server, and Hiplot (https://hiplot.com.cn), a comprehensive web platform for scientific data visualization, to perform Gene Ontology (GO) functional analysis and Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway analysis. Single-sample gene set enrichment analysis (ssGSEA) is an extension of the GSEA method that defines an enrichment score representing the absolute degree of enrichment of a gene set in each sample within a given dataset (Hoadley et al., 2018; Xiao et al., 2020). The ssGSEA data for TCGA-LUSC were downloaded from UCSC Xena (https://xena.ucsc.edu/) to analyze the possible enrichment pathways of the DE-miRNAs and DE-mRNAs.

Total RNA was extracted from FFPE specimens using the TIANGEN RNAprep Pure FFPE kit (Tiangen, Beijing, China). The RNA acquired from each sample was eluted in 40 μl of RNase-free water and stored at −80°C until use. The concentration and purity of the RNA were analyzed with a NanoDrop 2000 spectrophotometer (NanoDrop Technologies, Wilmington, DE, United States) (A260/A280 = 1.8-2, A260/A230 > 1.7).

Quantitative Reverse Transcription PCR (qRT-PCR) Assay

According to the manufacturer's instructions, external validation was performed by qRT-PCR using the PrimeScript RT reagent Kit (Takara) and SYBR Premix Ex Taq II (Takara) after adding a poly(A) tail to the RNA with a Poly(A) Polymerase Kit (Takara). The sequences of the PCR primers are listed in Supplementary Table S1. The reactions were run on a qTOWER³ 84 (Analytik Jena) at 95°C for 20 s, followed by 40 cycles of 10 s at 95°C and 20 s at 60°C. The expression levels of miRNAs and mRNAs in tissue were calculated using the 2^−ΔΔCt method (Livak and Schmittgen, 2001) (U6 as the endogenous reference miRNA for sample normalization; 18S rRNA as the endogenous reference mRNA for sample normalization; ΔCt = Ct_miRNA/mRNA − Ct_normalizer; Ct: the threshold cycle).
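A worked sketch of the 2^−ΔΔCt calculation described above follows; the Ct values are hypothetical.

```python
# Worked sketch of the 2^-ddCt quantification described above. Ct values
# are hypothetical; U6 is the miRNA normalizer, as stated in the text.

ct_mir_tumor, ct_u6_tumor = 24.0, 20.0     # hypothetical Ct values (tumor)
ct_mir_normal, ct_u6_normal = 27.0, 20.5   # hypothetical Ct values (normal)

d_ct_tumor = ct_mir_tumor - ct_u6_tumor       # dCt = Ct_target - Ct_normalizer
d_ct_normal = ct_mir_normal - ct_u6_normal
dd_ct = d_ct_tumor - d_ct_normal              # ddCt = dCt_tumor - dCt_normal
fold_change = 2 ** (-dd_ct)
print(round(fold_change, 2))  # 2^2.5 ~ 5.66-fold higher in tumor
```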
Evaluation of Interactions of miRNA-mRNA Networks and Tumor-Related Phenotypes

We downloaded the infiltrating immune cell type data from the TCGA website, calculated using CIBERSORT (https://cibersort.stanford.edu/index.php/), a general computational method used to quantify cell fractions from bulk tissue gene expression profiles (GEPs). Combining support vector regression with prior knowledge of the expression profiles of purified white blood cell subsets, CIBERSORT can accurately estimate the immune composition of a tumor biopsy (Chen et al., 2018). The stromal and immune levels of the TCGA-LUSC samples were evaluated using the ESTIMATE (Estimation of STromal and Immune cells in MAlignant Tumour tissues using Expression data) software. This method uses gene expression signatures to infer the fraction of stromal and immune cells in tumour samples (Yoshihara et al., 2013). We used the UCSC Xena platform (https://xena.ucsc.edu/) to obtain the methylation levels of CpG sites in TCGA-LUSC samples, detected with the Illumina Infinium HumanMethylation450 BeadChip platform. The number of somatic variants per megabase (Mb) of the genome was measured as the tumor mutation burden (TMB).

Statistical Analysis

We used GraphPad Prism software, IBM SPSS Statistics v26 (IBM Corporation, Armonk, NY, United States), and the R language v3.6.3 (https://cran.r-project.org/) for data analysis. The t-test was performed to analyze the DE-miRNAs and DE-mRNAs. The statistical significance criteria for DE-miRNAs and DE-mRNAs were |log2FC| > 0.58 and p < 0.05. The area under the ROC curve (AUC) and decision curve analysis (DCA) based on logistic regression were used to evaluate the diagnostic efficacy of the miRNA-mRNA networks. The Pearson correlation method was used to calculate the correlation between DE-mRNAs or DE-miRNAs and tumor-related phenotypes. Prognostic analysis was performed using the "survival" package.
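A hedged sketch of the logistic-regression ROC evaluation described above is given below, combining the two markers; the expression values are hypothetical placeholders.

```python
# Hedged sketch of the logistic-regression ROC evaluation described above,
# combining two markers. Expression values are hypothetical placeholders.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# rows = samples; columns = [miR-205-5p, PTPRM] relative expression
X = np.array([[5.1, 0.4], [4.8, 0.6], [5.5, 0.3],   # tumor samples
              [1.2, 1.9], [0.9, 2.1], [1.1, 1.8]])  # normal samples
y = np.array([1, 1, 1, 0, 0, 0])                    # 1 = LUSC, 0 = normal

model = LogisticRegression().fit(X, y)
auc = roc_auc_score(y, model.predict_proba(X)[:, 1])
print(f"AUC = {auc:.2f}")  # separable toy data -> AUC = 1.00
```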
Identification of Differentially Expressed miRNAs and mRNAs in LUSC

We screened 5 miRNA and 10 mRNA expression datasets from GEO; the information on the 15 GEO datasets is shown in Table 2. The log2 fold change (LUSC vs. NC) was used to screen the DE-mRNAs and DE-miRNAs.

Screening of miRNA-mRNA Regulatory Networks Associated With LUSC

We further screened the miRTarBase and TarBase databases to establish potential miRNA-mRNA regulatory networks, and screened out 5 miRNA-mRNA regulatory networks (Figure 3A). The complete prediction network of miR-205-5p is shown in Supplementary Table S3. Then, we filtered out 4 miRNA-mRNA networks with significant correlation (adjusted p-value < 0.05) in TCGA-LUSC, listed in Figure 3B, with the full statistical results listed in Supplementary Table S4.

Validation of the Expression of miRNAs and mRNAs in LUSC Tissue

In order to confirm the differential expression of the DE-miRNAs and DE-mRNAs, we further validated them in 30 pairs of matched tumors and adjacent normal tissues by qRT-PCR. As shown in Figure 4, the expression of miR-205-5p (p < 0.0001) and UBE2C (p = 0.0093) was upregulated in tumor tissues, while the expression of miR-140-3p (p = 0.0293), PTPRM (p < 0.0001), GPD1L (p = 0.0002), and FOXF2 (p < 0.0001) was downregulated in tumor tissues. At the same time, miR-182-5p (p = 0.2286) and miR-210-3p (p = 0.0879) showed no significant difference in tumor tissues compared with normal tissues. Spearman correlation analysis of the interaction between the DE-miRNAs and DE-mRNAs showed that miR-205-5p was significantly correlated with PTPRM (p = 0.0186, r = −0.3031). IHC images from the HPA database showed that PTPRM expression in LUSC was lower than in normal controls (Supplementary Figure S2).

Correlation Analysis of LUSC Clinical-Pathological Features and Prognosis With miRNA and mRNA Expression Levels

Based on the analysis of FIGO stages, there was no statistically significant difference in the expression of miR-205-5p and PTPRM between the early stage (I) and advanced stages (II + III + IV) (Supplementary Figures S3A,B). The expression of miR-205-5p in male patients was higher than that in female patients (Supplementary Figures S3C,D), and PTPRM was higher in patients over 68 years of age (Supplementary Figures S3E,F). Univariate and multivariate Cox regression analyses were used to estimate the hazard ratios (HR) of different clinical features in TCGA-LUSC. K-M survival analysis and Cox regression analysis showed that miR-205-5p and PTPRM were not associated with prognosis (Supplementary Figure S4).

Analysis of Tumor-Related Phenotypes Associated With the miRNA-mRNA Network

We downloaded the ssGSEA enrichment scores for the TCGA-LUSC data from UCSC Xena, and then analyzed the correlation between the expression values of miR-205-5p and PTPRM and the enrichment scores. The results showed that miR-205-5p and PTPRM were highly correlated with immune-related pathways, such as TACI and BCMA stimulation of B cell immune responses and translocation of ZAP-70 to the immunological synapse (Figure 6A). Therefore, we further analyzed their correlation with immune cells to explore their role in tumor immunity. We conducted a differential analysis of the immune cell data in TCGA-LUSC and found that 13 types of immune cells differed between tumor and normal tissues, as listed in Supplementary Table S5. Then, correlation analyses of the differential immune cells with miR-205-5p and PTPRM were carried out. As shown in Figure 6B, the expression of miR-205-5p and PTPRM was correlated with activated dendritic cells. We used CIBERSORT to calculate the proportion of various immune cells in each TCGA-LUSC sample, ESTIMATE to estimate the stromal and immune levels of the samples, and the UCSC Xena platform to obtain the methylation levels of CpG sites in TCGA-LUSC samples. In conclusion, miR-205-5p and PTPRM interact with DNA methylation, tumor immunity and inflammation in the tumor microenvironment. miR-205-5p and PTPRM had no correlation with TMB (Figure 6C).

DISCUSSION

The functional pattern of the miRNA-mRNA regulatory network has been demonstrated in the occurrence and progression of various human diseases, including cancer (Muller and Nowak, 2014; Wang et al., 2017; Cheng et al., 2018; Chen et al., 2020). In the current work, we aimed to construct a potential miRNA-mRNA regulatory network in LUSC. First, we searched the GEO database to screen the DE-miRNAs and DE-mRNAs preliminarily. The DE-miRNAs and DE-mRNAs showed consistent differential expression across the 5 miRNA and 10 mRNA datasets. Then, the DE-miRNAs and DE-mRNAs preliminarily screened from GEO were further verified against the TCGA-LUSC, miRCancer, and dbDEMC databases. Ultimately, 7 DE-miRNAs (3 upregulated and 4 downregulated) and 270 DE-mRNAs (101 upregulated and 169 downregulated) showed consistent differential expression across the four databases. The miRNA-mRNA regulatory networks were screened from the experimentally validated miRTarBase and TarBase databases, and 5 miRNA-mRNA regulatory networks were screened out. We used TCGA-LUSC data to conduct correlation analysis on the initially screened regulatory networks, and finally identified 4 miRNA-mRNA regulatory networks. The DE-miRNAs are involved in various pathways related to fatty acid metabolism and degradation. Fatty acid metabolism and fatty acid degradation have recently been recognized as essential metabolic aberrations required for carcinogenesis (Swierczynski, Hebanowska, and Sledzinski, 2014). Energy metabolism reprogramming, which fuels fast cell growth and proliferation by adjustments of energy metabolism, has been considered an emerging hallmark of cancer (Hanahan and Weinberg, 2011). In cancer cells, the biosynthesis of fatty acids often increases to meet the needs of lipid synthesis for membranes and signaling molecules. Cancer cells usually accumulate more lipids in lipid droplets than normal cells (Catalina-Rodriguez et al., 2012).
After a series of bioinformatic analyses and external experimental verification, upregulated miR-205-5p and downregulated PTPRM in tumor tissues were finally screened out. The expression of miR-205-5p is closely associated with the incidence and development of tumors, such as head and neck cancer, ovarian cancer and breast cancer (Iorio et al., 2007; Tran et al., 2007; Wu and Mo, 2009). It has been reported that the expression of miR-205 in non-small cell lung cancer (NSCLC) tissues is significantly increased and is related to the degree of tumor differentiation, which may lead to increased proliferation and invasion of lung cancer cells and thus to cancer progression (Lebanony et al., 2009; Jiang et al., 2013; Duan et al., 2017; Jiang et al., 2017). PTPRM is involved in cell-cell adhesion through homophilic interactions and may play a key role in signal transduction and growth regulation (Aricescu et al., 2006). PTPRM is associated with the prognosis of cervical cancer and promotes tumor growth and lymph node metastasis. In this study, a panel combining the markers of the network was built using logistic regression analysis. The expression of miR-205-5p and PTPRM had good diagnostic efficacy in distinguishing LUSC patients from normal controls. The correlation analyses of miR-205-5p and PTPRM with ssGSEA scores and tumor-associated phenotypes showed that miR-205-5p and PTPRM appear to have a certain correlation with tumor immunity. Tumor-infiltrating immune cells, such as T cells, macrophages, and neutrophils, are critical elements of the tumor microenvironment and have shown close associations with the clinical outcomes of various cancers (Gonzalez, Hagerling, and Werb, 2018). T cells comprise different subtypes with complicated phenotypes and functions, and tumor-infiltrating T cells play an extremely important role in the immune response system. Thymic epithelial cells (TECs) are essential regulators of T cell development and selection. miR-205-5p inhibits thymic epithelial cell proliferation via FA2H-TFAP2A feedback regulation in age-associated thymus involution (Gong et al., 2020). In glioma cells, miR-205-5p regulates TGF-β1 by targeting SMAD2, thereby influencing the immune response to TGF-β1 in tumors (Flavell et al., 2010; Meulmeester and Ten Dijke, 2011; Duan and Chen, 2016). SMAD1 mediates BMP signaling and is involved in cell growth, apoptosis, morphogenesis and immune responses (Bharathy et al., 2008). In esophageal squamous cell carcinoma, miR-205-5p affects the tumor immune response by targeting SMAD1 (Liang et al., 2014). PTPRM can activate the STAT3 signaling pathway to enhance anti-cancer immune responses and rescue the suppressed immunologic microenvironment in tumors (Im et al., 2020). CpG sites were identified in a region immediately upstream of the first exon of MIR205HG and in the miR-205 locus (Ferrari and Gandellini, 2020). DNA methylation maintains the silencing of miR-205, thus playing a role in the occurrence and development of lung cancer (Tellez et al., 2011). Upregulation of FN1 reduced PTPRM by increasing its methylation, resulting in increased STAT3 phosphorylation and promoting GBM cell proliferation (Song et al., 2021). In general, miR-205-5p and PTPRM show a certain correlation with tumor immunity and global methylation, and we will further explore their roles in tumor immunity in future experiments.
Although we have conducted a comprehensive analysis and experimental verification of the miRNA-mRNA regulatory networks involved in LUSC, the internal threats to the validity of this study include the different platforms, study periods, study populations and experimental methods across the datasets, and the limited sample size of the external validation. For these reasons, we selected DE-miRNAs or DE-mRNAs within each dataset to eliminate, as much as possible, the influence of differences between datasets. We will also expand the sample size and include multi-center samples for further research in future experimental studies. Addressing the external threats to validity may require more experiments and clinical validation before these findings can be truly applied to clinical practice. In future research, we will try to cooperate with different centers and further evaluate the miRNA-mRNA regulatory networks based on larger sample sizes for more generalizable findings.
The End of the Dark Ages: Probing the Reionization of the Universe with HST and JWST

Limiting the number of model-dependent assumptions to a minimum, we discuss the detectability of the sources responsible for reionization with existing and planned telescopes. We conclude that if reionization sources are UV-efficient, minimum-luminosity sources, then it may be difficult to detect them before the advent of the James Webb Space Telescope (JWST). The best approach before the launch of JWST may be either to exploit gravitational lensing by clusters of galaxies, or to search for strong Ly-alpha sources by means of narrow-band excess techniques or slitless grism spectroscopy.

Introduction

Motivated by recent evidence that the epoch of reionization of hydrogen may have ended at a redshift as low as z ≈ 6 (e.g., Becker et al. (2001); Fan et al. (2002)), we have considered the detectability of the sources responsible for this reionization. The main idea is that reionization places limits on the mean surface brightness of the population of reionization sources. We have defined a family of models characterized by two parameters: the Lyman continuum escape fraction f_c from the sources, and the clumpiness parameter C of the intergalactic medium. The minimum surface brightness model corresponds to a value of unity for both parameters. A maximum surface brightness is obtained by requiring that the reionization sources do not overproduce heavy elements. Our general approach is applicable to most types of reionization sources, but in specific numerical examples we focus on Population III stars, because they have very high effective temperatures and, therefore, are very effective producers of ionizing UV photons (e.g., Panagia et al. (2003)). Our mean surface brightness estimates are compared to the parameter space that can be probed by existing and future telescopes, in order to help in planning the most effective surveys. A full account of our work can be found in Stiavelli, Fall & Panagia (2003).

Figure 1. The loci of surface density vs apparent AB magnitude for identical reionization sources that are either Population III (left-hand panel) or Population II stars (right-hand panel).

Results and Discussion

In Figure 1 we show the loci of the mean surface brightness of identical reionization sources as a function of their observable AB magnitude. The left panel refers to Population III sources with an effective temperature of 10^5 K, the right panel to Population II reionization sources with an effective temperature of 5 × 10^4 K. In both panels, the lower solid line represents the minimum surface brightness model, (1,1), while the upper solid line represents the global metallicity constraint Z ≤ 0.01 Z_sun at z = 6. The thin dotted lines represent the (0.5, 1) and (0.1, 30) models. The non-shaded area is the only one accessible to reionization sources that do not overproduce metals. The L-shaped markers delimit the quadrants (i.e., the areas above and to the right of the markers) probed by the GOODS/ACS survey (Dickinson and Giavalisco (2003)), the HDF/HDFS NICMOS fields (Thompson et al. (1999); Williams et al. (2000)), the Ultra Deep Field (UDF), and a hypothetical ultra-deep survey with JWST.
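Purely for orientation, and not taken from the paper: the AB magnitudes quoted in these figures follow the standard definition m_AB = -2.5 log10(f_nu) - 48.60, with f_nu in erg s^-1 cm^-2 Hz^-1, as in the sketch below.

```python
# Illustration only (standard convention, not from the paper): the AB
# magnitude of a source with flux density f_nu in erg s^-1 cm^-2 Hz^-1.

import math

def ab_mag(f_nu_cgs: float) -> float:
    return -2.5 * math.log10(f_nu_cgs) - 48.60

# A 1 nJy source (1e-32 erg s^-1 cm^-2 Hz^-1), roughly the depth regime
# relevant to an ultra-deep JWST survey:
print(f"{ab_mag(1e-32):.1f}")  # ~31.4 AB mag
```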
The thin dotted The cumulative distribution of the surface density vs apparent AB magnitude of reionization sources with luminosity functions with different knees. lines give the luminosity function for the (0.5,1) and the (0.1, 30) models. The L-shaped markers delimit the quadrants probed by the GOODS/ACS survey, the HDF and HDFS NICMOS fields, the UDF, and an ultra-deep survey with JWST, respectively. From these results it appears that if reionization is caused by UV-efficient, minimum surface brightness sources, the non-ionizing continuum emission from reionization sources will be difficult to detect before the advent of JWST. On the other hand, if the sources of reionization were not extremely hot Population III stars but cooler Population II stars or AGNs, they would be brighter by 1-2 magnitudes and thus they would be easier to detect. Finally, Figure 3 presents the required surface density as a function of the Ly-α line flux of reionization sources. The left hand panel shows the loci of identical sources for two different (f c , C) models (thin dotted lines). The top solid line identifies the global metallicity constraint. The right hand panel illustrates two different luminosity functions. The solid line refer to a luminosity function identical to that of z = 3 Lyman break galaxies, (i.e., M * ,1400 = −21.2 and α = 1.6, while the dashed line refers to a luminosity function with M * ,1400 = −17.5 and a slope identical to the local Universe slope, α = 1.1. In both panels, the L-shaped marker identifies the quadrant probed by a narrow- band excess survey at z ≥ 6 (LALA survey, Rhoads and Malhotra (2001)). The oblique marker labeled Lens represents a hypothetical 100-orbit survey with the ACS grism on a cluster of galaxies to exploit gravitational amplification. The solid bar represents the density estimated from the detection at z = 6.56 by Hu et al. (2002) while the two points with down-pointing arrows represent their upper limits. It appears that searches based on narrow-band excess techniques or slitless grisms would be promising and might lead to the detection of the reionization sources within this decade.
Psychological Factors Influencing Students' Anxiety in Speaking English

This study aimed to investigate the psychological factors influencing students' anxiety in speaking English. It used a descriptive qualitative approach. The data were acquired by using observation, open-ended questionnaires, and interviews. Nine students in the second semester at ITB Adias Pemalang participated in this research. The data were analyzed through data reduction, data display, and conclusion drawing. The findings indicated that communication apprehension, test anxiety, and fear of negative evaluation were the three main causes of the students' anxiety when speaking English. The findings of this research can serve as a guide for examining students' anxiety during their studies, in order to improve the students' motivation and confidence when speaking the target language.

INTRODUCTION

English is one of the international languages commonly used to communicate with others worldwide. In the era of globalization, where competition is open to everyone worldwide, mastering English is becoming increasingly important. One example is the ASEAN Economic Community (AEC), with its free flow of trade, services, investment, and labor among ASEAN countries. One impact of the AEC is increased competition for labor, which can lead to increased unemployment for workers with low competitiveness. The growing importance of English proficiency is reflected in various aspects of popular culture, including music, with BTS (Bangtan Sonyeondan) as a prime example. Known for their captivating lyrics and dynamic performances, BTS strategically incorporates English into their songs to reach a broader international audience, bridging cultural and linguistic gaps and enhancing their global appeal (Yuliana et al., 2024). One way to prepare students to face world competition is to prepare them to master English. By equipping students with strong English skills, they can better navigate and compete in the global marketplace, much like BTS leverages their bilingualism to achieve international success. Integrating English proficiency into education not only opens up more opportunities for students but also empowers them to connect with diverse cultures and perspectives, fostering a more inclusive and competitive future workforce. One of the most important skills the students must have is speaking, as it is an essential tool for effective communication with others. It conveys ideas, expresses feelings, creates information, and builds conversation. People can be said to master English fluently if they can speak English well. Speaking is one of the most important parts of learning a foreign language; as Nunan (2000:39) points out, being able to have a conversation in the target language is a key component of language proficiency. As a result, developing students' speaking skills is crucial to helping them in their jobs.
Speaking is challenging for many students because it requires interaction with others (Indriyanty, 2016). The other skills can be practiced alone, whereas speaking requires partners to respond to the student's utterances, so students should try their hardest to find someone with whom to communicate. However, that is not easy to do. Speaking is the most difficult skill, especially for students at ITB Adias Pemalang. The majority of students still struggle to communicate in English. Students tend to be passive when they are in English class. When the lecturer asks questions in English, they seem hesitant and unconfident when they have to say something or answer questions in English. Not infrequently, students stammer and prefer to answer in Indonesian. Students experience considerable anxiety in the classroom when they have to speak in English. A lack of confidence and fear of making mistakes are just two of the many factors that need to be taken into account when explaining why students are reluctant to speak in English (Boonkit, 2010). Furthermore, according to Thornbury (2005: 28), anxiety and speaking failure might be caused by a lack of vocabulary, poor grammar, and a fear of making mistakes.

Based on observations during the teaching-learning process and on interviews, it was found that the students had anxiety about speaking English. Students faced several problems in speaking English, such as their belief that English is challenging, as well as difficulties with grammar, intonation, and pronunciation. The students' anxiety made them underestimate themselves, owing to feeling afraid, worried, frustrated, shy, or nervous about speaking in English classes. These problems are classified as psychological. Psychological factors can cause students' antipathy to speaking the target language (Sari, 2022). According to Horwitz et al. (1986), anxiety is one of the psychological issues that students face and that affects their courage when speaking the language they are studying. Anxiety includes emotions such as tension, worry, and uneasiness when speaking in English. It arises when someone is forced to deal with particular issues or subjects. Anxious students also have trouble comprehending and communicating with others when they speak. However, Hanim (2018) states that if students can raise their motivation and confidence to speak, they will no longer be anxious about expressing their ideas. It is impossible for students to become good speakers if they carry a great deal of anxiety (Anggini and Arjulayana, 2011).

Several previous studies on foreign language anxiety in speaking performance form the background of this study. Nijat et al. (2019) conducted research in Malaysian primary schools and concluded that the majority of students were victims of common psychological factors such as fear in class, shyness, and anxiety. Most pupils were not confident speaking because they feared speaking in English. Additionally, Sulistyowati (2023) reported that students experience anxiety, uneasiness, nervousness, and fear when speaking English as a second language; during class presentations and conversations, their speaking motivation and fluency were affected by their anxiousness.

Moreover, there is a correlation between students' speaking anxiety and their skill level. Andita et al.
(2019) stated in their study that there was a significant correlation between students' speaking skill and their anxiety: students find it harder to speak English when they have high anxiety. To determine whether there was a relationship between university students' anxiety when speaking English as a second language (ESL) and demographic characteristics such as mother tongue, parents' educational background, and SPM English results, Sim et al. (2020) carried out quantitative research on the subject. The outcome demonstrated a positive relationship between university students' ESL speaking anxiety and their SPM English language scores. Meanwhile, Platika and Adnan's study in 2021 showed a moderate, negative correlation between students' speaking anxiety and their speaking ability. In addition, Darojah and Aminin (2023) reported a strong positive correlation between students' speaking abilities and their sense of self-efficacy: students who strongly believe in their ability can speak well.

To deal with the conditions and situations described above, students' anxiety about speaking English needs to be investigated from the students' perspective. This study focuses on students' anxiety about speaking English in the classroom. The results of this research are expected to enrich the study of students' anxiety in speaking English and to contribute significantly to the theory of students' anxiety in speaking English. Furthermore, they are expected to broaden the information related to the influence of psychological factors on students' anxiety in speaking English.

RESEARCH METHOD

This research is part of a case study designed to investigate the psychological factors influencing students' anxiety about speaking English. It used descriptive qualitative methods to explain, from the students' perspective, the psychological factors that influenced their anxiety about speaking English. The research was conducted in a natural classroom setting where human behavior and events occurred. The researcher acted as data collector, observer, and interviewer. The participants were second-semester working students of the Information System Study Program of ITB Adias Pemalang: nine students, five female and four male.

The data were obtained through observation of class interaction, open-ended questionnaires, and interviews. The observation sheet was adapted from Pratiwi and Analido (2018). Observation aimed to recognize the students' condition, feelings, personal characteristics, and behavior in the classroom. The open-ended questionnaire, adapted from Horwitz et al. (1986), analyzed the students' anxiety about speaking English. The interviews were used to confirm the students' responses to the questionnaire. The data analysis was carried out in three steps: reduction, display, and conclusion drawing.

To avoid subjectivity, methodological triangulation was applied in this research, drawing on several instruments: observation sheets, interviews, and open-ended questionnaires.

RESULTS AND DISCUSSION

In the English classroom, several indications of students' anxiety in speaking English were found, such as shyness, nervousness, fear of making mistakes, and lack of confidence. These made students unable to convey their ideas orally in the classroom. According to Umisara et al.
(2021), there are three signs of students' anxiety when speaking English: general avoidance, physical actions, and cultural dependence. The classroom observation showed all three. The first sign was general avoidance: some students felt anxious about answering questions from the lecturer or other students; they stammered and could not answer the question in English, so they preferred to answer it in Indonesian. The second sign was physical action: students tended to touch objects around them, avoid eye contact with the lecturer, play with their fingers, and shake their legs when speaking English. The last sign was cultural dependence: students avoided interactions in class, were reluctant to communicate and discuss, and displayed excessive behavior such as smiling, laughing, and joking when feeling anxious.

These signs indicated that the students felt anxious, driven by anxiety factors such as communication apprehension, test anxiety, and fear of negative evaluation.

Communication Apprehension

According to Horwitz et al. (1986), "communication apprehension is a type of shyness characterized by fear or anxiety." It concerns the students' capacity to converse with others in a foreign language. The questionnaire was adapted from Horwitz et al. (1986), and its results are summarized below.

Table 1. The result of Communication Apprehension

Table 1 presents the questionnaire results on communication apprehension. The table showed that the students were feeling nervous, unconfident, shy, angry, self-conscious, and unable to understand. To confirm that communication apprehension influenced the students' anxiety in speaking English, interviews were conducted to follow up the questionnaire results. Students stated that they were afraid of making mistakes in speaking English; they were afraid of not pronouncing the words correctly, which could embarrass them by making them the target of other students' laughter. Some student statements follow:

Student 1: "Saya takut salah ngucapinnya, Miss karena kurang pandai bicara bahasa Inggris." (I'm afraid I'll pronounce it wrong, Miss, because I'm not good at speaking English.)

Student 3: "Saya takut salah ngomong Miss, malu nanti kalau ditertawakan sama yang lain." (I'm afraid I'll make a mistake, Miss. I'm ashamed of being laughed at.)

The statements presented above support the finding that some students felt anxious about speaking English because they were not fluent: they feared making errors and having their friends laugh at them. Other students stated that they were not afraid of making mistakes but felt confused about how to say things in English because of a lack of vocabulary. They knew what the teacher was saying, but they did not know how to respond in English. This indicates that the students became frightened of speaking English when they realized their ability was insufficient.

It can be concluded that communication apprehension occurred in the language classroom. The pressure of speaking English in the classroom, especially in front of the class, created anxiety. Some students kept trying to speak English despite their limited skill, but others gave up and spoke in Indonesian. This means the students must speak English regularly to practice their speaking skill; they should be required to speak English in the classroom to improve their ability, because outside the class they speak their mother tongue and do not use English.

Test Anxiety

Horwitz et al.
(1986) describe test anxiety as performance anxiety arising from the fear of failure and of making mistakes. Students fear exams, quizzes, and other assignments used to evaluate their performance, which makes them uncomfortable speaking English in the language classroom; they are afraid it will affect their grades. Furthermore, Horwitz et al. explain that students with high anxiety about an oral test experience greater difficulty in speaking and make more mistakes than students with low anxiety. The questionnaire was adapted from Horwitz et al. (1986), and its results are summarized below.

Table 2. The result of Test Anxiety

Table 2 presents the questionnaire results on test anxiety. The table showed that the students were not ready to speak English in the classroom: they felt tense and nervous, and they forgot what they were going to say in front of the class even though they had prepared it at home. To confirm that test anxiety influenced the students' anxiety in speaking English, interviews were conducted to follow up the questionnaire results.

Student 7: "Kalau pelajaran biasa tidak apa-apa Bu, tapi kalau pas presentasi itu takut gak bisa jawab pertanyaan. Nanti mempengaruhi nilai. Kan kelompok." (I am okay in the usual teaching-learning process, Miss, but in a presentation I am afraid I can't answer the questions. It will also influence my group's assessment.)

The statement above supports the finding that test anxiety was driven by fear of doing badly, because poor performance would influence their grades. Horwitz et al. (1986) state that fear of failure can hold back performance. In addition, some students stated that although they had prepared beforehand, they still could not perform well because of fear of failure and fear of making errors.

Student 5: "Saya sudah latihan Miss, tapi kalau di depan kelas itu lupa semuanya. Makanya saya bawa kertas." (I've already practiced it, but I forget everything as soon as I am in front of the class. That's why I bring a paper to read.)

Based on the explanation above, test anxiety can occur in the language classroom. The pressure of speaking English in the classroom, especially when the students knew the lecturer would assess them, created anxiety. They stated that although they had already prepared, they did not feel confident performing. In line with their statements, the students tended to stammer and be nervous during their presentations, and they reported a lack of vocabulary and a fear of mispronouncing words.

It can be established that test anxiety was a factor influencing students' nervousness when learning to speak. The students' speaking anxiety stemmed from feelings of nervousness, shyness, confusion, and discomfort when speaking English in front of the class. Test anxiety exerts a strong psychological influence on students, such as fear of being tested or of receiving low grades. These factors made the students feel pressured and forced to speak English.

Fear of Negative Evaluation

Horwitz et al. (1986) explain that the dread of unfavorable assessment has a broader scope in speaking anxiety, since it arises in numerous circumstances, such as social interactions, speaking activities, and speaking tests. The questionnaire was adapted from Horwitz et al. (1986); the response counts of the nine participants for each item are shown below.

Table 3. The result of Fear of Negative Evaluation
Item 4: I am afraid that my foreign language teacher is ready to correct every mistake I make. (responses: 2, 3, 4)
Item 5: I always feel that the other students speak the foreign language better than I do. (responses: 3, 6)
Item 6: I am afraid that the other students will laugh at me when I speak the foreign language. (responses: 3, 3, 1, 2)
Item 7: I get nervous when the foreign language teacher asks questions which I haven't prepared in advance. (responses: 6, 2, 1)

Table 3 presents the questionnaire results on fear of negative evaluation. Based on the data above, fear of negative evaluation emerged as an element that influenced students' anxiety in speaking English. Negative evaluation made students anxious to speak English; Indrianty (2016) explained that negative correction and evaluation can be a problem that makes students anxious. According to the data shown above, students were reluctant to participate in speaking English for fear of negative reactions, and this influenced the students' anxiety.
X-ray Raman Scattering of Water Near the Critical Point: Comparison of an Isotherm and Isochore

X-ray Raman spectra of liquid, sub- and supercritical water at the oxygen K-edge were measured at densities of 1.02-0.16 g cm^-3. Measurements were made along both an isotherm and an isochore passing near the critical point. As density is reduced there is a general tendency of the spectra to increasingly resemble that of the vapor phase, with, first, a well-separated low-energy peak and, eventually, at densities below the critical density, peaks appearing at higher energies corresponding to molecular transitions. The critical point itself is distinguished by a local maximum in the contrast between some of the spectroscopic features. The results are compared to computed X-ray absorption spectra of supercritical water.

Heating a liquid under pressure allows one, eventually, to pass into the supercritical region, where there is a continuous change in density between liquid and vapor phases. The neighborhood of the last point where there is a discontinuous change between the two phases, the critical point (see Fig. 1, left), is interesting for a variety of reasons. Fundamentally, the critical point is where strong density fluctuations occur on essentially all length scales, leading to unique physical properties. Practically, the properties of liquid solvents in the supercritical region facilitate different types of reactions, often without the (sometimes environmentally harmful) catalysts that can be needed in other conditions [1][2][3][4]. Water in the critical and supercritical region (critical point: T_c = 647.096 K, P_c = 22.064 MPa, ρ_c = 0.322 g cm^-3 [5]) has attracted significant attention for both fundamental and practical reasons. On the fundamental side, there is great interest in the nature of the bonding in this region: how do the hydrogen bonds, which are responsible for many of the interesting features of liquid water in ambient conditions, survive into the supercritical region? There have been several studies focusing on this topic (e.g., experimentally [6][7][8][9][10][11][12][13][14][15][16][17][18] and theoretically [19][20][21][22][23][24][25][26]). However, there remains significant debate and uncertainty, in part because experiments on water in this region (where, for example, its reactivity is sufficiently high to etch typical stainless steel) are difficult, and in part because the interpretation of spectroscopic experiments is severely complicated by the density fluctuations. In the present paper we apply X-ray Raman scattering (XRS) to investigate the behavior of water in the critical region and compare our results to ab initio calculations.

Soft X-ray absorption spectroscopy (XAS) [27,28] and XRS [29] have revealed the details of hydrogen bonding (HB) environments in water by assigning the observed spectral features near the oxygen K-edge [17,30-35]. (Non-resonant) XRS is an alternative to XAS. Under the condition q r_0 ≪ 1 (r_0: radius of the core state), the dipole contribution is dominant and has the same matrix element as absorption; in this case, the spectral shape of XRS is proportional to the XAS cross section. The higher energy of XRS experiments (6-12 keV), as compared to XAS work (~540 eV), means that it is much easier to penetrate into complex sample environments, which is very important for investigating supercritical water (SCW).

The experiments were performed on the Taiwan beamline BL12XU [36] at SPring-8 in Japan.
Incident radiation was monochromatized by a Si(400) reflection (−, +, +, −) four-bounce high-resolution monochromator and focused onto the sample to a spot size of 80 (V) × 120 (H) µm^2. The scattered radiation was analyzed using a 2-m-radius spherically bent Si(555) analyzer crystal in a Rowland circle geometry. The analyzer angle was fixed at 88.5°, corresponding to an analyzer energy of 9888.8 eV. The incident photon energy was scanned from 10421 to 10437 eV. The total energy resolution was 260 meV (FWHM of the quasi-elastic line of the sample) measured at the analyzer energy. The energy scale was calibrated using scans of the tantalum L_3 (9881.1 eV) and rhenium L_3 (10535.3 eV) absorption edges. The sample cell was custom designed for inelastic X-ray scattering experiments, with an inner chamber made of Hastelloy-X alloy with diamond windows to avoid reaction with the SCW. For this work, we chose a 3 mm sample length, corresponding to the 1/e absorption length for water of density 0.6 g cm^-3. The incident and outgoing angular acceptances were 22°. This cell was then placed inside a vacuum chamber (to aid thermal control and reduce air scatter) with polyimide windows. The sample was ultra-pure water. Its thermodynamic conditions were maintained by heating the inner sample cell and providing pressure with an external hand press. As the sample density is extremely sensitive to temperature and pressure, especially near the critical point, the pressure was continuously monitored throughout the experiment, and the temperature was controlled using an Inconel-covered thermocouple in direct contact with the water in the cell (just to one side of the X-ray path). Both the pressure and temperature measurement systems were carefully calibrated, with the temperature gauge expected to be accurate to 0.4% and the pressure to ±0.1 MPa and 0.1%. During scans, measured temperatures were maintained to within ±0.1 K and pressures to within ±0.05 MPa.

Measurements were made at an 18° scattering angle, or q ≃ 15.7 nm^-1. This satisfies the dipole scattering approximation, with q r_1 ∼ 0.1 ≪ 1 (r_1 = a_0/Z is the radial extent of the oxygen 1s wave function, where a_0 is the Bohr radius and Z is the effective nuclear charge for the orbital). The acceptance of the analyzer was 2.5 nm^-1. Scans over the full energy range, 532-548 eV energy transfer, were typically 90 minutes long (including a check of the elastic peak), and, depending on the conditions (e.g., sample density), typically 10 to 20 scans were measured at each set of thermodynamic conditions, with, sometimes, a shorter range chosen to use beam time efficiently. For each scan, the signal was normalized by the incident intensity, and backgrounds were subtracted assuming an exponential energy dependence, i.e., primarily the Compton tail of the diamond windows. The spectra were then normalized to unity integral over 532 to 548 eV energy transfer, with a smooth continuation used if the measured spectra did not cover the full range. The measured conditions are indicated on the phase diagram in Fig. 1, with the precise conditions given in Table I.

Fig. 2(A) and (B) present the measured spectra. The XRS/XAS spectra can be divided into three regions: (I) pre-edge (533-535 eV), (II) main-edge (535-538 eV), and (III) post-edge (> 538 eV). Fig. 2(A) shows the isothermal response at the critical temperature and Fig. 2(B) the response along the isochore at the critical density, both passing close to the critical point (spectrum (d)).
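Read literally, the per-scan reduction just described amounts to three array operations. The sketch below is a minimal illustration of those steps and of the quoted dipole-approximation check, not the actual beamline pipeline; the function name, the choice of background points below the 533 eV pre-edge onset, and the effective charge Z_eff ≈ 7.7 for the O 1s orbital are our assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.integrate import trapezoid

# Dipole check quoted in the text: q ~ 15.7 nm^-1 and r_1 = a_0 / Z_eff
Q_NM = 15.7
R1_NM = 0.0529 / 7.7   # Bohr radius / assumed effective charge for O 1s
print(Q_NM * R1_NM)    # ~0.1, well inside the dipole regime

def reduce_xrs_scan(energy_eV, counts, i0, norm_range=(532.0, 548.0)):
    """Single-scan reduction: I0 normalization, exponential background
    subtraction, then normalization to unit integral over norm_range."""
    s = counts / i0
    below_edge = energy_eV < 533.0  # assumed background region below the pre-edge
    expo = lambda e, a, b: a * np.exp(-b * (e - energy_eV[0]))
    (a, b), _ = curve_fit(expo, energy_eV[below_edge], s[below_edge],
                          p0=(s[below_edge].mean(), 0.01))
    s = s - expo(energy_eV, a, b)
    sel = (energy_eV >= norm_range[0]) & (energy_eV <= norm_range[1])
    return s / trapezoid(s[sel], energy_eV[sel])
```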
The clearest trend is visible along the isothermal line, showing a gradual progression from the relatively blurred-out spectra at high density (b), similar to bulk water, to something closer to the gas-phase spectrum at low density (f). There is also a shift, by about 0.5 eV, to lower energy of the pre-edge when the temperature is increased from 293 K to 653 K (spectrum (a) to (b)). The lowest-energy peak corresponds, in the gas phase, to the 1s → 4a_1 transition of the H_2O molecule, which has an excited-state wave function spread out over both hydrogen atoms and a separate portion about the oxygen atom. This peak becomes more distinct with decreasing density or pressure. Meanwhile, there are also changes in the high-energy part of the spectrum, but these are hard to quantify, especially in the absence of a rather good model.

The trends along the isochore, varying from 1.01 to 1.19 T_c, are not as strong. The spectrum at the critical point, (d), is distinguished from the others by having a relatively strong contrast between the low-energy (presumably 1s → 4a_1) peak and the high-energy part of the spectrum. This is brought out in more detail in Fig. 3, where the difference between the maximum and minimum intensities is plotted. The contrast in spectrum (d) is stronger than that in spectra (c) and (e) (at T_c but, respectively, at higher and lower pressure) and (h) (on the high-temperature side of the critical isochore). This contrast is due to an increase in the height of the low-energy peak relative to the high-energy edge. If one considers the response in the critical region to be the sum over the spectral response of clusters of various sizes, improved contrast in the conditions where the broadest distribution of cluster sizes is expected is surprising. Thus these measurements might indicate that we are seeing effects from electronic motions on time scales comparable to that of the XRS process.

Our data are noticeably different from those of a recent publication [17] taken in nominally similar thermodynamic conditions. This difference might be the result of a large non-dipole contribution in Ref. [17] (they had q r_1 ∼ 0.45) or of other experimental differences (they had somewhat worse, ∼1 eV, energy resolution and oxygen-containing sapphire windows). If we apply their scaling argument, the size of the peak at 534 eV suggests that about 73(10)% of the molecules are nearly gas-like in spectrum (c) (the point closest to the conditions stated in Ref. [17]) and about 85(10)% in spectrum (d), a much larger gas-like percentage than the 35(20)% of Ref. [17]. However, such a scaling argument probably over-simplifies the problem, and detailed simulations are really required to interpret the results.

Simulations were performed using Car-Parrinello molecular dynamics (CPMD) [38] in the microcanonical ensemble with density functional theory. Initial structures were obtained from DL_POLY [39] simulations for densities of 0.73, 0.64, 0.55, 0.38 and 0.35 g cm^-3 at 653 K using the TIP2P water model. An exchange-correlation functional [40] and norm-conserving pseudopotentials [41] were used. Oxygen K-edge XAS spectra were computed using the transition-state potential approach of Slater [42] for the geometries obtained in the molecular dynamics runs. Details of the computation are presented in the supplementary materials. The calculated XAS along the T = 653 K isotherm are shown in Fig. 2(C).
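For reference, the scaling argument of Ref. [17] applied two paragraphs above reduces to a two-component linear mixing estimate for the 534 eV pre-edge peak height. The snippet below states it explicitly; the end-member heights are invented placeholders (chosen so the example reproduces the quoted ~73%), and, as noted in the text, such a scaling probably over-simplifies the problem.

```python
def gas_like_fraction(h_peak, h_liquid, h_gas):
    """Fraction of gas-like molecules implied by a measured pre-edge peak
    height, assuming a linear mix of liquid-like and gas-like end members."""
    return (h_peak - h_liquid) / (h_gas - h_liquid)

# Placeholder end-member heights in arbitrary units (assumptions):
print(gas_like_fraction(h_peak=0.80, h_liquid=0.25, h_gas=1.00))  # ~0.73
```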
A comparison of the calculated XAS of SCW with the vapor-phase spectrum from experiment shows the presence of the pre-edge peak in all of the spectra at a similar excitation energy (534 eV). The calculated pre-edge peak height increases with decreasing density, consistent with the trend in the experimental results. The main-edge region narrows with decreasing density in both experiment and calculation, though the agreement is less clear near the critical density. Examination of the final-state wave function (Fig. 4) indicates that the origin of the pre-edge peak (I) is excitations to final states that are a combination of antibonding OH and Rydberg orbitals. Examination of the orbital plots shows that these states are confined to the water molecule, with the tail of the wave function extending only to nearest-neighbor molecules. Thus, changes in the HB at the excited water molecule will have little effect on the shape and energies of these highly localized orbitals. This explains why the pre-edge peak position and profile are largely unaffected by the surrounding environment, showing no significant modification with a change in the density of SCW. Examination of orbital plots from the calculated XAS indicates that transitions in the main-edge region (II) are to final states with antibonding OH orbitals and a larger mixture of Rydberg states than found in the lower-lying excitations that are responsible for the pre-edge peak (I). At lower densities, where fewer interactions with surrounding water molecules are expected, there is less mixing into the diffuse Rydberg states, resulting in an increase of the energies and a narrowing of the distribution of these states. This is the reason for the narrower main-edge peak in the 0.35 g cm^-3 spectrum compared to 0.64 g cm^-3. An examination of the post-edge region (III) orbital plots for SCW indicates that excitations in this region are to diffuse, continuum orbitals. A decrease in density shows some localization of the wave functions of these states between the molecules. Both measurements and calculation show sensitive changes in this region, promising additional information if the calculations are improved.

In conclusion, measurements of oxygen K-edge X-ray Raman scattering show marked changes as thermodynamic conditions are tuned in the neighborhood of the critical point. There is, broadly speaking, reasonable qualitative agreement with XAS calculations that allows us to interpret these results in terms of orbital structures. However, the contrast in the measured data is better than in the calculation, especially near the exact critical conditions, which hints that we may be seeing an effect of fast electronic dynamics in the XRS spectra. The complex high-energy structure of the spectra and calculations also suggests that significantly more information might be obtained if the calculations can be modified to bring them into closer agreement with the data.
Intra-articular Infiltration of Platelet-Rich Plasma versus Hyaluronic Acid in Patients with Primary Knee Osteoarthritis: Preliminary Results from a Randomized Clinical Trial

Objective: The present study aimed to compare the effects of intra-articular infiltration of platelet-rich plasma with those of hyaluronic acid infiltration in the treatment of patients with primary knee osteoarthritis.

Methods: A randomized clinical trial was conducted with 29 patients who received an intra-articular infiltration with hyaluronic acid (control group) or platelet-rich plasma. Clinical outcomes were assessed using the visual analog scale for pain and the Western Ontario and McMaster Universities Arthritis Index (WOMAC) questionnaire before and after the intervention. In addition, post-treatment adverse effects were recorded. Categorical variables were analyzed using the chi-square and Fisher exact tests, whereas continuous variables were analyzed using the Student t-test, analysis of variance, and the Wilcoxon test; all calculations were performed with the Stats package of the R software.

Results: An independent analysis of each group revealed a statistical difference within the first months, with improvement in the pain and function scores but worsening by the 6th month after the procedure. There was no difference in the outcomes between the groups receiving hyaluronic acid or platelet-rich plasma. There was no serious adverse effect or allergic reaction during the entire follow-up period.

Conclusion: Intra-articular infiltration with hyaluronic acid or platelet-rich plasma in patients with primary knee gonarthrosis resulted in temporary improvement of functional symptoms and pain. There was no difference between the interventions.

Introduction

Knee osteoarthritis (OA) is a degenerative disease affecting mostly females and resulting in progressive joint cartilage destruction. Osteoarthritis leads to joint deformity, potentially with muscle and ligament imbalance, and most abnormalities occur in regions subjected to greater load. Its typical radiographic signs include bone sclerosis, cysts, and osteophytes. [1][2][3] Knee OA has a great impact on physical performance, and it is considered one of the 10 main causes of disability around the world. Standard conservative treatments for knee OA include weight loss, exercise, non-steroidal anti-inflammatory drugs (NSAIDs), analgesic agents, and intra-articular injection of hyaluronic acid (HA) and glucocorticoids. 4

Hyaluronic acid is used in the treatment of degenerative joint diseases. It is a glycosaminoglycan that acts on the extracellular matrix, providing greater joint lubrication and protection. Recently, however, orthobiologic injections have emerged as a potentially safe and effective option for knee OA treatment. These injections include bone marrow concentrate (BMC), mesenchymal stem cells (MSCs), and platelet-rich plasma (PRP). Platelet-rich plasma consists of plasma with a high platelet concentration. 5 Depending on the method used for PRP processing, it may also contain white blood cells in abnormally high concentrations. 6 Platelets and white blood cells are sources of high cytokine levels, which play a well-documented role in controlling a number of tissue regeneration processes, including cell movement and proliferation, angiogenesis, inflammation regulation, and collagen synthesis. 6
In addition to their role in local hemostasis, platelets contain an abundance of growth factors and cytokines, which are crucial in soft-tissue healing and bone mineralization. 7 Moreover, they release a number of proteins that attract macrophages, mesenchymal stem cells, and osteoblasts, resulting in the removal of necrotic tissue and faster tissue regeneration. 4 Recently, some studies investigated the potential beneficial effects of PRP in chronic diseases, including lateral epicondylitis and plantar fasciitis. 4 However, most PRP studies in the literature are non-randomized and have insufficient samples. The present study aims to determine the effect on pain and function outcomes of an intra-articular application of PRP, in comparison to HA, in the treatment of knee OA patients.

Method

This is a randomized clinical trial with 29 consecutively included patients. All patients participating in the present study agreed and signed an informed consent form. This study complied with the Helsinki Declaration and the Guideline for Good Clinical Practice, and the research protocol was approved by the local ethics committee.

Patient selection

The total sample included 29 patients of both genders, aged between 49 and 75 years, who met the clinical and radiographic diagnostic criteria of the American College of Rheumatology (ACR) for knee OA and were categorized as grade II or III according to the Kellgren-Lawrence classification. 2 The exclusion criteria were the following: previous surgery on the affected knee at any time, or orthopedic surgery on the lower limbs within the 12 months prior to the study; previous HA or steroid infiltration within 3 months prior to the study; advanced OA (grades IV and V); diagnosis of autoimmune or rheumatological diseases; body mass index (BMI) ≥ 35; secondary OA (i.e., fractures, neoplasms); history of acute or chronic communicable diseases; difficult-to-control or insulin-dependent type I or II diabetes; coxarthrosis diagnosed at the physical or radiographic examination; active infection or history of infection at the affected joint; axial deviation of 10° varus or 15° valgus, or a 1 cm discrepancy in lower-limb length; use of anticoagulants or immunosuppressants; discontinuation of oral chondroprotective therapy within the last 3 months; and abnormal renal and/or liver function.

All included patients had a confirmed diagnosis of knee OA and had undergone conservative treatment with physical therapy, stretching exercises, and analgesic agents for at least 6 months before the start of the study. Osteoarthritis was evaluated with knee radiographs in two views (anteroposterior and lateral) under load. The tests requested during the preselection visits were the following: biochemical blood tests (aspartate aminotransferase, alanine aminotransferase, gamma-glutamyl transferase, fasting blood sugar, creatinine, sodium, potassium, hemoglobin A1C, and complete blood count), serology for communicable diseases, magnetic resonance imaging (MRI) of the affected knee, bilateral knee radiography, and panoramic radiography of the lower limbs.

Randomization

Patients were randomized using the Research Randomizer System. 8 Thus, the study had two arms: a study group, submitted to intra-articular application of PRP, and a control group, receiving an HA application.
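For illustration only, a 1:1 allocation in the spirit of this step could be generated as below. The study used the web-based Research Randomizer System; this Python sketch is an assumed stand-in, and the seed and arm labels are arbitrary.

```python
import random

def allocate(n_patients=29, arms=("PRP", "HA"), seed=8):
    """Shuffle a balanced list of arm labels; with an odd n, one arm
    receives one extra patient."""
    labels = [arms[i % len(arms)] for i in range(n_patients)]
    random.Random(seed).shuffle(labels)  # fixed seed only for reproducibility
    return labels

print(allocate())
```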
Application method

Patients from both study arms were scheduled on an outpatient basis for infiltration in the following week. Control-group patients underwent a single knee intra-articular infiltration with Synvisc One Hylan G-F 20 (Lancaster, Pennsylvania, United States) following specific asepsis and antisepsis protocols. Upon arrival at the hospital, subjects from the study group were directed to the blood collection sector, where a 15 mL blood sample was sterilely collected by peripheral access into a specific tube. The sample was then transported at a controlled temperature for processing at a laboratory in the same hospital. The sample was centrifuged at 1,500 revolutions per minute for 5 minutes at room temperature. Next, the sample was quantified and considered acceptable if it presented a twofold increase in the number of platelets compared with the baseline value. After approximately 5 mL of PRP was obtained, the knee infiltration procedure was performed in a small surgical room. The platelet-rich plasma was applied through an intra-articular puncture of the knee. The entire process was carried out using the Arthrex Autologous Conditioned Plasma system (Arthrex Inc., Naples, FL, USA). The application process was repeated over the next 2 weeks, at 7 and 14 days, respectively, for a total of 3 PRP infiltrations.

Clinical follow-up and outcomes evaluation

The subjects' data were collected by the researchers, including age, laterality, BMI, edema, and stiffness of the affected knee. Both groups were followed up at the same frequency after the study group received the 3rd application, for a total period of 6 months. The standardized follow-up consisted of 5 outpatient medical visits over a 6-month period: the 1st visit occurred after 1 week, and the following visits were at 2 weeks and at 1, 3, and 6 months after the treatment. In addition, there were 2 telephone contacts with the patient, at 2 and 4 months after the procedure. The Western Ontario and McMaster Universities Arthritis Index (WOMAC) score was obtained at the following times: pre-infiltration and 1, 3, and 6 months after treatment. The visual analog scale (VAS) for pain was applied 2 and 4 months after the procedure.

Statistical analysis

Statistical analysis was performed using the Stats package of the R software (R Foundation for Statistical Computing, Vienna, Austria). 9 Continuous variables were descriptively analyzed using means and standard deviations, followed by an evaluation of normality using the Shapiro test. 10 Categorical variables were presented as proportions. For intragroup comparison, analysis of variance (ANOVA) and Fisher's least significant difference tests were used 11 to determine any difference in WOMAC scores, whereas the VAS results were analyzed using a paired Student t-test. Intergroup differences were assessed using the Student t-test 12 for parametric variables and the Mann-Whitney test 13 for non-parametric variables. Categorical variables were compared between the study and control groups using the chi-squared test 14 or the Fisher exact test. 15

Results

No patient was lost to follow-up. The control and study groups were homogeneous, with no statistical difference between parameters, as shown in Table 1. Regarding functional outcomes (WOMAC scores), there were no statistically significant differences between the study and control groups from the pre-intervention level to 6 months after treatment (Table 2).
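As a sketch of the intergroup comparison workflow laid out in the statistical methods above: normality is checked first, and the parametric or non-parametric test is chosen accordingly. The study itself used the Stats package of R; the Python/SciPy version below is an assumed equivalent with hypothetical scores, not the authors' analysis script.

```python
import numpy as np
from scipy import stats

def intergroup_test(prp_scores, ha_scores, alpha=0.05):
    """Shapiro normality check, then Student t-test (parametric) or
    Mann-Whitney U test (non-parametric), mirroring the Methods."""
    prp = np.asarray(prp_scores, dtype=float)
    ha = np.asarray(ha_scores, dtype=float)
    normal = (stats.shapiro(prp).pvalue > alpha
              and stats.shapiro(ha).pvalue > alpha)
    if normal:
        name, result = "Student t-test", stats.ttest_ind(prp, ha)
    else:
        name, result = "Mann-Whitney", stats.mannwhitneyu(
            prp, ha, alternative="two-sided")
    return name, result.pvalue

# Hypothetical 6-month WOMAC scores, for demonstration only:
print(intergroup_test([42, 38, 51, 45, 40], [44, 39, 47, 50, 43]))
```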
In addition, the VAS score for pain revealed no statistically significant difference at the 2nd (p = 0.50) and 4th (p = 0.45) months after the treatment, as shown in Figure 1. In the intragroup evaluation, function improved after the procedure but worsened in the last month of evaluation. For the PRP group, there was a statistically significant difference between the WOMAC scores at baseline and at 3 months after the procedure (p < 0.05). In addition, there was a difference between the scores from the 1st and 6th months after the procedure, owing to an increased score (p < 0.05). Pain was also affected, with a mean difference in VAS score of 1.64 between the 2nd and 4th months after the treatment (p < 0.05). Figure 2 shows the WOMAC scores of the study group at different times. Regarding the HA group, there were statistically significant differences in WOMAC scores from baseline to 1 month (p < 0.05) and from baseline to 3 months after treatment (p < 0.05). There was no statistically significant difference in the VAS score for pain between the 2nd and 4th months after the treatment (p = 0.49). Figure 3 shows this distribution. No infections or allergic reactions were reported during the 6-month follow-up. Pain cases were treated with analgesic agents, cryotherapy, and rehabilitation.

Discussion

The main finding of our study was the lack of difference in functional outcomes and pain assessment at a medium-term follow-up (6 months) between patients undergoing intra-articular infiltration with HA and those receiving PRP. However, both treatment methods were effective in improving pain and function over the study period and proved to be safe. Functional assessment was performed using the WOMAC questionnaire, revealing no differences between the two groups over the 6-month follow-up. The literature is still controversial regarding this outcome. A recently published systematic review using the total WOMAC score for functional assessment concluded that PRP is superior to HA in the medium term (3-6 months); however, the same study found no differences between groups when analyzing the fractional WOMAC scores for stiffness and physical function. 16 Another meta-analysis demonstrated the superiority of PRP over HA in pain improvement as assessed by the WOMAC score, yet concluded that there is no obvious superiority between PRP and HA in knee OA treatment. 17 Some randomized clinical trials comparing these two methods for OA treatment also found no differences in functional scores after 6 months of follow-up. 18,19

We found no differences regarding pain between the PRP and HA groups. Here too, the literature is divergent. Zhang et al. 17 found no differences in the VAS for pain between the two treatments 3 and 6 months after the infiltrations. However, Cole et al. 19 demonstrated significant pain improvement according to the VAS in patients treated with PRP 6 and 12 months after the infiltration. Most systematic reviews on the subject report the challenge of comparing the published studies, owing to major variations in PRP preparation and composition, the number of infiltrations performed, small samples, short follow-up times, and different inclusion and evaluation criteria. 20 Our study used a standardized kit for PRP preparation; in addition, the samples were homogeneous, and the infiltrations were performed once a week for 3 weeks. Previous studies had shown advantages of multiple PRP applications over a single infiltration, including longer-lasting PRP effects when more than one application was performed. 21,22
Our study demonstrated that both PRP and HA were effective in treating pain and improving function. However, their effects deteriorate over time and virtually disappear 5 to 6 months after the treatment. Di Martino et al. 23 found similar outcomes in a randomized clinical trial: patients reported symptom improvement up to 9 months after HA application and up to 12 months after intra-articular PRP infiltration, but with progressive loss of effect. Filardo et al. 18 also observed similar outcomes in a randomized clinical trial with 1 year of follow-up; these authors showed an improvement in pain and function in patients treated with PRP or HA, but the outcomes remained virtually stable from 2 months after treatment. Regarding adverse effects, both PRP and HA proved to be safe in our study. Neither treatment caused severe, lasting side effects. In a meta-analysis, Han et al. found no differences between treatment groups regarding adverse effects. 24 Other studies have also concluded that both treatments are safe and have few side effects during follow-up. 25,26

The study has some limitations. First, as this is a preliminary report, the sample size is small. Second, the follow-up period is relatively short (6 months), and some studies have shown that PRP effects last longer than those of HA. The absence of a sham group (placebo or steroid infiltration) and the lack of blinding are further limitations of our study.

Conclusion

Knee intra-articular infiltration with HA or PRP in patients with primary gonarthrosis resulted in transient improvement of pain and function. Both treatments proved to be safe. There was no difference between these interventions.

Financial Support

There was no financial support from public, commercial, or non-profit sources.
Alzheimer's disease hypothesis and related therapies

Alzheimer's disease (AD) is a progressive neurodegenerative disorder and the most common cause of dementia. There are many hypotheses about AD, including abnormal deposition of amyloid β (Aβ) protein in the extracellular spaces of neurons, formation of twisted fibers of tau proteins inside neurons, cholinergic neuron damage, inflammation, and oxidative stress, and many anti-AD drugs based on these hypotheses have been developed. In this review, we discuss the existing and emerging hypotheses and related therapies.

Background

Alzheimer's disease (AD) is a progressive neurodegenerative disorder, the most common cause of dementia, and it imposes immense suffering on patients and their families. According to the World Alzheimer Report 2016, there are currently about 46.8 million people suffering from AD worldwide. The ageing of the world population will further compound this problem and lead to a steep increase in the number of AD patients. The number of AD patients is expected to double nearly every 20 years, so the AD population will reach 74.7 million in 2030 and 131.5 million in 2050 [1]. AD has become the third major cause of disability and death in the elderly, after only cardiovascular and cerebrovascular diseases and malignant tumors. However, only five drugs have been approved by the FDA to treat AD in the hundred years since the first AD patient was diagnosed. Moreover, these approved drugs, including cholinesterase inhibitors, the N-methyl-D-aspartate (NMDA) receptor antagonist, or their combination, usually provide temporary and incomplete symptomatic relief, accompanied by severe side effects. Such marginal benefits are unable to slow the progression of AD. Thus, developing drugs for more effective AD treatment is urgently needed.

Current hypotheses about AD and anti-AD drug development

AD is a complicated disease involving many factors. Owing to the complexity of the human brain and the lack of suitable animal models and research tools, the detailed pathogenesis of AD is still unclear. Many hypotheses about AD have been developed, involving amyloid β (Aβ), tau, cholinergic neuron damage, oxidative stress, inflammation, and so on. Accordingly, many efforts have been made to develop anti-AD drugs based on these hypotheses.

Aβ cascade hypothesis

Extracellular deposits of Aβ peptides as senile plaques, intraneuronal neurofibrillary tangles (NFTs), and large-scale neuronal loss are the main pathological features of AD. Thus, Aβ peptides have long been viewed as a potential target for AD, and they dominated new drug research during the past twenty years [2]. The most direct strategy in anti-Aβ therapy is to reduce Aβ production by targeting β- and γ-secretase [3]. Safety issues are the overriding problem. For targeting γ-secretase, undesirable side effects are inevitable because of its physiological substrates, e.g., the Notch signaling protein [4][5][6][7], which is essential in normal biological processes. Similarly, targeting β-secretase is also challenged by side effects such as blindness and by its large catalytic pocket [8]. More importantly, in sporadic AD cases, the majority of AD patients do not necessarily overproduce amyloid precursor protein. Besides, Aβ isoforms can also serve as endogenous positive regulators of neurotransmitter release at hippocampal synapses [9]. Thus, inhibiting Aβ production may encounter many challenges. Aβ clearance by immunotherapy is the alternative choice.
For active Aβ immunotherapy, although the first active AD vaccine (AN1792), developed by ELAN, showed some beneficial effects such as less cognitive decline, it was suspended owing to a serious side effect, meningoencephalitis [10][11][12]. Passive immunotherapy has not done much better than active immunotherapy: several antibodies targeting Aβ have failed in clinical trials, including bapineuzumab (Pfizer/Johnson & Johnson) [13,14], crenezumab (Genentech) [15,16], solanezumab (Eli Lilly) [16][17][18] and ponezumab (Johnson & Johnson/Pfizer) [19][20][21]. In addition, although passive immunotherapy can overcome some problems of active immunotherapy, it still carries unavoidable side effects such as amyloid-related imaging abnormalities [22]. Likewise, the small-molecule Aβ binders scyllo-inositol [23] and tramiprosate [24][25][26] also failed in clinical trials. These failures cast even more doubt on the Aβ theory [27]. Actually, the strategy of targeting only a single functional subregion of Aβ may partly account for these failures [27,28]. Furthermore, immunotherapy may influence the human immune system, which might have beneficial or detrimental consequences (such as side effects). However, every cloud has a silver lining. A phase Ib trial of aducanumab (Biogen) showed a positive correlation between brain Aβ levels and disease exacerbation as measured by the Clinical Dementia Rating [29][30][31]. Even the failed phase III EXPEDITION3 trial of solanezumab (Eli Lilly) still demonstrated better performance on the Clinical Dementia Rating Sum of Boxes and beneficial impacts on the Mini-Mental State Examination and Activities of Daily Living [17,18,32,33]. Thus, despite all kinds of problems, immunotherapy may still be the better approach currently available to modify the extent of neurodegeneration in AD [34]. In fact, the original amyloid cascade hypothesis was that "Aβ is the causative agent in Alzheimer's Disease pathology, and that neurofibrillary tangles, cell loss, vascular damage, and dementia follow as a direct result of this deposition" [35]. After decades of research, although the bulk of the data still supports a role for Aβ as the primary initiator of the complex pathogenic cascade in AD, more and more evidence indicates that Aβ acts as a trigger in the early disease process and appears to be necessary but not sufficient in the late stage of AD [36]. In particular, recent rapid progress in understanding toxic amyloid assembly and the systemic abnormalities associated with Aβ metabolism will provide fresh impetus and new opportunities for this interesting approach [37].

Tau hypothesis

Neurofibrillary tangles, another intracellular hallmark of AD, are composed of tau. Tau is a microtubule-associated protein that works as a scaffolding protein enriched in axons. In pathological conditions, tau aggregation impairs the axons of neurons and thus causes neurodegeneration. After numerous failures of Aβ-targeting drugs for AD, more interest is turning to the therapeutic potential of targeting tau, particularly as studies of biomarkers suggest that tau pathology is more closely linked to the progression of AD [38]. Tau is regulated by posttranslational modifications, including phosphorylation and O-linked N-acetylglucosamine (O-GlcNAc) modification [39]. Under pathological conditions, increased tau hyperphosphorylation renders the protein aggregation-prone, reduces its affinity for microtubules, and thereby influences neuronal plasticity.
Consequently, strategies to target tau involve blocking tau aggregation, utilizing tau vaccinations, stabilizing microtubules, and manipulating the kinases and phosphatases that govern tau modifications. However, most of these efforts have failed in clinical trials. Among tau aggregation blockers, TRx0237 failed to show treatment benefits in phase III trials [40]. As for vaccinations, tau-targeted active vaccines (ACI35 and AADvac-1) and passive vaccines (RG6100 and ABBv-8E12) are currently in phase I and II clinical trials [41,42]. Intravenous immunoglobulin (IVIG), the only passive vaccine in phase III clinical trials, failed to meet the primary end points in patients with mild-to-moderate AD [42]. Other tau-targeting strategies for AD, including stabilizing microtubules and manipulating kinases and phosphatases, have only been tested in preclinical studies. In general, tau-targeting therapies remain challenging because of our incomplete understanding of AD, the lack of robust and sensitive biomarkers for diagnosis and response monitoring, and the obstruction posed by the blood-brain barrier.

Inflammation hypothesis

Reactive gliosis and neuroinflammation are hallmarks of AD. Microglia-related pathways are considered central to AD risk and pathogenesis, as supported by emerging genetic and transcriptomic studies [43][44][45][46][47]. Increasing evidence demonstrates that microglia emerge as central players in AD. In the very early stage, microglia, TREM2 and the complement system are responsible for synaptic pruning [48,49]. The processes of activity-dependent and long-term synaptic plasticity are the common and fundamental cellular underpinnings of learning and memory, which may manifest as an influence on long-term potentiation [50]. Subsequently, reactive microglia and astrocytes surround amyloid plaques and secrete numerous pro-inflammatory cytokines. These events are regarded as an early, prime mover in AD evolution. However, non-steroidal anti-inflammatory drugs (NSAIDs) did not show sufficient benefits in the clinic. This is because the relationship between innate immunity and AD pathogenesis is complex, and the immune response can be either deleterious or beneficial depending on the context [47,51,52]. However, the new observations that PD-1 immune checkpoint blockade reduces the pathology of AD and improves memory in mouse models of AD [53][54][55] point a direction for future research. The recent advances in our understanding of the mechanisms underlying microglial dysfunction in pruning, the regulation of plasticity, and neurogenesis are opening up possibilities for new AD therapeutic interventions and diagnostics [56,57]. Targeting these aberrant microglial functions and thereby restoring homeostasis may yield novel paradigms for AD therapies. However, given the complexity and diverse functions of microglia in health and disease, there is a crucial need for new biomarkers reflecting the function of specific microglial populations [52,58].

Cholinergic and oxidative stress hypothesis

Acetylcholine (ACh) is an important neurotransmitter used by cholinergic neurons, and it is involved in critical physiological processes such as attention, learning, memory, stress response, wakefulness and sleep, and sensory information [59][60][61][62][63]. Cholinergic neuron damage is considered a critical pathological change that correlates with cognitive impairment in AD. Thus, the cholinergic hypothesis was first tested with cholinesterase inhibitors in AD treatment.
Tacrine, a cholinesterase inhibitor, was the first anti-AD drug available in the clinic [64][65][66], although it was withdrawn from the market in 2012 due to severe side effects. Although inhibiting cholinesterase is a symptomatic treatment with marginal benefits, it is currently the most available clinical treatment, and it gives desperate AD patients a glimmer of hope. For other neurotransmitter dysfunctions, such as those of dopamine and 5-hydroxytryptamine, there are some studies, but not as many as for acetylcholine in AD.

Oxidative stress is considered to play an important role in the pathogenesis of AD. In particular, the brain utilizes more oxygen than other tissues and undergoes mitochondrial respiration, which increases the potential for ROS exposure. In fact, AD is highly associated with cellular oxidative stress, including augmentation of protein oxidation, protein nitration, glycoxidation and lipid peroxidation, as well as accumulation of Aβ, which can itself induce oxidative stress [67][68][69][70][71][72][73]. Thus, treatment with antioxidant compounds should, in theory, provide protection against oxidative stress and Aβ toxicity. However, oxidative stress is only a single feature of AD, so the antioxidant strategy has been challenged regarding its potency to stop the progression of AD, and it has instead been proposed as one component of combination therapy [74,75].

Glucose hypometabolism

Glucose hypometabolism is an early pathogenic event in the prodromal phase of AD and is associated with cognitive and functional decline. Early therapeutic intervention before irreversible degeneration has become a consensus in AD treatment. Thus, alleviating glucose hypometabolism has emerged as an attractive strategy for AD treatment. However, most of these therapeutic strategies target mitochondria and bioenergetics; they have shown promise at the preclinical stage but without success in clinical trials [76,77]. Although no strategies are yet available to alleviate glucose hypometabolism in the clinic, glucose-metabolism brain imaging such as 18F-FDG PET (positron emission tomography with 2-deoxy-2-[18F]fluoro-D-glucose) has become a valuable tool for the diagnosis of neurodegenerative diseases that cause dementia, including AD [78].

Up to now, there are no effective treatments that change the course of AD. Confronting these difficulties, we should deepen our understanding of these hypotheses while also updating our knowledge about AD and developing new hypotheses.

New pathways to AD

AD is conventionally regarded as a central nervous system (CNS) disorder. However, increasing experimental, epidemiological and clinical evidence suggests that the manifestations of AD extend beyond the brain. Most notably, research over the past few years reveals that the gut microbiome (GMB) has a profound impact on the formation of the blood-brain barrier, myelination, neurogenesis, and microglia maturation [79][80][81][82][83][84]. In particular, results from germ-free animals and from animals exposed to pathogenic microbial infections, antibiotics, probiotics, or fecal microbiota transplantation show that the gut microbiota modulates many aspects of animal behavior, suggesting a role for the gut microbiota in host cognition or AD-related pathogenesis [85][86][87][88]. The underlying mechanisms by which the gut microbiota influences the brain involve communication through the immune system, the endocrine system, the vagus nerve, and bacteria-derived metabolites.
Immune pathway The intestinal mucosal lymphoid tissue contains 70%–80% of the immune cells in the whole body and is considered the largest and most important human immune organ. It is also the first line of host defense against pathogens. The human gut contains a large, diverse and dynamic enteric microbiota, comprising more than 100 trillion microorganisms from at least 1000 distinct species. There is a complex relationship between the intestinal mucosal immune system and the intestinal microbiota. Thus, gut microbiota-induced immunomodulation is emerging as an important pathway that influences AD [89]. The gut microbiota can influence the brain through the immune system in several ways. Firstly, the intestinal microbiome can induce cytokine secretion; these cytokines enter the circulatory system, pass through the blood-brain barrier, and directly affect brain function. For instance, perivascular macrophages and cerebral small vessel epithelial cells can receive IL-1 signals produced in response to the intestinal microbiome and affect the central nervous system. Also, gut microbes can activate Toll-like receptors of brain immune cells (such as microglia) through microbe-associated molecular patterns (MAMPs). MAMPs can either directly bind to intestinal epithelial cells or infiltrate the intestinal lamina propria to activate lymphocytes, promoting the release of pro-inflammatory cytokines, which further cause subsequent inflammation in the brain. Secondly, gut microbes can produce metabolites such as short-chain fatty acids (SCFAs), gamma-aminobutyric acid (GABA) and 5-HT precursors, which can also travel to the brain via the circulatory system or signal through intestinal epithelial cells to produce cytokines or neurotransmitters that activate the vagus nerve. Thirdly, gut microbes can activate enteroendocrine cells to produce 5-HT, which affects the brain through neuroimmune pathways. In addition to changing the functions of the immune system, such as through the secretion of inflammatory or anti-inflammatory factors, the intestinal microbiome can also affect the development and composition of the immune system. For example, in germ-free mice, isolated lymphoid follicles in gut-associated lymphoid tissue are unable to mature, and the number of IgA-secreting lymphocytes in the intestinal epithelium is decreased [89][90][91][92]. Regarding the immune system in the brain, the deletion of the gut microbiota in germ-free mice has a global influence on the cell proportions and maturation of microglia, and thus affects the properties and phenotype of microglia as compared to conventionally colonized controls [93]. Similar results were obtained in antibiotic-treated mice. Other research also demonstrates that the numbers of T regulatory cells and T helper lymphocytes (T helper 17, Th17) are significantly reduced in germ-free mice, indicating the regulatory effects of the intestinal microbiome on T cell composition, while microbiota transplantation into germ-free mice can reverse these variations and restore normal immune function [94,95]. All these modulations by the gut microbiota may have direct and indirect effects on AD development and progression. Endocrine pathway and the vagus nerve The gut is also the largest endocrine organ in the body. The gut microbiota can regulate the secretion of many hormones from intestinal endocrine cells, such as corticosterone and adrenal hormones, and thus establish information exchange between the intestines and the brain.
For example, the intestinal microbiome can affect the secretion of serotonin and regulate emotional activities of the brain [96,97]; intestinal microbial metabolism can also produce a variety of neurotransmitters, such as dopamine, GABA, acetylcholine and melatonin, which are transmitted to the central nervous system through the vagus nerve [98]. Besides transporting these signal substances, the vagus nerve itself plays an important role in inflammation and depression [99]. The vagus nerve can influence the gastrointestinal tract and orchestrate the complex interactions between central and peripheral neural control mechanisms [100]. Stimulation of the vagus nerve is able to regulate mood and the immune system, suggesting the therapeutic potential of vagus nerve modulation to attenuate pathophysiological changes and restore homeostasis [98][99][100][101][102][103]. Bacteria-derived metabolites Generation of essential nutrients for host physiology, such as vitamins and other cofactors, is an important physiological function of the gut microbiota [104]. The metabolites of the microbiome, such as the SCFAs acetate, butyrate, and propionate, are able to modulate peripheral and central pathologic processes [105]. For example, butyrate is effective in reducing inflammation and pain. Once in the brain, acetate is able to alter the levels of the neurotransmitters glutamate, glutamine, and GABA, as well as increase anorectic neuropeptide expression [106]. In addition, the gut microbiota can secrete large amounts of amyloids and lipopolysaccharides, which might contribute to the modulation of signaling pathways and the production of proinflammatory cytokines associated with AD pathogenesis and Aβ deposition [107][108][109]. In fact, the microbiota-gut-brain axis has been established, and a disturbed gut microbiota has been incriminated in many neurodegenerative diseases in animal and translational models. In theory, a role for the microbiota-gut-brain axis is highly plausible. However, the theoretical basis for the use of microbiota-directed therapies in neurodegenerative disorders still needs support from high-quality clinical trials [110]. To date, only a few studies have directly focused on the gut microbiota and AD [111,112], and studies in AD patients are particularly deficient. A recent human study showed that an increase in the abundance of a pro-inflammatory GMB taxon and a reduction in the abundance of an anti-inflammatory taxon are possibly associated with a peripheral inflammatory state in patients with cognitive impairment and brain amyloidosis. This is important for research on the gut microbiota and AD. However, further investigations are still necessary to explore a possible causal relation between GMB-related inflammation and amyloidosis [111]. A comprehensive understanding of these underlying mechanisms may provide new insights into novel therapeutic strategies for AD. In particular, based on the gut microbiota hypothesis, traditional Chinese medicine and probiotic bacteria may play a more important role in therapy [113]. Conclusions Nowadays, new technologies are making it possible to learn the pathologic details of disease in depth. More importantly, scientists are beginning to treat AD as a systemic disease and are paying more attention to the correlation between the brain and other organs [47,89,114].
Perhaps, for a complicated disease such as AD, research and therapies should be based on a principle that combines reductionism with holism, and great efforts should be made to search for the fundamental laws of AD by means of multi-scale modeling and efficient numerical assessment. Perhaps, just as in traditional Chinese medicine [115], combination treatments or systemic therapy will be the final way out.
Endogenous testosterone concentrations and muscle mass, strength and performance in women, a systematic review of observational studies Objective: To explore the associations between endogenous testosterone blood concentrations and muscle mass, strength and performance in community-dwelling women. Design, Patients and Measurements: Online databases, including Ovid MEDLINE, EMBASE and Web of Science, were searched for observational studies, with at least 100 female participants, reporting associations between endogenous testosterone blood concentrations and muscle mass, strength and performance. The findings were synthesized in a narrative review. Heterogeneity in study design and analysis precluded a meta-analysis. Results: Of the 36 articles retrieved for full-text review, 10 met the inclusion criteria. Eight studies were cross-sectional, 1 was longitudinal and 1 provided both cross-sectional and longitudinal data. Testosterone was measured by liquid chromatography-tandem mass spectrometry in 2 studies and by immunoassay in 8. An association between total testosterone and muscle mass, strength or performance in women was not found. The studies of calculated free or bioavailable testosterone and lean muscle mass reported a positive association, but no association was reported for muscle strength or performance. Each included study was limited by a high risk of bias in at least one assessed domain. Eligibility criteria Studies were included if they were observational studies of community-dwelling women aged 18 years and over; had more than 100 participants; measured blood total testosterone by liquid/gas chromatography-tandem mass spectrometry (LC/GC-MS) or immunoassay, or calculated free or bioavailable testosterone; and assessed any skeletal muscle outcome (mass, strength or performance). Where a study had both male and female participants, we included the study if the data for women could be extracted separately. We did not include studies of women with serious chronic disease or endocrine conditions, for example, polycystic ovary syndrome, cancer-associated cachexia, human immunodeficiency virus, chronic liver or kidney disease, hypopituitarism or adrenal failure. We included studies that examined the association between androgens and muscle outcomes as the primary outcome, as well as studies that examined this association secondary to a different primary outcome, for example falls or frailty. Analysis The data were synthesised descriptively and are presented as a narrative review. Studies were grouped according to testosterone exposure (total testosterone, or free and bioavailable testosterone); type of study (cross-sectional or longitudinal); and type of outcome (muscle mass, muscle strength or muscle performance). Similarities and differences between the findings of the studies were examined. Differences in the study samples, in the assessment of exposures (androgen concentrations) and outcomes, and in the application of different statistical approaches, including adjustment for confounding, precluded the conduct of a meta-analysis. Characteristics of included studies The study selection process is detailed in the PRISMA flow diagram (Figure 1). The searches of the databases identified 8015 records. Of the 10 included studies, 4 were from Europe, [13][14][15][16] 3 were from the United States 17-19 and 3 were from Asia.
[20][21][22] The majority of the studies included middle-aged or postmenopausal women; 1 study was of premenopausal women under 40 years of age. 17 Eight studies reported cross-sectional associations between testosterone and muscle outcomes, [13][14][15][17][18][19][20]22 1 reported longitudinal associations, 21 and 1 reported both cross-sectional and longitudinal associations. 16 Of the 9 studies reporting cross-sectional associations, [13][14][15][16][17][18][19][20]22 the smallest had 187 participants. 13 Seven studies examined the association between total testosterone and muscle outcomes. 13,15,[17][18][19][20][21] Two studies reported associations with both total and calculated free testosterone, 14,16 and 1 study reported free and bioavailable testosterone. 22 Of these, 1 did not report how free testosterone was calculated. 14 One study found a positive association between total testosterone and both total lean mass and total appendicular skeletal mass. 18 The same study found an inverse association between total testosterone and percent lean mass in the appendicular skeleton. 18 The analysis appeared to be a correlation analysis with no adjusting variables included. Muscle strength No evidence for an association between total testosterone and muscle strength was found in 5 of the 6 studies reporting this outcome [13][14][15]17,20 (Table 1). Haren et al. 18 found an inverse association between knee extension peak torque and total testosterone. The authors did not adjust for any confounders and reported no association for any other lower-limb strength measures assessed. 18 Muscle performance Three of the 4 studies did not find any evidence of an association between total testosterone and muscle performance 14,15,20 (Table 1). One study, reporting the findings of an unadjusted Pearson's correlation analysis, found a positive association between the 6 m walk time and testosterone, and an inverse association between the distance walked in 6 min and testosterone, but no evidence for an association with the sit-to-stand test result. 18 Longitudinal studies: association between total testosterone and muscle mass Neither of the 2 longitudinal studies examining the association between baseline total testosterone and muscle mass at follow-up found evidence for an association 16,21 (Table 2). There was also no evidence, before or after adjustment, for a significant association between testosterone levels at baseline and the likelihood of low skeletal muscle mass at 8-year follow-up, 21 or for an association at the 10-year follow-up between the change in testosterone from baseline and muscle mass. 16 Cross-sectional studies: association between free and bioavailable testosterone and muscle outcomes Muscle mass All 3 studies found a positive association between free testosterone and muscle mass (appendicular lean mass, skeletal muscle mass or total lean body mass) 14,16,22 (Table 3). Only 1 study calculated free testosterone from LC-MS/MS testosterone results. 16 Muscle performance There was no evidence for an association between free or bioavailable testosterone and muscle performance in either of the 2 studies assessing it, with 1 assessing this outcome as quartiles of gait speed, and the other as age-adjusted timed up-and-go test results 14,22 (Table 3). Longitudinal studies: association between free testosterone and muscle mass The longitudinal association between free testosterone and muscle mass was examined by 1 study, with testosterone measured by LC-MS/MS 16 (Table 3).
The authors found a positive association between free testosterone at baseline and appendicular lean mass after 8−10 years, but not for the change in free testosterone. 16 Risk of bias of included studies Six of the 8 cross-sectional studies measuring total testosterone had a high risk of bias in at least one assessed domain (Table 4). Two did not control or adjust for relevant confounders. 16,22 Risk of bias in the studies was assessed to be low for the other items. In the 1 longitudinal study reporting free testosterone, a high risk of bias was found only regarding incomplete follow-up and an absence of strategies to address this. 16 DISCUSSION The available observational data do not support an association between testosterone and muscle mass, strength or performance in women. [13][14][15][16][17][19][20][21] Conversely, the 3 studies that estimated free or bioavailable testosterone reported a positive association with lean muscle mass, but not with muscle strength or performance. 14,16,22 Although only 2 of the 10 studies measured the blood concentration of testosterone with LC-MS/MS, 16,17 which is considered the gold-standard method for testosterone measurement, 24 the findings were consistent regardless of the assay methodology used. There was, however, a clear discrepancy in findings between serum testosterone and estimated free or bioavailable testosterone which merits consideration. Testosterone circulates primarily bound to SHBG, but also to albumin, cortisol-binding globulin (CBG) and orosomucoid. 25 Formulae to calculate non-protein-bound testosterone take into account only albumin and SHBG and ignore the contributions of CBG and orosomucoid. 25 Furthermore, the available formulae assume linear binding of testosterone. They do not take into account that SHBG has two binding sites that do not bind testosterone with equal affinity, or the impact of SHBG binding of other sex steroids. 25 Thus, the estimation of free or bioavailable testosterone is crude, and primarily determined by the serum SHBG and testosterone concentrations. Furthermore, when a study such as Bann et al. 16 has reported no association between serum total testosterone and lean mass but a positive association between calculated free testosterone and lean mass, it is arguable that the association is governed by the SHBG concentration. Low SHBG is a known independent marker of increased metabolic disease risk, notably diabetes. 26,27 Therefore, the apparent association with free or bioavailable testosterone most likely represents an association between low SHBG and lean mass. In addition to the above, evidence that the small non-protein-bound fractions of sex steroids are of the greatest biological importance remains to be established. It has been suggested that unbound sex steroid fractions may be the most readily degraded and that their estimated serum concentrations may not represent the overall bioactivity. 28 Table 4: risk of bias of the included studies. Abbreviations: N, no; N/A, not applicable; Y, yes. a Modified Hoy tool (items 1−4 indicate external validity, items 5−10 internal validity).
Item 1: national representativeness; item 2: sampling frame; item 3: sampling technique; item 4: minimal nonresponse bias; item 5: data collected from subjects; item 6: acceptable case definition used; item 7: valid and reliable study instrument used; item 8: same mode of data collection for all subjects; item 9: exposure measured in valid and reliable way; item 10: relevant confounders controlled/adjusted for. b Modified Joanna Briggs Institute critical appraisal checklist for cohort studies. Item 1: groups similar and from same population; item 2: exposures measured similarly across groups; item 3: exposure measured validly and reliably; item 4: confounding factors identified; item 6: participants free of outcome at start; item 7: outcomes measured in valid and reliable way; item 8: follow-up appropriate; item 9: follow-up complete, or explained if not; item 10: strategies used to address incomplete follow-up; item 11: appropriate statistical analysis used. These findings highlight the importance of harmonized outcome reporting. Recent consensus statements on the definition and diagnosis of sarcopenia 1,2 may facilitate greater consistency in research in this field. To the best of our knowledge, there are no previously published systematic reviews that have examined the association between endogenous blood testosterone levels and muscle mass, strength or performance. This systematic review, therefore, provides a novel synthesis of the evidence to date. The strengths of this systematic review are the inclusion of studies of community-based women with over 100 participants, and the separate reporting of cross-sectional and longitudinal findings for muscle mass, strength and performance outcomes. The differences between the studies in recruitment, exposure and outcome assessment, and statistical methodology precluded the conduct of a meta-analysis. In conclusion, an association between total testosterone and muscle mass, strength and performance in women is not supported by this systematic review. However, as the majority of studies employed immunoassays for testosterone measurement, a definitive conclusion of no association cannot yet be made. While a relationship between free testosterone and muscle mass was observed, this should be interpreted with caution. Further studies are required to determine whether blood testosterone concentrations, measured with precision, such as by LC-MS/MS, are indicative of skeletal muscle mass and/or performance in women.
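For readers unfamiliar with how "calculated free testosterone" is obtained, the following is a sketch of the mass-action formulation commonly attributed to Vermeulen and colleagues; the association constants given here are approximate literature values rather than figures from this review, and the single SHBG constant embodies exactly the simplification criticized in the Discussion above (SHBG's two unequal binding sites are ignored, as is binding to CBG and orosomucoid):

% Sketch of the mass-action model behind common calculated free
% testosterone (cFT) formulae; constants are approximate:
% K_a ~ 3.6e4 L/mol (albumin), K_s ~ 1e9 L/mol (SHBG), all
% concentrations in mol/L. TT = total, FT = free testosterone.
\[
  TT = FT\,(1 + K_a C_{\mathrm{Alb}})
       + \mathrm{SHBG}\cdot\frac{K_s\,FT}{1 + K_s\,FT}
\]
% Writing N = 1 + K_a C_Alb, this rearranges to a quadratic in FT,
% whose positive root is the calculated free testosterone:
\[
  N K_s\,FT^{2} + \bigl[N + K_s(\mathrm{SHBG} - TT)\bigr]\,FT - TT = 0
\]

As the equations make plain, the computed FT is driven almost entirely by the measured TT and SHBG concentrations, which is why the Discussion argues that associations with calculated free testosterone may in fact reflect associations with SHBG.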
S.O.V.O.R.A.: Due to the growth of users and linked devices in networks, there is an emerging need for dynamic solutions to control and manage computing and network resources. This document proposes a Distributed Wireless Operative System on a Mobile Ad-hoc Network (MANET) to manage and control computing resources in relation to several virtual resources linked in a wireless network. This prototype has two elements: a local agent that works on each physical node to manage the computing resources (e.g., virtual resources and distributed applications) and an orchestrator agent that monitors, manages, and deploys policies on each physical node. These elements arrange the local and global computing resources to provide a quality service to the users of the Ad-hoc cluster. The proposed S.O.V.O.R.A. model (Operating Virtualized System oriented to Ad-hoc networks) defines primitives, commands, virtual structures, and modules to operate as a distributed wireless operating system. Introduction The inclusion of communications in computer systems has allowed for new dynamics between devices and users, the deployment of new services, and the demand for new features. The constant evolution of hardware and communication systems enables new kinds of technologies that increase the productivity of traditional systems such as the Internet, education, surveillance, monitoring systems, networks, and others. As a result, more and more wireless technologies are embedded in a single mobile device, supporting different cellular networks such as 2G (GSM CSD, GPRS), 3G (UMTS/HSDPA), 4G (WiMAX, LTE, LTE-A (HSPA + LTE)), and 5G. These rely on a central base station and a set of cells to provide services and coverage. However, these systems also support short-range communications, which have rapidly evolved by reducing the size and increasing the computing capacity of microchips, allowing for overlapping networks within the same device. Two technologies stand out in that evolution: Bluetooth and IEEE 802.11.x. When using wireless mobile devices, it is possible to build networks known as MANETs (Mobile Ad-hoc Networks), which can be configured autonomously, do not require centralized control, and can automatically recover in case of failure. One alternative for exploiting the features offered by a MANET is to use mobile clouds [1] as a platform for sharing resources in a cluster, taking advantage of MANET features. The massive number of users and services linked together brings new challenges due to the dynamic and stochastic behavior of distributed systems such as the Internet. Users consume applications and computing resources that are eventually shared on the network. Newly developed architectures provide users with Information Technology (IT) services, applications, platforms, information, monitoring, control systems, etc., to manage services and administer resources between the edge and the core network. Currently, the devices and systems deployed on the edge layer can be classified into three types [2]: Mobile Edge Computing (MEC), Fog Computing (FC) and Cloudlet Computing (CC). MEC includes interactions with cellular networks and offers some cloud services in the cellular cell; FC adds a computing layer before the Cloud layer to store and process data; finally, CC is deployed on dedicated devices with robust computing capacities, commonly called micro data centers.
The main contribution of this paper is the design and implementation of a distributed wireless operating system dedicated to the dynamic management of computing resources on MANET devices [3]. The goal is to find a solution to dynamically (re-)organize computation tasks according to the available network resources and node status. The operating system uses two elements to create and deploy virtual resources: abstractions and primitives, which are managed by the Local Agent (LA), which resides as an instance on each physical node, and the Orchestrator (OR), which works on one or more nodes to monitor and deploy policies on the distributed operating system. This paper is organized as follows: Section 2 presents a review of existing solutions similar to the one proposed here. Section 3 introduces the reference models adopted for the system architecture and application workload. Section 4 describes the implemented prototype and the experimental demonstration of the proposed solution, showing its effectiveness. Finally, Section 5 presents the final observations and future work. Wireless Distributed Systems Distributed systems were developed to manage resources spread across different locations and computing devices through network links. Today, services and systems need mobility as an extra feature, in addition to cloud computing architectures that mitigate the problems of processing and latency. Since issues related to latency and computational load can arise, it is essential to manage workflows between applications and final devices. In this context emerges the concept of Edge Clouds [4,5], which have been proposed to improve the Quality of Experience (QoE). In this paper, we use the Wireless Distributed Computing (WDC) [6] concept as a framework to develop and validate the proposed S.O.V.O.R.A. model. WDC exploits wireless connectivity to share processing-intensive tasks among multiple devices. Network Management To manage a network, wired or wireless, many protocols, architectures, tiers and abstraction levels exist and are used to deliver services to users, and this makes network management and control complex. SDN proposes to decouple the control plane from the data plane of a network. With this architecture, the routers and switches become logical elements of the network provided by an external entity called the Network Operating System (NOS). This controller allows for the generation of abstractions that deploy orchestrated services at logical and virtual levels, allowing automation and control. In this sense, NFV is in charge of generating Virtualized Network Functions (VNFs) by separating the network functions from the hardware and offering them through virtualized services or on general-purpose servers [7][8][9]. This deployment of functions requires less hardware but includes more software abstractions for better traffic management. Distributed Architectures For this case study, the enabling technologies are wireless communication, embedded systems with high multiprocessing capabilities, and virtualization techniques, which allow for the development of a virtual system on a wireless cluster or network in order to provide systems with real-time data capture, processing and dissemination, information sharing between users and nodes, robust process control and resource management. In order to reach this goal, it is necessary to identify and analyze the distributed architectures that are used nowadays.
As a classical distributed architecture, cloud computing allows for ubiquitous, on-demand network access and resource-sharing capabilities that can be quickly and automatically provisioned. In this sense, customers only pay for what they use. The service models in cloud computing are divided into three categories: Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS). Orchestration services involve the dynamic deployment, administration, and maintenance of services on the mentioned platforms, according to customers' needs. On wireless networks, however, distributed architectures such as MANETs, Fog Computing, and IoT systems exist as study cases [10,11]. MANET Mobile Ad Hoc Network A MANET [12] is a set of wireless mobile devices (nodes) which can form a network without the support of any infrastructure or centralized control. MANETs are multi-hop networks in which a packet sent from a source to its destination must cross a path formed by two or more hops (nodes). Therefore, every node in a MANET has a dual behavior, acting as both router and host. MANETs are autonomous networks that can determine their own configuration parameters and recover from failures by themselves. Further, they are implemented to set up communications for specialized applications where there is no pre-existing infrastructure or the available one is not suitable for the operation's needs. There can be two types of MANETs: Wireless Mesh Networks (WMNs) [13] and Wireless Sensor Networks (WSNs) [14]. While in MANETs a node commonly functions as both router and host, WMN nodes are classified as mesh routers or mesh nodes. On the one hand, mesh routers have minimal mobility, provide access for normal nodes and mesh nodes, and can communicate with other mesh routers; besides, they are in charge of routing, bridging and network functions and do not have power limitations. On the other hand, mesh nodes can be stationary or mobile and, like MANET nodes, require the efficient use of their energy supply. WSNs are made up of a set of wireless sensor nodes that are usually deployed in hostile environments and are used for event detection (e.g., temperature and pressure measurement). These sensors can perform some processing of the information obtained and transmit the data through the network, which provides the final user with a better understanding of the current state of the environment. Compared to common wireless mobile devices, WSN nodes are smaller, less expensive, and have more limited hardware resources and power budgets. However, due to their operating nature, once a node has depleted or damaged its battery, it may never be retrieved. Fog Computing It is possible to deploy an architecture with heterogeneous devices and to extend the characteristics of the cloud to edge devices, avoiding network bottlenecks. Since the edge devices consume data and can provide elastic computing, they can be connected ubiquitously and share their resources, in many cases collaboratively, achieving the primary purpose of fog computing [15]. Fog computing is the relationship between edge devices and the network core (cloud) [16][17][18]. Fog computing resources are the same for all nodes (networking, computing, and storage), and in most cases they share the same logical abstractions of virtualization and multi-tenancy.
Fog computing is typically used for low-latency and geo-distributed applications (e.g., sensor networks and surveillance systems). Additionally, it is used in large-scale distributed control systems such as smart grids, smart buildings and smart farming. Inside the fog environment, tenants perceive the resources as dedicated. This is the result of sharing resources using virtualized file systems and a network infrastructure (e.g., Software Defined Networks (SDN)) [19]. In Reference [20], the author proposes a set of features needed to deploy services and applications within a fog environment, including node configuration, nodal collaboration, resource/service provisioning metrics, service level objectives, applicable networking system, and security. Fog computing can be implemented and designed with the addition of MANET features, such as the node/router duality of each node, without infrastructure, to exploit the computing resources of the fog environment and scale the scope to cloud computing. In References [21,22], architectures are proposed to deploy services and share resources. The main objective is to optimize the Quality of Service (QoS) of heterogeneous resource attributes and applications in what is known as a fog colony, similar to an ad hoc cluster. In Reference [23], the infrastructure-as-a-service approach is introduced for fog and cloud computing. The authors propose a resource pool manager to detect and resolve deadlock and manage resources within the fog environment; to achieve this objective, they employ so-called free-space fog resources. In our case, the model is oriented to prove the feasibility of using fog computing or an Ad Hoc Network as a cluster of resources under the mobile cloud paradigm. In Reference [24], another model was proposed that employs three tiers: the first, a Things tier, manages wireless sensors, actuator networks, and mobile devices. These devices send information to the Fog (second tier), which includes the fog nodes (switches and routers) that link the system to the Cloud (third tier). This approach implies more computation for complex applications that cannot be executed by a fog node alone. The authors present an algorithm called unit-slot optimization, based on Lyapunov's optimization technique, to balance the average response time, average cost, and average number of application losses (a three-way trade-off). These prototypes are based on cloud computing and edge network devices to reduce latency and improve resource time consumption. Control is reduced and system changes are limited; however, they can be managed by each cloud's control systems and edge devices. Our approach is a prototype based on this kind of architecture, intended to validate the feasibility of management and of creating virtual operating systems as Wireless Distributed Computing. In some cases, the most interesting element in a fog computing architecture is the orchestrator [25]. The orchestrator is a useful abstraction to manage and control distributed systems. It observes, decides, and acts through policies defined by the system administrator, an artificial agent, or some script. This kind of computing artifact, used at different tiers of the computing infrastructure and services [25], is commonly found in systems such as Software Defined Networks (SDN), Network Function Virtualization (NFV), and Cloud computing application resource management.
These three technologies converge in the same artifact: the orchestrator, as shown in Figure 1. In Reference [26], the development of this kind of computational artifact adds a new set of policies to manage fog nodes and interact with cloud nodes, and a framework for the management and orchestration of smart-city applications is presented, following the guidelines of the European Telecommunications Standards Institute (ETSI) and the NFV Management and Orchestration (MANO) architecture.
The fog orchestrator is responsible for managing the life-cycle of micro-services, monitoring and analyzing the system, generating an interface with the fog decision model, and registering new VNFs and Network Services (NSs). Another architecture used to deploy distributed wireless systems is the Wireless Sensor Network (WSN), a low-cost computing system that provides scalability in communication, architecture and resources. For the frameworks that manage the sensors and their communications, the theoretical solutions are based on algorithms that evaluate the performance and state of the sensors while they perform measurements simultaneously; a comparison of distributed algorithms is presented in Reference [27] using a simulation tool on two different network topologies. Operating Systems for Distributed Systems To deploy distributed systems such as IoT/Fog environments, it is common to use embedded systems as the main devices, and software and operating systems exist to manage the computing resources and communications on these devices. Some of them target general-purpose end devices used to deploy services and to create environments between end devices and services on the cloud or local servers. A taxonomy for IoT operating systems is proposed in Reference [28], based on the amount of computing resources, communication features, energy consumption, multi-threading capability and supported programming languages. An example of an IoT operating system is RIOT [29], which is based on a microkernel architecture to manage computing resources on IoT devices, creating resource abstractions on each board. The operating system maps the CPU and creates a software abstraction based on APIs for building modular applications, a compact and modular model that meets the requirements of IoT environments. However, it offers no distributed approach to share and manage resources across an IoT environment or network cluster; instead, it enables the optimal use of board resources and exploits the device's capabilities. Other approaches for IoT architectures include frameworks that help manage some resources or control the processing in some of the system's member nodes. A case in point is the prototype of the service-oriented middleware CoTWare [30] for the management of large-scale IoT applications, which exposes resources as services to deploy applications; this prototype was built with Arduino devices and three computers to simulate service calls under the RESTful API model. Another framework, based on a layered architecture for IoT, proposes the use of Named Data Networking (NDN) for edge architectures and cloud computing [31]. In the NDN architecture, content is treated as the first-class citizen rather than hosts, and names are used for network-layer communication instead of IP addresses.
That paper shows that, to solve the challenges of real-time services and to improve their performance, a combination of the well-known innovative technologies Named Data Networking and Edge Cloud Computing can reduce the latency between services and resolve user requests. Newer approaches are based on container technologies such as Docker to deploy distributed services joined by a middleware, as shown in Reference [32], which adds concepts such as real-time containers, presented in Reference [33]. Finally, the approach most similar to this paper was simulated with the GNS3 simulator on a Fog Computing architecture based on virtual objects, created in the simulator as computing resources, sensors and applications; the model and results are included in Reference [34]. S.O.V.O.R.A. Wireless Distributed Systems One of the main problems within the architectures and structures presented so far is data management, i.e., how can one control and improve the operation of a system that has distributed behavior? It is relevant to highlight that a distributed system operates at different stations. This means that the resources are on different machines and users are not aware of the interactions between them [35]. These distributed characteristics bring a variety of challenges for information management and for monitoring the overall status of the system. It is key to identify which services require information consistency, availability, and partition tolerance [36]. Architecture The architecture used in the prototype is a microservices architecture. This software architecture provides autonomy, isolation, and the definition of a task for each service or software module, so the set of microservices composes the overall software architecture of the Distributed Wireless Operating System. The microservices architecture is based on a simple concept: the creation of a system from a set of services, each of them small, independent, isolated, scalable, and resilient to failure, with its own data. The decomposition of the system into discrete and isolated subsystems relies on well-defined protocols. The key factor in microservices is isolation, which is a prerequisite for resilience and elasticity. It requires asynchronous communication to decouple the boundaries of microservices in time, allowing for concurrency and distribution. In addition, it requires space that allows the services to move around the system (mobility). Figure 2 shows the set of microservices defined for the local and OR agents to create the minimum components to deploy S.O.V.O.R.A. on a MANET: the client and multi-threaded servers for wireless communication and message passing, the module to monitor resources, a device monitor, log information, and the execution of commands from the orchestrator or the user. Table 1 shows the commands that are available to users of the wireless operating system. The Orchestrator The OR is a process that runs periodically on the fog controller node(s) in user space on top of the OS. As a centralized controller, its main tasks are: (i) to collect status information from each node to build an overview of the distributed system's status, and (ii) to send the commands that compose the workload or user requests to the distributed applications. The Ad hoc mode supports communications between the local agents and the applications on the system.
It has the following microservices: • Server: a microservice that creates a multi-threaded server for the communication between nodes and applications. • Client: a microservice managed by the local agent that creates a multi-threaded client to collect information about the node applications, send messages to deploy applications, and set the throughput and the resource consumption of each node. • Alfred Service: a microservice used as a module to enable the node discovery service on the network through the B.A.T.M.A.N. protocol, which allows signaling and identifying the orchestrator on the network. • Server Apps: a microservice that receives the information or results from the applications and sends them to the user or destination node. The orchestrator holds the policies together with the information about the current state of the system and the desired state. To reach the desired state, the task distributor microservice programs the tasks via the scheduler and the network manager, which has the information about the network state and the identification of the nodes. The resource manager is a module with microservices to monitor and assign computing resources in the network, deploying more services among MANET clusters. Microservices are applications, containers (Docker containers used to deploy distributed applications), networks, and computing resources. The Local Agent Similarly to the OR, the LA is a process that runs periodically in user space on top of the OS, as one or multiple instances on the node, to manage different resources or applications. The LA interacts with the OS to collect information related to the status of hardware resources, running applications, and the MANET. Resource and workload control are performed based on that information. As shown in Figure 3, the LA has a set of microservices that allow for interaction with the running applications and carry out monitoring and control at the application level, estimating the computing resource consumption of the node, communicating periodically with the OR, sharing the status of the node and receiving execution requests. The modules are as follows (see the sketch after this list): • Ad hoc Mode: this module enables the communication between the LA and the OR to send and receive commands or log information with the Alfred service. • Monitor App: a microservice that monitors application performance; a controller links the LA to applications designed for use in the system. • Monitor Resources: a microservice that collects the node's resource state (CPU, Memory, Storage, I/O) and application states, which the LA sends to the OR. • Monitor Device: a microservice, executed as a thread, that shows this information in real time. • Log: a microservice that stores all logging information about resources, messages, network interface state, and local and global interactions in each node or instance. • Command Exec: a microservice that receives messages from the users or the OR to execute, stop, or exit a distributed application.
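To make the LA's monitoring role concrete, the following is a minimal Python sketch (the paper does not state its implementation language; psutil for resource sampling, the node label, the orchestrator address, and the reporting period are all assumptions of this sketch, not details from the prototype) of how a local agent instance could sample node-level resources and report them to the OR's multi-threaded server:

import json
import socket
import time

import psutil  # third-party sampling library assumed by this sketch

OR_ADDRESS = ("192.168.1.10", 5000)  # hypothetical orchestrator endpoint
NODE_LABEL = "node-01"               # hypothetical node label

def sample_resources():
    # Node-level (global) view of CPU, memory and storage, as collected
    # by the Monitor Resources microservice.
    return {
        "cpu_percent": psutil.cpu_percent(interval=1),
        "mem_percent": psutil.virtual_memory().percent,
        "disk_percent": psutil.disk_usage("/").percent,
    }

def report_loop(period_s=5):
    # Periodically push the node status to the OR's multi-threaded server.
    while True:
        status = {"node": NODE_LABEL, "resources": sample_resources()}
        with socket.create_connection(OR_ADDRESS, timeout=2) as sock:
            sock.sendall(json.dumps(status).encode("utf-8"))
        time.sleep(period_s)

if __name__ == "__main__":
    report_loop()

In the actual prototype, an equivalent report would also carry per-container measurements and application states, as described above.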
Algorithm 1 shows the LA routine and the tasks carried out to keep the system in operation and in communication with the member nodes of the system and the OR. The inputs for the algorithm are the main services of the framework, such as communication in the Ad hoc mode, the OR discovery service, and the communication servers for the different entities signaled in the system. Subsequently, the applications are deployed in Docker containers, where the application controller is located and deployed, considering the number of cores available for execution. The main process occurs on lines 5 to 28, where arrays recording the history of resource consumption in the node are maintained. For example, the %CPU is one resource saved as a value. This task is done globally at the node level and locally at the container level. In the same way, the performance of the running applications is monitored. These values are stored and sent to the OR (lines 29 to 31). In this process, it is important to highlight the execution of policies (lines 1-4), where a server thread waits for the instructions to execute the application policies contained in the LA. Network In the proposed empirical model, the MANET was deployed with the B.A.T.M.A.N. protocol [37], which works in the ad hoc mode of the IEEE 802.11 standard set on the wireless NIC. The self-configuration is based on the avahi-daemon for setting an IP address dynamically. The discovered node uses the ALFRED messaging service [38], which publishes the information of each node in the MANET as messages with an ID. Only nodes in the same cluster or ESSID can reach the messages on the network. While each node has a label to link and register with the OR, the LA enables the ALFRED protocol to send information over the MANET (Figure 4). The messages that pass between the nodes travel as MANET control messages, while the ALFRED protocol messages carry a JSON file with information about the node label, local states, available resources, application states, and network status. This information is collected by the local agent and sent to the OR to be analyzed and saved in its repository. The information collected is used to manage resources, applications, and tasks in the distributed wireless OS. The B.A.T.M.A.N. routing protocol is useful to reduce the latency and to know the location of the nodes according to the number of hops between the nodes and the MAC address of the wireless NIC.
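As an illustration of the kind of JSON status record described above, the following sketch builds one such message; every field name here is hypothetical, since the paper does not publish the exact schema used over ALFRED:

import json

# Illustrative node status record of the kind exchanged via ALFRED;
# the field names are hypothetical, not the prototype's exact schema.
status = {
    "node_label": "node-03",
    "state": "active",
    "resources": {"cpu_free_percent": 62.5, "mem_free_mb": 512,
                  "storage_free_mb": 2048},
    "applications": [{"name": "fft", "container": "app-fft",
                      "state": "running"}],
    "network": {"tq": 245, "hops_to_orchestrator": 2},
}
print(json.dumps(status, indent=2))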
The second phase is controller-LA communication, as shown in Figure 5. This communication is made from the LA to the distributed application; in this case, the application is located in a Docker container together with the controller, which links the applications to monitor their performance. Thanks to this communication, it is also possible to adjust the operation speed of the applications and, in this sense, the amount of resources used by each application. This scheme allows for monitoring at three levels: the hardware, the state of the mapped resources, and the applications deployed in the system. The node discovery module works in the ad hoc mode using the messaging service provided by ALFRED. To deploy applications, the Community Edition (CE) Docker engine and the micro-services architecture are used to create, deploy, and monitor applications. The modular model allows for the transition from edge networks to cloud computing.
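One plausible way to realize the "adjust the operation speed of the applications" step with standard Docker CE facilities is to set and later update a container's CPU quota; in this sketch the image and container names are hypothetical, while docker run --cpus and docker update --cpus are standard Docker CLI options:

import subprocess

# Launch a containerized application with an initial CPU quota; the image
# and container names are hypothetical, --cpus is a standard docker flag.
subprocess.run(
    ["docker", "run", "-d", "--name", "app-fft", "--cpus", "1.0",
     "sovora/fft"],
    check=True,
)

# Later, the controller can throttle or boost the running application by
# updating the container's CPU quota in place.
subprocess.run(["docker", "update", "--cpus", "0.5", "app-fft"], check=True)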
The second layer is composed of three elements: the communication module, which connects to the ad hoc kernel module and generates multi-threaded servers and client services in the MANET; the load balancer, which defines the abstractions of the control and data planes through flow-control strategies; and the local agent, which manages the other two elements of the layer. These elements are present in each node (at minimum one instance per node), and the local agent also performs the execution and monitoring of tasks and service requests from users. Monitoring local resources and application states is one of the basic services, and the OR executes this feature. The working layer is deployed in a single physical node for the control of the operating system (it is possible to deploy one or more ORs per MANET cluster). The last layer creates a simple distributed surveillance application with a monitoring service inside, with some micro-services such as monitoring and web services deployed in Docker containers. The prototype contains two artifacts for its operation, the LA agent and the OR agent, both working over a MANET. Considering that the nodes can work as routers and clients, the artifacts use the Better Approach To Mobile Adhoc Networking (B.A.T.M.A.N.) as the routing protocol, giving this model the ability to mitigate problems associated with coverage and hidden nodes, as well as allowing node discovery. The LA acts in two roles: first, as a reactive agent that executes the resource control policy; this allows the model to control the applications to be deployed. In this case, each application is in a container and has a controller for its deployment. The model allows for having one or more local agents on one device to control resources and to monitor the applications. Second, as a deciding agent, it decides to apply the policies received across the MANET; the local agent decides the amount of resources assigned to the application container for each application. The second element, the OR, is an agent that can be deployed on a single node or on each node of the network. It decides whether an application is executed and maintains the load-balancing policy for the devices on the network. The agent has information about the global and local states of each device (CPU throughput, memory consumption, available bandwidth, network availability, and application performance). This information is periodically received from the local agents that report the status of each node. In this environment, the OR model offers the possibility of monitoring different nodes and applications, allowing for the control of variables such as energy consumption, application distribution, and computing resources. These experiments use only one instance of the orchestrator agent due to the limited number of physical nodes available to manage the network (the experiment has six physical nodes). The OR also allows for the control of the distributed wireless operating system and its interactions, e.g., communication between nodes, allocation and assignment of computing resources across the whole cluster, and distribution and management of the throughput of the applications on each node. Experiment and Results To validate this work, the abstraction of a distributed wireless operating system was implemented, which dynamically manages computing resources on devices in a MANET. The experiment and test bed were based on the embedded devices referenced in Table 2, and the goal was to demonstrate the feasibility and viability of this approach.
These devices run the LA and the OR alongside the distributed applications in Docker containers (launched and monitored by the LA). Global states are sent to a multi-threaded server in the OR and are stored for use when tasks are distributed in the system (the workload). The first part of the deployment is the creation of the MANET and the discovery of nodes and resources. The OR then processes the user's requests as a workload for the wireless OS. The test application was designed as a distributed video-conferencing application segmented into containers, as shown in Figure 6. Each container holds a micro-service of the application processing, as shown in Table 3, drawn from the MiBench benchmarking suite [39]. In addition, each application is launched on a core or on a different node depending on the OR policy. The throughput is set in the same way to manage and distribute workloads evenly across the network nodes, and the results are sent back to the source or to a client for display. This model makes it possible to control computing resources such as energy and to reduce the processing time of distributed applications. Power was measured with the SmartPower2 meter [40], whose granularity and precision were sufficient to build a power-consumption model for each device on the MANET.

Proposed Testbed

To characterize the energy consumption, an experimental validation was carried out with the MiBench benchmarking suite, using the algorithms listed in Table 3 to measure the energy consumption at different CPU rates, controlled through the processor frequency. The linearization of the output function can be seen in Figure 7 (a sketch of such a fit is given below). This output function is the basis of the OR policy for selecting the destination node, given the task to be performed on the network or the application selected by the user.

In the same way, communication in ad hoc mode was validated through the proactive routing protocol B.A.T.M.A.N. Data transmissions (14 MB per test) validated the routes and the TQ (Transmit Quality) value as an additional factor for selecting the destination node, while the power consumed by each transmission was measured; the results are shown in Table 4 for the network protocol and in Table 5 for the energy on an RPi 3.
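A minimal sketch of how the linearized power model of Figure 7 could be fitted from the SmartPower2 measurements; the frequency and power sample points below are fabricated for illustration, not the paper's data.

```python
import numpy as np

freq_mhz = np.array([600, 800, 1000, 1200, 1400])   # CPU frequency steps
power_w  = np.array([1.9, 2.3, 2.8, 3.4, 4.1])      # measured mean power (W)

# Least-squares linear fit: P(f) = slope * f + intercept.
slope, intercept = np.polyfit(freq_mhz, power_w, deg=1)

def predicted_power(f_mhz: float) -> float:
    """Linearized power model the OR could use when ranking candidate nodes."""
    return slope * f_mhz + intercept

print(f"P(f) = {slope:.4f}*f + {intercept:.2f} W; "
      f"e.g. P(1000 MHz) = {predicted_power(1000):.2f} W")
```

One such model would be fitted per device type, since the Odroid and the Raspberry Pi boards draw power differently at the same frequency.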
The OR monitors the status of the network nodes and their resources, creating a virtual abstraction that maps the resources of each node and the amount of free resources available to the distributed wireless OS, and manages them. The test policy is deliberately simple but can be customized. For the purposes of the experiment, the available resources reported by each local agent are mapped and stored in the orchestrator, which compares them against the energy-consumption model, the TQ value, and the processor load of each node. Based on this information, it creates a workload and dispatches it according to the state of the resources and the available route for delivering the task (a sketch of this node-selection step is given at the end of this subsection).

Logs and Events

The OR's main goal is to manage the resources available in the MANET cluster fairly, according to the users' requests and the global status of the system. Users are given a simple terminal interface for sending requests to be processed by the OR. Figure 8 shows the output of the information collected by the OR about the network, the OS resources available, and the tasks running on each node.

Based on the events and the information generated by all nodes and sent through the local agents (Figure 9), the operating-system abstraction generates an event log (Figure 10) with data on the connectivity state, application status, and workload of each local agent in the system. It is important to note that, given the number of devices, the experiment can be performed with a single orchestrator and a local-agent instance in each member node of the ad hoc network. This information is local and is sent to the orchestrator, which is signaled through the network as the only member holding the authoritative view of the system, establishing consensus between the members of the network and the operating system for its operation. If there are more orchestrators, the Raft consensus algorithm [41] is used to guarantee the integrity and availability of the information among the local agents that are members of the network.
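To make the dispatch policy described at the start of this subsection concrete, the sketch below ranks candidate nodes by combining the route's TQ value, the current CPU load, and the predicted power draw. The score shape and weighting are assumptions; the paper only names these three inputs.

```python
from dataclasses import dataclass

@dataclass
class NodeState:
    label: str
    cpu_load: float   # 0.0-1.0, reported by the local agent
    tq: int           # B.A.T.M.A.N. transmit quality, 0-255
    power_w: float    # predicted power at the current frequency

def score(n: NodeState) -> float:
    # Higher is better: good route, idle CPU, low predicted power draw.
    return (n.tq / 255.0) * (1.0 - n.cpu_load) / n.power_w

def pick_destination(nodes: list[NodeState]) -> NodeState:
    return max(nodes, key=score)

nodes = [NodeState("node-2", 0.30, 240, 2.8),
         NodeState("node-4", 0.75, 255, 2.4)]
print(pick_destination(nodes).label)   # -> node-2, despite the worse route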
Experiment Results

With the test bed running, we executed applications on the system as operating-system tasks and measured the energy and application throughput on each node. Figure 11 shows the energy consumption of the Odroid node measured with the SmartPower2 meter. The peak corresponds to the maximum CPU consumption of the node under a simple task; the measurement then stabilizes through the orchestrator's messages and the parameters sent to the local agent on that node. In this way, the CPU heartbeat resource available in the MANET is managed and its relationship with energy consumption is controlled. The peaks occur at the beginning of the execution of each application, since an application always tries to use 100% of the resources. The OR sets the processing rate and sends the desired throughput to the LA, which enforces it on the Docker container.

The first study case uses one node, deploying six containers across the cores of the Odroid embedded system (Figure 12). This parallel application can be executed on the Odroid thanks to its capabilities (Table 2): four little-architecture cores, four big-architecture cores, and a variable frequency on each core. This test demonstrates the execution of concurrent applications, in this case the set of MiBench algorithms as a user-created workload that does not impose a heavy processing load. This set of concurrently running applications models a distributed application, as a paradigm for deploying services and applications on the wireless distributed operating system; traditional application models do not allow services to be deployed in a distributed way.

The second experiment used five nodes (four Raspberry Pi 3 and one Raspberry Pi Zero W) running different applications distributed among them. Nodes 2 and 4 ran the SHA hashing algorithm, nodes 1 and 5 performed image processing using JPEG encoding/decoding, and node 3 performed ADPCM audio coding. After processing, the results are sent through the MANET to the LA, which delivers them to the final user. Figure 13 shows that the execution on each core of each node begins at the maximum throughput and then converges to the throughput indicated by the OR; for instance, node 2 begins at 16 heartbeats per second and converges to 10 heartbeats per second, and the other nodes likewise reach the performance set by the orchestrator policy (the sketch below illustrates such a convergence loop).
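A minimal sketch of a heartbeat-rate control loop of the kind just described, assuming sleep-based pacing as the actuator; in the actual testbed the LA throttles the container's CPU quota instead, so this is only a behavioral illustration.

```python
import time

def run_with_target(work, target_hb_per_s: float, iterations: int) -> None:
    """Pace `work` so it emits roughly target_hb_per_s heartbeats per second."""
    period = 1.0 / target_hb_per_s
    for _ in range(iterations):
        t0 = time.monotonic()
        work()                            # one heartbeat's worth of work
        elapsed = time.monotonic() - t0
        if elapsed < period:              # faster than the OR's target:
            time.sleep(period - elapsed)  # give the spare time back

# Example: converge a hypothetical workload to the OR's 10 heartbeats/s.
run_with_target(lambda: sum(i * i for i in range(50_000)), 10.0, 50)
```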
If a new service is required, information about the status of the resources of each node is sent to the OR, which is the source of the application distribution. Figure 14 shows the constant energy consumption of node 2 in the second experiment when a distributed application (containers) is deployed.
To dispatch a task, the orchestrator sends messages every second to the neighboring nodes and local-agent instances to validate their status, based on the CPU consumption and the node's energy measurement; in the same way it validates the resources available for assigning the task, and this exchange, together with the orchestrator's clock, provides the measure of time in the operating system (a sketch of this once-per-second validation loop is given below).

Figures 15-17 show the cases in which an application was deployed on each core available in each node, with its throughput controlled to manage energy and CPU consumption. This process allows the management of the resources distributed in a MANET in a stochastic and dynamic medium. With the same model, it is possible to manage other distributed resources such as memory, storage, and I/O devices. The OR verifies the global performance and tries to make the throughput converge on all nodes, optimizing resource consumption and the workload distribution among the MANET nodes. Figure 18 shows the performance results of each node in the distributed wireless OS.
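A sketch of the orchestrator's once-per-second signaling loop. The status read-back would go through ALFRED in the real system (stubbed out here with canned data), and the eligibility thresholds are illustrative assumptions.

```python
import time

def query_agent(label: str) -> dict:
    # Placeholder for an ALFRED read (e.g. `alfred -r <id>`); returns a
    # canned status here so the loop is runnable without a mesh.
    return {"cpu_percent": 35.0, "power_w": 2.8}

def orchestrator_loop(node_labels: list[str], ticks: int) -> None:
    for tick in range(ticks):              # each tick doubles as the OS clock
        for label in node_labels:
            try:
                status = query_agent(label)
                # Nodes under the (assumed) CPU and power limits stay
                # eligible to receive new tasks on this tick.
                ok = status["cpu_percent"] < 90 and status["power_w"] < 4.0
            except Exception:
                ok = False                 # unreachable -> failure detector
            print(f"[t={tick}s] {label}: {'eligible' if ok else 'excluded'}")
        time.sleep(1.0)

orchestrator_loop(["node-1", "node-2", "node-3"], ticks=3)
```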
With this test scheme, we have demonstrated the possibility of implementing a distributed wireless operating system with two abstractions, the local agent and the orchestrator, which can exist as instances on the nodes of an ad hoc network. In this case, the nodes are controlled by the orchestrator agent based on an energy-consumption policy, which is used to distribute loads through the wireless network, together with the information sent by each node on the state of its computing resources and its load at the application level.

This method makes it possible to generate an abstraction of the operating system in order to manage distributed computing resources and to create a model for distributed applications, all of them running on a MANET, although these models can be replicated on other wireless infrastructures and technologies. Our first approach used the IEEE 802.11x ad hoc mode, taking advantage of proactive routing protocols as a mechanism for discovering the nodes, routes, and services available in S.O.V.O.R.A., without the traditional overhead of the IEEE 802.11 infrastructure mode.

Conclusions

Wireless distributed operating systems are a paradigm that must accommodate the stochastic and dynamic behavior of the network, due to the mobility and shared-channel conditions of wireless links. In our approach, the nodes of the network form a pool of computing resources available to users for deploying applications and services as a distributed operating system. For the deployment, this system requires a model of distributed applications in order to test the complete scenario of a wireless distributed operating system. Wireless distributed OSs have potential in environments such as IoT, fog computing, and edge computing, as they allow load balancing and the sharing and management of computing resources under the mobile-cloud model.

Our approach uses two kinds of agents, the Local Agent and the Orchestrator, based on a scheme that can check the local and global data of each node in order to control the distributed computing resources of the network, in this case a MANET. Thanks to its auto-configuration properties, this kind of network allows services to be deployed without infrastructure and mitigates some routing problems. The continuous communication and signaling supplied by the proactive B.A.T.M.A.N. routing protocol enables the distributed-application approach and the deployment of isolated micro-services in containers on each node as instances, setting the amount of resources needed to deploy applications on the different nodes of the distributed wireless OS. In the same spirit, B.A.T.M.A.N.-adv works at the second network layer, which reduces the workload of the traditional network layers when compared with the IEEE 802.11 infrastructure mode.
The proposed model allows services such as node discovery, node signaling, failure detection, and instant messaging to be deployed, thanks to the continuous updating of routes across the network and the dissemination of route and system information to all nodes every second. These features make it possible to distribute tasks, to know the status of the resources on the network, to create software and hardware abstractions by passing messages between the agents (LA and OR) in ad hoc mode, and to deploy distributed applications.
Comprehensive Analysis and Identification of Key Driver Genes for Distinguishing Between Esophageal Adenocarcinoma and Squamous Cell Carcinoma

Background: Esophageal cancer (EC) is one of the deadliest cancers in the world. However, the mechanism that drives the evolution of EC is still unclear. On this basis, we identified the key genes and molecular pathways that may be related to the progression of esophageal adenocarcinoma and squamous cell carcinoma, in order to find potential markers or therapeutic targets. Methods: GSE26886 was obtained from the Gene Expression Omnibus (GEO) database. The differentially expressed genes (DEGs) among normal samples, EA, and squamous cell carcinoma were determined using R software. Then, the potential functions of the DEGs were determined using the Database for Annotation, Visualization and Integrated Discovery (DAVID). The STRING software was used to identify the most important modules in the protein-protein interaction (PPI) network. The expression levels of hub genes were confirmed using the UALCAN database. Kaplan-Meier plotters were used to confirm the correlation between hub genes and outcomes in EC. Results: In this study, we identified 1,098 genes induced in esophageal adenocarcinoma (EA) and esophageal squamous cell carcinoma (ESCC) and 669 genes reduced in EA and ESCC, suggesting that these genes may play an important role in the occurrence and development of EC tumors. Bioinformatics analysis showed that these genes were involved in cell cycle regulation and the p53 and phosphoinositide 3-kinase (PI3K)/Akt signaling pathways. In addition, we identified 147 induced genes and 130 reduced genes differentially expressed between EA and ESCC; the expression patterns of both ESCC and EA differed from those of the control group. By PPI network analysis, we identified 10 hub genes, including GNAQ, RGS5, MAPK1, ATP1B1, HADHA, HSDL2, SLC25A20, ACOX1, SCP2, and NLN. TCGA validation showed that these genes were differentially expressed between EC and normal samples and between EA and ESCC. Kaplan-Meier analysis showed that MAPK1, ACOX1, SCP2, and NLN were associated with overall survival in patients with ESCC and EA. Conclusions: In this study, we identified a series of DEGs between EC and normal samples and between EA and ESCC samples, together with 10 key genes involved in the EC process. We believe that this study may provide new biomarkers for the prognosis of EA and ESCC.

INTRODUCTION

According to the cancer statistics of 2018, the mortality rate of esophageal cancer ranks sixth among all tumors worldwide (Bray et al., 2018; Gu et al., 2020b). Esophageal carcinoma (EC) is divided into esophageal adenocarcinoma (EA) and esophageal squamous cell carcinoma (ESCC). ESCC mostly occurs in the upper and middle portions of the esophagus and is related to alcohol and nicotine abuse. ESCC is particularly prominent in China, accounting for about 88% of EC (Wang et al., 2014). Esophageal adenocarcinoma is a highly invasive histological subtype that is dominant in western countries (Abbas and Krasna, 2017). EA occurs in the lower portion of the esophagus and arises as a consequence of persistent gastroesophageal reflux from areas with specialized intestinal metaplasia in Barrett's esophagus (Gindea et al., 2014). The 5-year survival rate is as low as 20% (Abbas and Krasna, 2017). At present, the treatment methods for the two ECs are similar, including chemotherapy, radiotherapy, and surgery, of which surgery is the most common (Kelsen et al., 1998).
Identifying biomarkers for EC development, progression, and prognosis is essential for understanding EC and improving clinical decision-making. In the past few decades, a large number of studies have revealed potential mechanisms regulating EC progression. For example, N-myc-downregulated gene 4 (NDRG4) plays a tumor-suppressive role in EA (Cao et al., 2020). Inhibition of DCLK1 can reduce the incidence of EC and improve its chemosensitivity by inhibiting β-catenin/c-myc signaling (Whorton et al., 2015; Zhang et al., 2020). The Notch signaling pathway mediates differentiation in Barrett's esophagus and promotes its progression to adenocarcinoma (Kunze et al., 2020). An abnormal WNT5A/ROR2 signaling pathway is a characteristic of Barrett-related EA (Lyros et al., 2016). At the same time, multiple bioinformatics analyses of EC have been carried out based on RNA sequencing and microarray datasets (Zhang H. et al., 2019). For example, a total of 345 DEGs were identified by Zhang H. et al. (2019) between normal esophageal and ESCC samples, enriched in Kyoto Encyclopedia of Genes and Genomes (KEGG) pathways including endocytosis, pancreatic secretion, and fatty acid metabolism. However, the regulatory mechanisms in EC are still not clear. In this study, we downloaded GSE26886 (Wang et al., 2013) from the Gene Expression Omnibus (GEO) database. DEGs among esophageal squamous epithelium, Barrett's esophagus, EA, and ESCC were analyzed. Then, the KEGG pathways and the protein-protein interaction (PPI) network of the DEGs were analyzed. Finally, survival analysis of the identified core genes was performed. These core genes may represent novel biomarkers and therapeutic targets for esophageal cancer.

GEO Gene Expression Data

In this study, we aimed to identify differentially expressed biomarkers able to distinguish EA from ESCC. On screening the GEO datasets, only GSE26886 included all four types of EC-related samples (healthy controls, Barrett's esophagus, EA, and ESCC) and was thus selected for further analysis. GSE26886 (Wang et al., 2013) was obtained from the GEO database. A total of 69 frozen specimens were collected, including 19 healthy controls, 20 Barrett's esophagus, 21 EA, and 9 ESCC.

Data Processing and DEGs Filtering

The Database for Annotation, Visualization, and Integrated Discovery (DAVID) 6.8 (Huang da et al., 2009) was used to analyze the GO functions and KEGG pathways of the integrated DEGs (Shi et al., 2018b; Gu et al., 2020d, 2021). GO terms and KEGG pathways with P < 0.05 were selected as enriched functions (Gu et al., 2020c).

PPI Network Analysis

The protein-protein interaction network was constructed with data from the STRING online database, a platform that reveals protein interactions and supports functional analysis (Shi et al., 2018a; Shi X. et al., 2020). The most important modules in the PPI network were identified with the Molecular Complex Detection (MCODE) plugin using the following criteria (Shannon et al., 2003): degree cutoff = 2, node score cutoff = 0.2, and K-core = 2. Then, the GO functions and KEGG pathways of the genes in these modules were analyzed using DAVID, with statistical significance set at P < 0.05.

Validation of Hub Genes in EC

UALCAN data were analyzed to compare the expression of hub genes among esophageal squamous epithelium, Barrett's esophagus, EA, and ESCC (Chandrashekar et al., 2017). Gene Expression Profiling Interactive Analysis (GEPIA) (Tang et al., 2017; Gu et al., 2020a) was used to analyze the overall survival curve of each key gene, where P < 0.05 was considered statistically significant.
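The study identifies its DEGs in R; as an illustrative stand-in, the following Python sketch performs the same kind of two-group screen with Welch t-tests and Benjamini-Hochberg correction. The thresholds and the random demo data are assumptions, not the paper's actual pipeline.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

def find_degs(expr_a, expr_b, genes, alpha=0.05, min_abs_log2fc=1.0):
    """expr_a, expr_b: (genes x samples) log2 expression matrices."""
    # Per-gene Welch t-test between the two sample groups.
    _, pvals = stats.ttest_ind(expr_a, expr_b, axis=1, equal_var=False)
    # Benjamini-Hochberg FDR correction across all genes.
    reject, padj, _, _ = multipletests(pvals, alpha=alpha, method="fdr_bh")
    log2fc = expr_a.mean(axis=1) - expr_b.mean(axis=1)
    keep = reject & (np.abs(log2fc) >= min_abs_log2fc)
    return [(g, fc, p) for g, fc, p, k in zip(genes, log2fc, padj, keep) if k]

# Demo on pure noise (group sizes echo the 9 ESCC / 19 control samples).
rng = np.random.default_rng(0)
a = rng.normal(size=(100, 9))
b = rng.normal(size=(100, 19))
genes = [f"gene_{i}" for i in range(100)]
print(len(find_degs(a, b, genes)))   # ~0 hits expected on random data
```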
Identification of DEGs

The GSE26886 datasets were used to compare gene expression among the different types of EC. First, 2,667 genes were identified as induced and 2,106 genes as reduced in ESCC compared to esophageal squamous epithelium samples (Figures 1A,C). Meanwhile, 2,532 genes were identified as induced and 1,468 genes as reduced in EA compared to Barrett's esophagus samples (Figures 1B,D). Finally, we revealed 1,098 common induced genes in both EA and ESCC (Figure 1E) and 669 common reduced genes in both EA and ESCC compared to normal samples (Figure 1F), suggesting that these genes may have a crucial role in the tumorigenesis and progression of both EA and ESCC. Of note, we also found 1,669 ESCC-specific upregulated, 1,434 EA-specific upregulated, 1,437 ESCC-specific downregulated, and 799 EA-specific downregulated genes, further confirming that there are significant differences in pathogenesis between EA and ESCC (Figures 1E,F).

Bioinformatics Analyses of Common DEGs in EA and ESCC

The Database for Annotation, Visualization and Integrated Discovery was used for bioinformatics analysis. GO function analysis showed that the common induced genes were related to mitotic chromosome condensation, spindle organization, chromosome segregation, negative regulation of cell migration, RNA processing, sister chromatid cohesion, protein SUMOylation, transcription, DNA replication, extracellular matrix organization, cellular response to DNA damage stimulus, and cell division (Figure 2A). The common reduced genes were related to the flavone metabolic process, flavonoid biosynthetic process, negative regulation of cellular glucuronidation and the fatty acid metabolic process, flavonoid glucuronidation, serine/threonine kinase activity, substantia nigra development, protein stabilization, vesicle-mediated transport, and cell-cell adhesion (Figure 2C).

Identification of DEGs Between EA and ESCC

In order to reveal an expression signature able to distinguish EA from ESCC, we analyzed the differential expression of genes between the two. We revealed 857 induced genes and 880 reduced genes in EA compared to ESCC samples (Figures 3A,B).

Bioinformatics Analyses of DEGs Between EA and ESCC Samples

GO function analysis showed that the induced genes in ESCC were related to telomere capping, nucleosome assembly, telomere organization, DNA-templated transcription initiation, and chromatin silencing at ribosomal DNA (rDNA) (Figure 4A). KEGG pathway analysis showed that the induced genes in ESCC were related to the regulation of pluripotency of stem cells and to FoxO, Rap1, Hippo, and PI3K-Akt signaling (Figure 4B). GO function analysis showed that the reduced genes in ESCC were related to fatty acid degradation, fatty acid metabolism, mucin-type O-glycan biosynthesis, N-glycan biosynthesis, peroxisome, Fc gamma R-mediated phagocytosis, metabolic pathways, and endocytosis (Figure 4C). KEGG pathway analysis showed that the reduced genes in ESCC were related to carbohydrate transport, protein N-linked glycosylation, COPII vesicle coating, O-glycan processing, endoplasmic reticulum (ER) to Golgi vesicle-mediated transport, cytoskeleton organization, the carbohydrate metabolic process, and cell-cell adhesion (Figure 4D).
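The "common" and "specific" gene lists above are simple set operations on the per-comparison DEG lists; the sketch below shows the idea with placeholder gene names.

```python
# Placeholder gene sets standing in for the per-comparison DEG lists.
up_escc = {"gene_A", "gene_B", "gene_C"}   # induced in ESCC vs normal
up_ea   = {"gene_A", "gene_B", "gene_D"}   # induced in EA vs normal

common_up     = up_escc & up_ea   # analogous to the 1,098 common induced genes
escc_specific = up_escc - up_ea   # analogous to the 1,669 ESCC-specific genes
ea_specific   = up_ea - up_escc   # analogous to the 1,434 EA-specific genes

print(sorted(common_up), sorted(escc_specific), sorted(ea_specific))
```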
Identification of Hub Tumor Progression Genes Between EA and ESCC

Finally, we identified 148 common induced genes that were also differentially expressed between EA and ESCC (Figure 5A) and 131 common reduced genes that were also differentially expressed between EA and ESCC (Figure 5B). To confirm the expression of these hub genes, we analyzed The Cancer Genome Atlas (TCGA) dataset. As expected, we found that 47 common reduced and 49 common induced genes were also differentially expressed in EC samples compared to normal samples in the TCGA database (Figures 5C,D).

Validation of Hub Genes and Survival Curve Analysis

Furthermore, we confirmed the expression levels of the 10 hub genes using the TCGA dataset. The results showed that GNAQ, SCP2, RGS5, MAPK1, ATP1B1, SLC25A20, HADHA, HSDL2, and ACOX1 were reduced, while NLN was significantly induced, in EC samples compared to normal tissues (Figure 7A). The areas under the curve (AUC) for distinguishing EC samples from normal tissues were 0.8835 for GNAQ (Figure 7B), 0.7951 for RGS5 (Figure 7C), 0.8432 for MAPK1 (Figure 7D), 0.6958 for ATP1B1 (Figure 7E), 0.8108 for HADHA (Figure 7F), 0.8373 for HSDL2 (Figure 7G), 0.7872 for SLC25A20 (Figure 7H), 0.8457 for ACOX1 (Figure 7I), 0.7597 for SCP2 (Figure 7J), and 0.8477 for NLN (Figure 7K). Next, the transcript expression data of the hub genes in normal tissues, EA, and ESCC were obtained using UALCAN; the hub genes, including GNAQ (Figure 8B), were differentially expressed between EA and normal samples and between ESCC and normal samples (Figure 8A).

We utilized the Kaplan-Meier Plotter online tool to analyze the correlation between overall survival (OS) time and hub gene expression in EA and ESCC. We found that higher expression levels of MAPK1 were related to longer OS time in patients with ESCC, but not EA (Figures 9A,B). Higher expression levels of ACOX1 were related to shorter OS time in patients with ESCC and longer OS time in patients with EA (Figures 9C,D). Higher expression levels of SCP2 were related to shorter OS time in patients with ESCC, but not EA (Figures 9E,F). Higher expression levels of NLN were related to shorter OS time in patients with EA, but not ESCC (Figures 9G,H). However, we did not observe a significant correlation between OS time and GNAQ, RGS5, ATP1B1, HADHA, HSDL2, or SLC25A20 (data not shown).

DISCUSSION

Although there are marked differences in pathogenesis, the treatments for ESCC and EA are similar, including chemotherapy, radiotherapy, and surgery, of which surgery is the most common (Campbell and Villaflor, 2010). Identifying biomarkers for EC development, progression, and prognosis is essential for understanding EC and improving clinical decision-making. The aim of this study was to identify the similarities and differences between ESCC and EA.
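For illustration, the per-gene AUC values reported above (Figure 7) amount to using each gene's expression as a one-dimensional classifier separating EC from normal samples; a minimal sketch with fabricated numbers follows.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

labels = np.array([1, 1, 1, 1, 0, 0, 0, 0])   # 1 = EC, 0 = normal (toy data)
gnaq_expr = np.array([2.1, 1.8, 2.5, 3.0, 5.2, 4.8, 6.1, 5.5])

# GNAQ is *reduced* in EC, so low expression should predict the tumor class;
# scoring with the negated expression keeps AUC > 0.5 for a useful marker.
auc = roc_auc_score(labels, -gnaq_expr)
print(f"AUC = {auc:.2f}")                     # 1.00 on this fabricated data
```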
In this study, we analyzed the GSE26886 dataset and identified 1,098 common induced genes and 669 common reduced genes in both EA and ESCC, indicating that these genes may have a crucial role in EC tumorigenesis and progression. We also revealed 857 induced genes and 880 reduced genes in EA compared to ESCC samples. Furthermore, we conducted bioinformatics analysis to reveal the potential roles of these genes. Finally, we used public databases to verify the levels of the hub genes in EC samples. We believe these results may provide novel biomarkers for EA and ESCC prognosis.

Over the past decades, multiple efforts have been made to identify the mechanisms regulating EA and squamous cell carcinoma. For example, targeting the thromboxane A2 pathway driven by COX1/2 can inhibit Barrett's esophagus and EA (Zhang T. et al., 2019). TRIM27 promotes the occurrence and development of esophageal cancer by regulating the PTEN/Akt signaling pathway (Zhang T. et al., 2019). FOXD2-AS1 silencing inhibits the growth and metastasis of esophageal cells by regulating the miR-145-5p/CDK6 axis (Shi W. et al., 2020). ATP6V0D2 is a subunit related to proton transport that exerts a carcinogenic effect in esophageal cancer and is related to epithelial-mesenchymal transition (Qi et al., 2020). However, a comprehensive analysis of hub signaling in esophageal tumors was still lacking. In this study, we identified DEGs in esophageal cancer and revealed 1,098 common induced and 669 common reduced genes in both adenocarcinoma and squamous cell carcinoma, which may represent the hub mechanisms in esophageal cancers. Bioinformatics analysis found that the upregulated genes mainly participated in cell cycle regulation by modulating a series of biological processes, including chromosome segregation, sister chromatid cohesion, and DNA replication. The reduced DEGs were involved in regulating metabolism through processes including the flavone metabolic process and cellular glucuronidation.

[Figure 5 caption: (A) 147 common induced genes also differentially expressed between EA and ESCC, identified using a Venn diagram. (B) 130 common reduced genes also differentially expressed between EA and ESCC, identified using a Venn diagram. (C) Heatmap of the differentially expressed genes between esophageal cancer (EC) and normal samples. (D) 47 common reduced and 49 common induced genes also differentially expressed in EC samples compared to normal samples in The Cancer Genome Atlas (TCGA) database.]

Of note, we found several hub signaling pathways, such as the p53 and PI3K-Akt signaling pathways. As a multifunctional transcription factor, p53 regulates the expression of more than 2,500 target genes (Stegh, 2012). p53 affects numerous and highly diverse cellular processes, including the maintenance of genomic stability and fidelity, metabolism, and longevity (Stegh, 2012). It is one of the most important and widely studied tumor suppressors. p53 is activated by various stresses, the most important of which are genotoxic damage, hypoxia, and heat shock (Hsu et al., 1995; Hu et al., 2012). It can block cancer progression by triggering transient or permanent growth arrest, DNA repair, or cell death. This potent and versatile anticancer activity spectrum, together with genomic and mutation analyses, shows that p53 is inactivated in more than 50% of human cancers (Nigro et al., 1989).
The PI3K signaling pathway is one of the most commonly altered signaling pathways in human tumors and plays a key role in tumor occurrence and development (Liu et al., 2009). Esophageal carcinoma, comprising EA and ESCC, is one of the most common gastrointestinal cancers, causing about 375,000 deaths worldwide each year. A growing body of literature supports different treatment strategies according to the histological characteristics of esophageal cancer (Domper Arnal et al., 2015). The different treatment strategies and outcomes of AC and SCC reflect the impact of histology on the natural history and treatment outcomes of some cancers. Therefore, there is an urgent need to identify DEGs between EA and SCC. In this study, we identified 598 induced and 924 reduced genes in squamous cell carcinoma compared to adenocarcinoma samples. Bioinformatics analysis showed that the induced genes in SCC were related to telomere capping, telomere organization, and DNA replication. Telomeres have crucial roles in tumorigenesis by modulating the proliferation and cell cycle of cancer cells (Cacchione et al., 2019). The downregulated genes in SCC were related to fatty acid metabolism and the extracellular signal-regulated kinase 1 (ERK1) and ERK2 cascade. [Figure 6: The protein-protein interaction (PPI) network of the differentially expressed genes (DEGs).] ERK signaling is activated in tumors and regulates multiple processes such as proliferation and survival (Kohno and Pouyssegur, 2006). Previous studies demonstrated that this signaling has a crucial role in both EA and ESCC. For example, Chen et al. (2019) reported that targeting ERK significantly inhibits the growth and metastasis of esophageal squamous cell carcinoma cells. Sadaria et al. (2013) found that suppressing ERK1/2 activation reduced the cell viability and proliferation of human esophageal adenocarcinoma cells.

Finally, we identified 147 common induced genes and 130 common reduced genes that were also differentially expressed between EA and ESCC. Based on PPI network analysis, we identified 10 hub genes with connectivity > 2: GNAQ, RGS5, MAPK1, ATP1B1, HADHA, HSDL2, SLC25A20, ACOX1, SCP2, and NLN. Very interestingly, further confirmation showed that most of these hub genes (GNAQ, RGS5, MAPK1, ATP1B1, HADHA, HSDL2, SLC25A20, ACOX1, and SCP2) were reduced in EC samples, suggesting that they may play a tumor-suppressive role in EC. Only NLN was significantly overexpressed in EC samples compared to normal tissues. Moreover, we found that GNAQ, RGS5, ATP1B1, HADHA, HSDL2, SLC25A20, ACOX1, and SCP2 were reduced in ESCC samples compared to EA samples, whereas MAPK1 and NLN were induced in ESCC samples compared to EA samples. Among these genes, GNAQ has been reported to be related to uveal melanoma progression: GNAQ mutations lead to the activation of several downstream pathways in uveal melanoma, including ERK, p38, c-JUN N-terminal kinase (JNK), and Yap signaling (Shoushtari and Carvajal, 2014). In this study, we found that the expression of GNAQ in esophageal carcinoma and EA was lower than normal, and its expression in ESCC was also lower than that in EA. Regulator of G protein signaling 5 (RGS5) belongs to a family of GTPase-activating proteins and signal transduction molecules that negatively regulate G protein function (Liang et al., 2005).
More specifically, RGS5 stops signal transduction through heterotrimeric G proteins and is located in the plasma membrane and cytoplasm (1). Recently, RGS5 has been identified as a major gene induced in pericytes and is associated with some morphological changes in the tumor vasculature. RGS5 levels were found to decrease with increasing anti-vascular endothelial growth factor (anti-VEGF) antibody expression, as a result of angiogenesis inhibition. Of note, this study revealed for the first time that the dysregulation of MAPK1, ACOX1, SCP2, and NLN, whose functional importance had been implied in multiple cancer types, is significantly correlated with survival time in EC patients. MAPK1 belongs to the MAP kinase family. MAPK1 is a well-known oncogene that is overexpressed in various types of human cancer, such as lung, ovarian, cervical, and gastric cancer. ACOX1 is an enzyme that catalyzes the first and rate-limiting desaturation of long-chain acyl-CoA to 2-trans-enoyl-CoA, transferring electrons to molecular oxygen to form hydrogen peroxide (Zhang et al., 2021). Recent studies have shown that ACOX1 may be involved in tumorigenesis. For example, ACOX1 knockout contributed to liver cancer progression (Chen et al., 2018). In addition, ACOX1 destabilizes p73, thereby inhibiting the intrinsic apoptotic pathway of lymphoma cells and regulating their sensitivity to doxorubicin. SCP2 has no enzymatic activity but binds branched-chain lipids such as phytanic acid and phytol-derived cholesterol (Milligan et al., 2017). SCP2 enhances the uptake and metabolism of branched-chain fatty acids (Milligan et al., 2017) and is a recognized intracellular cholesterol transporter that can direct cholesterol to cholesterol-rich cell membrane microdomains. It has been reported that SCP2 expression is related to the progression of glioma, and that suppression of SCP2 protein expression can inhibit the proliferation of tumor cells by inducing autophagy. In addition, SCP2-mediated cholesterol membrane transport promotes pituitary adenoma growth by activating hedgehog signaling (Ding et al., 2019). This study is the first to reveal an important role of SCP2 in esophageal cancer; it may be a potential biomarker for the prognosis of esophageal cancer. NLN is a 78-kDa monomeric protein with 704 amino acid residues that only hydrolyzes peptides of 5-17 amino acids (Cavalcanti et al., 2014). In vivo studies have shown that NLN is associated with multiple human diseases (Garrido et al., 1999; Massarelli et al., 1999; Rioli et al., 2003). This study is the first to show that NLN is induced in esophageal cancer and can help distinguish between EA and ESCC. In addition, we should point out several limitations of this study. First, the expression levels of the hub genes, such as MAPK1, ACOX1, SCP2, and NLN, were not confirmed using clinical samples. Second, the molecular functions of these hub genes in EC remain largely unclear. Loss-of-function studies with specific small-interfering RNAs (siRNAs) targeting these hub genes will further strengthen the findings of this study.

CONCLUSION

In this study, we analyzed the GSE26886 dataset and identified 1,098 genes induced in EA and ESCC and 669 genes reduced in EA and ESCC, suggesting that these genes may play an important role in the occurrence and development of EC tumors. Bioinformatics analysis showed that these genes were involved in cell cycle regulation and the p53 and PI3K/Akt signaling pathways.
In addition, we identified 147 induced genes and 130 reduced genes differentially expressed between EA and ESCC; the expression patterns of both ESCC and EA differed from those of the control group. By PPI network analysis, we identified 10 hub genes, including GNAQ, RGS5, MAPK1, ATP1B1, HADHA, HSDL2, SLC25A20, ACOX1, SCP2, and NLN. TCGA validation showed that these genes were differentially expressed between EC and normal samples and between EA and ESCC. Kaplan-Meier analysis showed that MAPK1, ACOX1, SCP2, and NLN were associated with overall survival in patients with EC. We believe that this study may provide new biomarkers for the prognosis of EA and ESCC.

DATA AVAILABILITY STATEMENT

The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found in the article/supplementary material.

AUTHOR CONTRIBUTIONS

SL conceived and designed the study. FW, LZ, and YXu performed the analyses. All authors wrote, read, and approved the manuscript.
Activation of telecom emitters in silicon upon ion implantation and ns pulsed laser annealing

The recent demonstration of optically active telecom emitters makes silicon a compelling candidate for solid-state quantum photonic platforms. In particular, fabrication of the G center has been demonstrated in carbon-rich silicon upon conventional thermal annealing. However, the high-yield controlled fabrication of these emitters at the wafer scale still requires the identification of a suitable thermodynamic pathway enabling their activation following ion implantation. Here we demonstrate the activation of G centers in high-purity silicon substrates upon ns pulsed laser annealing. The proposed method enables the non-invasive, localized activation of G centers by the supply of short non-stationary pulses, thus overcoming the limitations of conventional rapid thermal annealing related to the structural metastability of the emitters. A finite-element analysis highlights the strong non-stationarity of the technique, offering radically different defect-engineering capabilities with respect to conventional longer thermal treatments and paving the way to the direct and controlled fabrication of emitters embedded in integrated photonic circuits and waveguides.

Introduction

The G center is a carbon-related point defect in silicon whose discovery and characterization have spurred strong interest in the scientific community since its recent demonstration as a single-photon emitter. 1,2 In particular, the silicon G center represents an emerging platform in quantum sensing, communication, and information processing, 3,4 due to several promising features, namely: emission in the telecom range (1279 nm); 5,6 the availability of a triplet-singlet transition enabling optically detected magnetic resonance protocols; 7 and appealing coupling of the defect with nuclear-spin and electron-spin degrees of freedom. 3,4,8,9 Furthermore, its availability as a solid-state color center in silicon, without the need for articulated homoepitaxial growth processes, paves the way towards the development of highly integrable platforms fabricated by means of industry-compatible techniques such as ion implantation. In this context, the technological capability of introducing G centers into high-purity silicon substrates with a high degree of control will be crucial for the fabrication of practical devices. This goal requires mature ion implantation techniques (e.g., deterministic approaches for the delivery of single impurities with high spatial resolution), as well as highly efficient post-implantation processes enabling their optical activation upon conversion into stable lattice defects. 10 At present, the major obstacle to the systematic implementation of this type of emitter in silicon is its very structural configuration. The point defect has been attributed to a neutrally charged substitutional dicarbon pair coupled to an interstitial silicon atom.
8,11 This structure, which is rather unusual compared with that of stable emitters in other classes of materials with diamond-type crystal structure, [12-15] poses questions about the existence of suitable pathways for its consistent and efficient fabrication. A relevant concern, recently highlighted in a theoretical study, 11 is that the configuration of this complex represents a structurally metastable state, in competition with more stable lattice defects that are energetically more accessible during conventional thermal annealing processes. Indeed, the formation of the G center requires the occurrence of two separate processes, namely the accommodation of a C atom at a substitutional lattice site and its interaction with a mobile C interstitial. 11 In this respect, it is worth remarking that, while the former process is enhanced by the thermal treatment of the substrate, 9 the latter is unfavored by annealing treatments and rather promoted by radiation-induced damage within the crystal. 11 This observation is in line with the experimental results reported so far in the literature. The production of the G center has typically been reported in carbon-rich samples, obtained either by Czochralski synthesis 16 or by carbon ion implantation followed by conventional thermal annealing and/or subsequent proton or silicon irradiation. 2,6,17 Notably, the carbon ion implantation step was reported to be unnecessary if the native carbon concentration in the substrate is sufficiently high. 2,16 These fabrication requirements, accompanied by the necessity of conventional rapid thermal annealing (RTA) treatments (20 s duration), 18 are indicative of a poor efficiency in the formation of G centers under annealing processes of longer duration. 19 These limitations in the fabrication process lead to two undesirable features. Firstly, the choice of carbon-rich substrates instead of high-purity crystals might embed the fabricated G centers in a defective environment, which might alter their overall emission 20 or increase the background photoluminescence. 16 Secondly, high-concentration carbon doping of the substrates limits the fabrication of the color centers to a purely statistical approach, thus preventing the necessary level of control over their number and position.

In this work we report a new method that allows the spatially resolved and controlled fabrication of G center ensembles, based on ns pulsed focused laser annealing of carbon-implanted silicon substrates. This approach relies on a significantly faster heat transient in the annealing step with respect to the statistical activation approaches based on conventional RTA reported so far. 19 This work is inspired by the pioneering work carried out by M. S.
Skolnick et al., 21 in which the formation of G centers was obtained by employing a high-power Q-switched laser inducing melting and subsequent recrystallization of implanted silicon. The dependence of the W center concentration in CZ silicon upon pulsed annealing was also observed by employing a broad infrared source at lasing energy densities insufficient to achieve full recrystallization of the material, suggesting a role of thermal heating in the formation of optically active defects. 22 Significant progress in laser technology has more recently enabled the demonstration of color-center fabrication in semiconductors by means of fs laser irradiation, [23-25] in which the main role of the optical pulses consists of introducing the lattice vacancies necessary for the formation of emitting defects. Similarly to high-power ns lasing, fs laser annealing for the fabrication of the G center in silicon proved effective only in the case of crystal melting and subsequent recrystallization. 26 Conversely, the approach investigated in this work relies on a purely thermal process, replacing conventional annealing treatments while offering the possibility of activating emitters in confined volumes of the target material. By delivering thermal energy to the crystal below the melting threshold, without reaching stationary conditions or introducing lattice damage, this fast off-equilibrium processing route can strongly enhance the mobility of the lighter carbon atoms with respect to the silicon interstitials, thus promoting the formation of the G center at the expense of competing configurations characterized by a higher structural stability. In contrast with other proposed experimental procedures involving surface functionalization through organic molecules, 10 the proposed process activates G centers with good spatial accuracy, thus enabling in perspective the deterministic activation of individual carbon impurities introduced into the silicon lattice by means of controlled ion implantation protocols. Consequently, this method offers the unique capability of fabricating the G center in higher-purity silicon crystals grown by the float-zone (FZ) method, for which RTA is here shown to be ineffective, and, in perspective, in ultrapure silicon crystals. In turn, this capability opens the possibility of fully unlocking the potential of silicon G centers registered to specific nanoscale photonic structures, without introducing damage related to local melting of the host material.

Results

Our study was performed on a set of samples obtained from a commercial FZ silicon wafer (carbon concentration <5×10^14 cm^-3) that was uniformly implanted with 36 keV C- ions at a fluence of 2×10^14 cm^-2. The PL emission of the implanted sample was investigated with the purpose of comparing the effects of two different post-implantation treatments, namely conventional RTA and ns pulsed laser annealing. The photoluminescence (PL) spectra following conventional RTA treatment (20 s duration) are reported in Fig. 1a for different temperatures, upon normalization to the optical excitation power. Regardless of the processing temperature, all of the RTA treatments resulted in the formation of the sole W center, corresponding to the sharp emission line at 1218 nm and its phonon replicas at higher wavelengths, originating from the extended tri-interstitial I3-V complex.
27,28 The intensity of the W emission steadily decreases at increasing annealing temperatures, indicating a progressive recovery of the pristine crystal structure. Notably, no spectral features associated with the G center can be identified at any RTA processing stage. Such a result, in apparent contrast with previous reports on the engineering of SOI substrates, demonstrates the poor formation of the G center under RTA post-implantation treatment in FZ silicon, and indicates that its formation is arguably limited by the carbon concentration in the silicon substrate. Indeed, the present data, acquired from float-zone silicon with low native carbon concentration, show that even the introduction of a moderate amount of carbon ions by ion implantation is not sufficient to induce a detectable ensemble of G centers. The interpretation of this experimental evidence is that, although most of the carbon impurities evolve into substitutional defects during the annealing, their concentration is not large enough to allow the formation of a small fraction of (C-Si)Si interstitial pairs through interaction with the silicon interstitials generated during the C implantation. These complexes are necessary for the formation of the G center upon capture at substitutional carbon sites. 11 This interpretation is consistent with the fact that the same implantation and RTA processing parameters adopted in this work have been linked to the formation of ensembles of G centers (although at low densities) in silicon samples characterized by a substantially higher content of substitutional carbon. 19,29 A full understanding of this phenomenon will, however, require a systematic comparison between silicon substrates synthesized by different methods, in order to enable the optimization of defect-engineering procedures.

The laser annealing was performed with a focused ns-pulsed Nd:YAG 532 nm laser (pulse duration: 4 ns; repetition rate: 5 Hz; number of pulses: 5) on 7×7 µm^2 square regions of the implanted sample. The sample did not undergo any additional thermal treatment besides the one under consideration. Confocal PL microscopy was performed using a custom microscope optimized for telecom wavelengths. A typical PL map (488 nm excitation wavelength) acquired from the laser-irradiated sample is shown in Fig. 1b. The map exhibits a low luminescence background from the as-implanted sample, within which a series of bright emission squares are clearly distinguishable, corresponding to the laser-treated areas. Each square exhibits different emission features depending on the corresponding laser-processing parameters. A spectral analysis of the PL emission features of the laser-treated regions is summarized in Figs. 1c and 1d. [Figure 1 caption, panels c and d: (c) PL spectra acquired under 488 nm CW excitation from regions lased at 0.58, 0.69, and 0.82 mJ/cm^2; (d) PL spectra acquired in the same conditions from regions lased at 0.84, 0.99, and 1.16 mJ/cm^2. All measurements were performed at T = 10 K using 488 nm excitation and normalized to the optical excitation power.]
Fig. 1c shows PL spectra from the carbon-implanted regions exposed to laser annealing at increasing energy densities (0.58-0.82 mJ/cm²). All of the reported measurements reveal the spectral features of both the W (1218 nm) and G (1279 nm) emitters, whose activation was not achieved by conventional RTA. In the considered spectral range, the intensity of the G center zero-phonon line (ZPL) increases with the lasing energy density; conversely, the W center ZPL shows a clear intensity maximum for the process performed at 0.69 mJ/cm². If the lasing energy density is further increased (Fig. 1d shows the PL spectra corresponding to the 0.84-1.16 mJ/cm² range), the W center ZPL disappears, indicating that the defect anneals out under the considered processing conditions. The dependence of the ZPL emission intensities of the G and W centers on the lasing energy density is reported in Figs. 2a and 2b, respectively. The peak temperature achieved for each ns pulsed treatment was estimated with numerical simulations (further details in the Discussion) and is reported on the upper horizontal axis for comparison with the temperatures reached in RTA processing. Both the G and W center ZPLs exhibit an initial increase in emission intensity at lower energy densities, reaching maximum values at 0.76 mJ/cm² and 0.69 mJ/cm², respectively (these maxima are highlighted by red and green dashed lines in Figs. 2a-b). At higher energy densities, both the G and W center ZPL emissions exhibit a progressive reduction. It is worth remarking that the emission of the G center does not decrease to a negligible value at the highest (i.e., >1 mJ/cm²) pulse energy densities, but rather reaches a plateau. The optimal energy density range for the formation of the G center with a concurrent attenuation of the W center PL emission lies between 0.76 mJ/cm² and 0.82 mJ/cm² (corresponding to a local sample temperature of 764-826 °C); in this range, the W center PL is indeed less than half of the maximum achieved at 0.69 mJ/cm². Finally, Fig. 2c shows the intensity-versus-temperature trend of the W center for the conventional RTA treatment. Differently from what is observed for ns laser annealing, the ZPL intensity exhibits a monotonic decrease at increasing annealing temperatures, and the center effectively anneals out at temperatures above 450 °C. This process, as already highlighted in Fig. 1a, did not result in the formation of optically active G centers. The quality of the emitters was assessed at the ensemble level by investigating the emission linewidth and lifetime, as shown in Fig. 3. In particular, Fig. 3a shows the dependence of the ZPL linewidth of the G center on the pulse energy density adopted for the laser annealing process in the 0.5-1.1 mJ/cm² range. A minimum FWHM value of (0.97 ± 0.05) nm was inferred from a Gaussian fit at the lowest considered energy density (0.58 mJ/cm²). The FWHM exhibits an increasing trend with the energy density, reaching a value of (1.10 ± 0.05) nm at 0.76 mJ/cm² (i.e.
the annealing conditions at which the G center ZPL reaches its maximal intensity in the absence of W center emission). These values are in line with those previously reported for C-implanted silicon, both at the ensemble and at the single-photon level, for G centers fabricated by RTA methods, 2,6,19,30 thus indicating the suitability of the proposed annealing technique for the fabrication of quantum emitters. Similarly, the lifetime of an ensemble generated by 0.76 mJ/cm² laser-induced annealing (Fig. 3b) was quantified upon 532 nm pulsed laser excitation as (5.7 ± 0.1) ns, in line with the results achieved in SOI silicon subjected to C and proton co-implantation followed by conventional RTA treatment. 6

The reported results represent the first exploration of laser annealing capabilities in the pulsed ns regime for the activation of optical dopants in the solid state. The comparison with the results achieved by conventional RTA (seconds duration) highlights that the light-matter interaction dynamics in the ns regime disclose an effective processing strategy relying on off-equilibrium temperature transients. Remarkably, while the short duration of the laser pulses allows access to metastable defect states in a non-equilibrium landscape, the duration of the optical absorption process is at the same time sufficiently long to prevent the introduction of irreversible structural damage to the crystal lattice. Concerning the efficiency achieved in the present experiment, the formation yield, defined as the ratio between the areal density of fabricated optically active emitters and the ion fluence, 31 was bounded from below at 1.4×10⁻⁴, as estimated by taking into account the detection efficiency of the experimental apparatus (full discussion in the Supplementary Information). For reference, this result is in line with the value (2×10⁻⁴) reported for sequential implantation steps of C and H ions, 32 while in other works the efficiency is deliberately lowered (e.g., 1.6×10⁻⁷ for RTA treatments 2 ) to decrease the areal density of G centers to the single-emitter level.

Discussion

A quantitative insight into the peculiar features of the reported laser-based thermal process was made possible by a dedicated finite-element analysis of the heat propagation dynamics in the substrate (Figure 4). Firstly, Figs. 4a-c show instantaneous temperature maps of the silicon substrate at 4 ns, 40 ns, and 400 ns time delays from the delivery of a single 4 ns laser pulse at 1.07 mJ/cm² energy density. At the end of the lasing pulse (t = 4 ns), the maximum temperature (1065 °C, i.e., well below the melting point of silicon) is achieved at the sample surface. The laser-induced heating is highly confined to the irradiated region, with a steep gradient towards environmental conditions occurring over a sub-micron scale. The effect of the annealing is therefore strongly limited to the region that is directly exposed to the laser pulse, as observed experimentally in Fig. 1b, and does not involve local recrystallization of silicon. 21,36
Secondly, Fig. 4d shows the time evolution of the temperature at the center of the laser-irradiated region, at different depths from the sample surface. It is worth remarking that the volume of the region affected by laser-induced heating is also confined in the depth direction. Such localization extends over the first few hundred nm of the material, owing to the high (i.e., ~10⁴ cm⁻¹) absorption coefficient of silicon at the 532 nm wavelength under consideration. The temperature does not exceed 250 °C at a depth of 3 µm from the surface for the entire duration of the process. Furthermore, the numerical results highlight that the time scale of the whole heat transfer process is shorter than 1 µs; after this transient, the system returns to environmental temperature conditions. The use of a lasing system with a 5 Hz repetition rate ensures that any subsequent pulse can be treated as a fully independent annealing process, and thus the process could be reiterated indefinitely at lasing frequencies up to 1 MHz.

Finally, Fig. 4e reports the temporal evolution of the temperature at the laser-exposed surface for different energy densities. Remarkably, the temperature range (500-1100 °C) spanned by the experimentally adopted lasing parameters (0.58-1.07 mJ/cm²) overlaps with the set of temperatures achieved in the RTA treatments (Fig. 1a). Nevertheless, in the above-mentioned temperature range, the ns laser processing is characterized by a lower efficiency at annealing out the W center, requiring a simulated temperature of >800 °C (corresponding to an energy density of 0.84 mJ/cm²). This observation further corroborates the interpretation of the ns laser annealing as a strongly non-stationary process, resulting in defect-engineering capabilities that are radically different from what can be achieved by conventional RTA treatment.
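The orders of magnitude behind this spatial and temporal confinement can be cross-checked with a simple back-of-the-envelope estimate. The short Python sketch below uses the mass density and absorption coefficient quoted in the Methods, while the thermal conductivity and specific heat are assumed representative room-temperature values for silicon (they are not quoted in the text):

```python
# Order-of-magnitude cross-check of the thermal confinement discussed above.
import math

K = 150.0        # W/(m K), thermal conductivity (assumed room-T value)
rho = 2330.0     # kg/m^3, mass density (from the Methods)
c_p = 700.0      # J/(kg K), specific heat (assumed room-T value)
alpha = 7.69e5   # 1/m, absorption coefficient at 532 nm (from the Methods)

D = K / (rho * c_p)                 # thermal diffusivity, ~9e-5 m^2/s

for t, label in [(4e-9, "pulse end (4 ns)"), (1e-6, "full transient (~1 us)")]:
    L = math.sqrt(D * t)            # characteristic heat diffusion length
    print(f"{label}: diffusion length ~ {L * 1e6:.2f} um")

print(f"optical absorption depth 1/alpha ~ {1e6 / alpha:.2f} um")
```

The resulting figures, roughly 0.6 µm of diffusion at the end of the 4 ns pulse, ~10 µm over the full µs-scale transient, and a ~1.3 µm absorption depth, are consistent with the sub-micron gradients at the pulse end and the sub-µs relaxation discussed above.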
To conclude, we demonstrated that by means of a pulsed ns laser annealing treatment it is possible to reliably induce the formation of the G center in float-zone silicon. The explored defect-evolution pathway is not accessible under stationary thermal treatments at similar temperatures. This observation is confirmed by the formation (and subsequent deactivation at higher temperatures) of the sole W center under conventional RTA at time scales of a few seconds, without the occurrence of any spectral feature associated with the G center. Differently from previous reports, the defect formation was achieved in a high-purity sample, in the absence of a high native carbon concentration and without the need for any additional post-implantation steps. This result indicates a significantly higher formation efficiency of the G center via laser annealing with respect to currently available methods, and highlights a path towards the conversion into individual emitters of a limited number of C ions delivered by (ideally single-) ion implantation, for the controlled fabrication of single-photon sources at telecom wavelengths. Finally, the intrinsically fast (0.2-1.0 µs) nature of the thermal diffusion process enables the indefinite reiteration of the process until the optical activation of the desired impurity is achieved. The localization of the heat absorption to the sole region that is directly exposed to the laser beam, together with the induction of temperatures below the melting point of silicon, 21,26 also ensures that the lasing pulses do not alter the optical and structural properties of the surrounding material, where other individual G centers or nanoscale photonic structures might have already been fabricated. We expect that, upon adaptation of the heat diffusion model to material-specific thermal boundaries and different light absorption properties, these considerations can also be applied to SOI devices, with appealing applications in integrated photonics. These features offer an appealing perspective towards the fabrication of large-scale arrays of single emitters, in which high-resolution single-ion delivery is followed by a local (in perspective, down to the optical diffraction limit) annealing process for the conversion of each selected impurity into an individual emitter, provided that single-ion counting at keV energies 37,38 with nanoscale ion beam resolution 16,39 becomes a mature technique routinely available in research labs and industrial manufacturing environments, and upon the assessment of the C diffusion length over ns heat transients. Additionally, the fact that the sample temperature is left unchanged by the local annealing originating from individual ns lasing pulses enables the exploitation of in situ PL feedback, even at cryogenic temperatures, to validate the emitters' formation. 40 Such feedback could enable the lasing technique to achieve single-defect generation also in substrates with an optimal content of C dopants, without the need for further ion implantation processes. A careful choice of the laser energy density allows the W center to be annealed out, further confirming that the structural properties of the material are not altered by a disruptive interaction with the laser beam, and that the process is also effective at recovering the optical properties of the pristine silicon lattice. All these features are particularly attractive for the non-invasive […]. 43,44

Acknowledgments. This work was co-financed by the Participating States and by the European Union's Horizon 2020 research and
innovation programme, and by the 'Intelligent fabrication of QUANTum devices in DIAmond by Laser and Ion Irradiation' (QuantDia) project funded by the Italian Ministry for Instruction, University and Research within the 'FISR 2019' program.

Methods

Sample preparation. The experiments were conducted on a commercially available float-zone n-type silicon wafer (77-95 Ω·cm resistivity) purchased from Vishay Intertechnology. The substrate was cut into eight nominally identical 3×3 mm² pieces, each of which was homogeneously implanted with 36 keV ¹²C⁻ ions at a fluence of 2×10¹⁴ cm⁻². Two different annealing processes were considered. Conventional rapid thermal annealing was performed using a PID-controlled SSI SOLARIS 150 Rapid Thermal Processing System. Samples #1-7 underwent annealing for 20 s in N₂ atmosphere at 320 °C, 365 °C, 400 °C, 450 °C, 500 °C, 700 °C, and 1000 °C, with a temperature ramp rate of 65 °C/s. Pulsed ns flash annealing was performed using a high-power Nd:YAG Q-switched laser source emitting at 532 nm. The source is focused by a primary lens system and collimated by an aperture of 2.75×2.75 mm². The maximum energy is 0.6 mJ for a 4 ns laser pulse duration, resulting in a maximum power density of 1.32 MW/cm². The selected pulse is further focused on the sample by a 20× magnification objective. A repetition rate of 5 Hz was adopted for pulse train delivery.

IR confocal microscopy. The PL characterization was performed at cryogenic temperatures by employing a custom fiber-coupled cryogenic confocal microscope. The sample was mounted in a closed-cycle optical cryostation equipped with a vacuum-compatible long-working-distance 100× air objective (N.A. = 0.85). The sample was mounted on a three-axis open-loop nanopositioner. Laser excitation was delivered by a 488 nm CW laser diode, combined with a 700 nm long-pass dichroic mirror and an 800 nm long-pass PL filter. A multimode optical fiber (core diameter ∅ = 50 μm, N.A. = 0.22) was used both as the pinhole of the confocal microscope and as the outcoupling medium for luminescence analysis. The optical fiber fed an InGaAs avalanche detector (MPD PDM-IR/MMF50GI). Spectral analysis was performed by connecting the confocal microscope fiber output to a Horiba iHR320 monochromator, whose output port was in turn fiber-coupled to the PDM-IR detector. The spectral resolution of the system was estimated as ≤4 nm.

Finite element modeling. The heat transfer was described by solving, in a two-dimensional model, the time-dependent equation ρ·c_p·∂T/∂t = ∇·[K·∇T] + S, where c_p is the thermal capacitance, K is the thermal conductivity of isotropic silicon, and ρ = 2330 kg/m³ is the mass density. The source term describes the heat absorption from a laser pulse as S = D(x)·P(t)·(1−R)·α·exp(−α·z), where R and α are the reflectivity and absorption coefficient of silicon, assumed to be independent of the temperature and equal to R = 0.371 and α = 7.69×10⁵ m⁻¹ for 532 nm radiation, respectively. 45 Here, P(t) describes the temporal profile of the laser pulse. Conversely, D(x) describes the profile of the incident laser beam along the x direction parallel to the silicon surface, under the assumption of symmetry. The initial temperature was set to 293.15 K. Two different models were considered to map the substrate temperature. In the first case, a sharp pulse P(t) = Θ(t₀ − t) was considered, where t₀ = 4 ns and Θ is the step function, together with a sharp beam profile D(x) = (J/t₀)·Θ(x₀ − x), with J = 1.07 mJ/cm² and x₀ = 7 µm being the pulse energy density and the size of the irradiated square, respectively. K and c_p were assumed to be independent of the temperature. A second, refined model involved a Gaussian temporal shape of the laser pulse, P(t) = (2πσ_t²)^(−1/2)·exp(−(t − t₀)²/(2σ_t²)) with σ_t = 1.7 ns, and a smoothing of the laser profile at the edges of the irradiated region. In this case, the temperature dependence of the parameters c_p and K was set according to tabulated data. 46
Boundary conditions were defined to describe the thermal convection and irradiation phenomena occurring at the substrate surface and to keep the wafer backside at room temperature. Homogeneous Neumann conditions were applied elsewhere. The simplified model is presented in the main text, as it relies on a minimal number of assumptions. The spatial and temporal distributions of the temperature did not exhibit significant differences between the two methods, highlighting the small contribution due to the choice of the boundary conditions. The two methods provided a discrepancy in the simulated temperature of up to ~90 K at the highest considered pulse energy density, i.e., ~8% of the overall value. This difference is ascribed to the temperature dependence of the thermal capacitance and conductivity considered in the second model.

Figure 1: a) PL spectra of the sample processed with the 20 s rapid thermal annealing process at different temperatures. b) Typical PL map of the sample processed with localized ns laser annealing; each bright square corresponds to a spot treated with different lasing parameters. c) PL spectra acquired under 488 nm CW excitation from regions lased at 0.58 mJ/cm², 0.69 mJ/cm², and 0.82 mJ/cm². d) PL spectra acquired in the same conditions from regions lased at 0.84 mJ/cm², 0.99 mJ/cm², and 1.16 mJ/cm². All measurements were performed at T = 10 K using 488 nm excitation and normalized to the optical excitation power.

Figure 2: ZPL intensity of a) the G center (1279 nm) and b) the W center (1218 nm) as a function of the ns lasing energy density. c) ZPL intensity of the W center as a function of the annealing temperature of the RTA processing (20 s duration). All measurements were performed at T = 10 K.

Figure 3: a) Linewidth of the G center ZPL as a function of the lasing energy density. b) Emission lifetime against a trigger laser pulse, acquired from an ensemble of G centers fabricated by 0.76 mJ/cm² lasing.

Figure 4: Finite element method map of the sample at a) t = 4 ns, b) 40 ns, and c) 400 ns after the delivery of a 532 nm laser pulse; the considered energy density is 1.07 mJ/cm², and the 4 ns laser pulse is switched on at t = 0. d) Time dependence of the temperature at different depths (z = 0, 3, 10, 25 µm) on the symmetry axis of the system (dashed white line in panel c), evidencing the localized heating of the crystal. e) Time evolution of the temperature at the surface (z = 0 µm) for different laser energy densities.
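To make the structure of this model concrete, here is a minimal one-dimensional finite-difference sketch in Python of the same heat equation with the Beer-Lambert source term. It is not the 2D finite-element implementation used in the paper: the thermal conductivity and specific heat are assumed representative room-temperature values, the surface is treated as insulated (convection and radiation neglected), and the fluence J is a placeholder for the energy density actually delivered to the focused spot, so absolute temperatures should not be compared directly with the simulated values quoted above.

```python
import numpy as np

# Material parameters: rho, R, alpha are from the Methods; K and c_p are
# assumed representative room-temperature values for silicon.
K, c_p, rho = 150.0, 700.0, 2330.0        # W/(m K), J/(kg K), kg/m^3
R, alpha = 0.371, 7.69e5                  # reflectivity, absorption (1/m)
J = 10.7                                  # J/m^2; placeholder fluence at the spot
t0 = 4e-9                                 # s, square-pulse duration

D = K / (rho * c_p)                       # thermal diffusivity (m^2/s)
dz, nz = 50e-9, 600                       # 50 nm grid over a 30 um deep domain
dt = 0.4 * dz**2 / (2 * D)                # explicit (FTCS) stability margin
z = np.arange(nz) * dz
T = np.full(nz, 293.15)                   # initial temperature (K), as in the model

def source(t):
    """Volumetric heating S(z, t) in W/m^3: square pulse + Beer-Lambert."""
    return (J / t0) * (1 - R) * alpha * np.exp(-alpha * z) if t < t0 else 0.0

t, peak = 0.0, T[0]
while t < 50e-9:                          # follow the first 50 ns of the transient
    lap = np.zeros_like(T)
    lap[1:-1] = (T[2:] - 2 * T[1:-1] + T[:-2]) / dz**2
    lap[0] = 2 * (T[1] - T[0]) / dz**2    # insulated (zero-flux) surface
    T = T + dt * (D * lap + source(t) / (rho * c_p))
    T[-1] = 293.15                        # backside held at room temperature
    peak = max(peak, T[0])
    t += dt

print(f"peak surface temperature rise: {peak - 293.15:.2f} K")
```

Swapping in the Gaussian temporal profile or the temperature-dependent K and c_p of the refined model only changes the source and coefficient definitions; the update loop stays the same.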
Self-dehumanization and other-dehumanization toward students with special educational needs: examining their prevalence, consequences and identifying solutions—a study protocol

Background: Students with special educational needs (SEN) often face dehumanization, which negatively impacts their mental health, daily functioning, and educational outcomes. This study seeks to address the research gap in the dehumanization literature by examining the prevalence, dynamics, and consequences of self-dehumanization and other-dehumanization among SEN students. Moreover, by utilizing psychological experiments, the study aims to identify potential intervention strategies and make recommendations to minimize the negative psychological consequences derived from the dual model of dehumanization.

Methods: This two-phase, mixed-methods study incorporates cross-sectional surveys and quasi-experimental designs. Phase 1 investigates the self-dehumanization of SEN students and other-dehumanization from non-SEN peers, teachers, parents, and the public. Phase 2 involves four experimental studies to evaluate the effectiveness of interventions emphasizing human nature and human uniqueness in reducing the self-dehumanization and other-dehumanization of SEN students, as well as their associated negative consequences.

Discussion: The study fills a research gap by examining dehumanization in SEN students, applying dyadic modeling, and identifying potential solutions to ameliorate dehumanization and its negative consequences. The findings will contribute to the advancement of the dual model of dehumanization, increase public awareness and support for SEN students in inclusive education, and promote changes in school practice and family support. The 24-month study in Hong Kong schools is expected to provide significant insights into inclusive education in school and community settings.

Background

In Hong Kong, inclusive education for students with special educational needs (SEN) in mainstream schools has been developing rapidly since 1997. In 2019, SEN students made up 7.8% (22,980) of the total in primary schools and 8.6% (22,380) of the total in secondary schools [1]. Compared to the relatively stable number of students in special schools (7,700), the population of SEN students in mainstream schools has increased by 34% over the past five years. The Education Bureau (EDB) has been promoting the Whole School Approach to Integrated Education and has devoted considerable resources to interventions that promote mutual respect for individual differences and cater for student diversity [2]. However, as members of a minority group, SEN students are often the targets of discriminatory treatment. Previous studies have found that SEN students hold more negative views of their relationships with teachers and peers than non-SEN students do. They are also more likely to be laughed at and bullied in school [3]. Studies in Hong Kong primary schools showed that SEN students were sometimes labeled and ignored by teachers [4]. Such alienation and rejection have been found to start as early as elementary school [5]. The discriminatory experience can be reciprocal and damaging in school settings. For example, when both SEN students and teachers view their relationship as unsatisfactory, both sides are negatively affected [6]. Such a relationship is reflected in the emotional experience and social interaction of SEN students, and consequently affects their academic performance [7] and psychological and behavioural functioning [8].
With an increasing number of SEN students enrolled in mainstream schools, the aforementioned problems will be aggravated. Considering this situation, there is an urgent need to understand the underlying mechanism of the prejudice towards SEN students in inclusive settings. It is worthwhile to examine the fundamental humanness attribution error, dehumanization, as a foundation for understanding such prejudice. Furthermore, based on the mechanism of dehumanization, educators need to develop effective intervention strategies to combat discrimination in classrooms.

The dual model of dehumanization

The booming psychological research into the attribution of humanness offers an integrative approach to examining such discrimination. The denial of humanity can be the root of prejudice toward people with disabilities. Dehumanization, the tendency to attribute fewer human characteristics to others and to perceive others as less human, has been a topic of great interest over the past few years [9]. It is a pervasive, prejudicial, and discriminatory cognitive process in people's daily lives [10,11]. Dehumanization can be expressed in blatant forms (e.g., "they look like animals") or subtle forms (e.g., "people with ADHD cannot enjoy peace") and can be easily activated in a variety of everyday contexts [12-15]. Viewed as somewhat lacking in human characteristics, people who are dehumanized are open to social exclusion and hostility [16]. Early research conceptualized dehumanization primarily in the context of morality [17,18]. Recent developments in the field have enriched the understanding of dehumanization as a broader conceptual framework that encompasses the attribution of uniquely human emotions [19,20], warmth and competence [21,22], mental states [23], and personality traits [24]. According to the Dual Model of Dehumanization [13,16,24], people attribute a lack of humanness to others by viewing them as more similar to animals (lacking Human Uniqueness, HU) or as more similar to robots (lacking Human Nature, HN). This two-dimensional model of humanness has been broadly adopted in recent dehumanization research, and its universality and implications are supported by a substantial amount of empirical evidence [9,25]. Thus, the dual model (lacking HU and HN) is helpful for understanding the nature of dehumanization.

Consequences of dehumanization

Many studies have demonstrated the link between dehumanization and harmful consequences [9,26], some of which have profound implications for inclusive settings. For example, when criminals are judged by the public, those who are considered to lack humanness are more likely to receive harsher punishment [27]. Other studies have found similar effects for earthquake victims: those who are dehumanized are less likely to receive humanitarian aid [28]. This paradigm could be relevant when teachers and peers make decisions about an SEN student who is in conflict with, or bullied by, non-SEN peers. In one study, teachers' dehumanization of minority students was found to be predictive of their discrimination towards, and harsher treatment of, those students [29]. These consequences highlight the importance of understanding dehumanization in inclusive education, and the need to develop interventions that reduce its harmful consequences in SEN students' daily lives. Apart from being labelled in schools and classrooms, SEN students may be dehumanized in a way similar to people with disabilities.
Some studies have examined the dehumanization of adults with intellectual and developmental disabilities and found that they are seen as lacking human uniqueness by professionals in day-care centres [30]. Another study revealed that greater dehumanization of people with autism or Down's syndrome predicted stronger prejudice towards, and reduced social policy support for, this group [31]. Moreover, people with mental disorders are dehumanized even more severely than ethnic minorities and immigrants, which further relates to various kinds of stigma (e.g., social distance) within society [32]. These findings highlight the non-negligible existence of the dehumanization phenomenon regarding these vulnerable groups, as well as its harmful consequences.

Self-dehumanization by SEN students

The aforementioned dehumanization studies have mostly been conducted through the lens of the "perpetrator" (other-dehumanization), while the essential and ultimate goal of understanding dehumanization is to help the victims. Therefore, it is worthwhile to examine the prevalence and nature of self-dehumanization from the perspective of SEN students. Self-dehumanized persons are at risk of negative mood, pessimistic mental states, and aversive self-awareness [33]. Ignoring the self-dehumanization of SEN students may lead to severe consequences, such as learned helplessness and degraded mental health [3]. Experimental studies among adults have found that a prolonged experience of powerlessness results in self-dehumanization [34], and that involvement in unethical behaviours leads to self-dehumanization, which in turn leads to continued unethical behaviour [35]. Researchers and educators should pay attention to the causes and intervene appropriately to minimize the adverse consequences. Dehumanization among children, and self-dehumanization among people with disabilities, have rarely been investigated [11]. This may be because many studies adopted the common paradigm of measuring abstract personality traits and human characteristics [25], which is not suitable for persons who lack a sense of agency, the capability for abstract thinking, or complex emotions. Thus, the immediate objective of this research is to develop and evaluate a new measure suitable for SEN students.

The family context of dehumanization

As summarized above, both other-dehumanization and self-dehumanization bring challenges in public health and education contexts. Beyond researching each target population separately, it is necessary to consider interpersonal relationships and look into the actor-partner effects of dehumanization, for example, the dyadic effect between SEN students and non-SEN peers. This is undoubtedly an important question: after all, establishing connections with society is at the core of inclusive education. In the literature, some studies have found such dyadic effects of dehumanization in the contexts of sexual objectification and unequal working environments [36]. Again, inclusive education has received little attention. Thus, examining these dyadic relations will enrich the understanding of the interpersonal and contextual aspects of dehumanization (demonstrated in Fig. 1). For SEN students, unfolding dehumanization in the family and school contexts will facilitate more effective intervention strategies and improvements in education policies (demonstrated in Fig. 2). Different categories of SEN, e.g., ADHD, ASD, and SpLD, are identified in schools [3].
Non-SEN individuals may attribute humanness to them based on subtle differences in stereotypes or on their experience of interpersonal contact with SEN students. For example, students with ADHD may be seen as overly expressive in secondary emotions, and thus high in HU, whereas students with ASD may be seen as lacking HU. Given the limited literature in the field, it is necessary to take a post-hoc approach by considering the SEN type and analysing the differences within the empirical data. The examination of dehumanization among SEN students, as a minority group in inclusive settings, is thus the focus.

Research gaps

Firstly, previous dehumanization studies have mostly focused on ethnic and racial groups, even though topics such as gender inequality and social minorities have begun to receive growing attention [9,37]. However, little attention has been paid to inclusive settings. In particular, no research has been conducted among SEN students. Thus, given the growing number of SEN students in Hong Kong schools, it is important to understand the prevalence and nature of self-dehumanization and other-dehumanization among these students. Secondly, to date, there is no available dehumanization measurement tool for SEN students, or for SEN individuals in general. Thus, the immediate objective of this study is to develop and evaluate a new measure suitable for the target population. Thirdly, dehumanization studies that simultaneously take the perpetrators' and victims' perspectives (e.g., self-dehumanization vs. other-dehumanization) are rare in the field [36]. There is a research gap in the investigation of the potentially reciprocal relationship, for instance, in a family context that involves SEN students and caregivers, or in a classroom that involves SEN students and non-SEN peers. The psychological consequences of these relations will be examined through dyadic modelling. Lastly, to understand the nature and implications of the dehumanization of SEN students, it is necessary to develop effective strategies to ameliorate dehumanization and its negative impact. Therefore, the study will incorporate previous findings and conduct programs to explore this possibility. Furthermore, it will be appealing to identify practical interventions in a way that is up-to-date, attractive, and aligned with popular culture among youngsters, e.g., short video clips that are compatible with Instagram.

Assumptions and research questions of the study

Using a mixed-methods approach, the study comprises two phases. The first phase will gain a deeper understanding of the dehumanization of SEN students through the lens of the victims (self-dehumanization of SEN students), close relations (other-dehumanization from non-SEN peers, teachers, and parents), and society (other-dehumanization from the public), and will lead to the second phase, in which we will conduct experimental studies to develop effective intervention strategies to reduce dehumanization. In Phase 1, we will conduct two cross-sectional survey studies to tap into the dehumanization of SEN students. Study 1a will recruit SEN students from mainstream schools in Hong Kong and examine the nature and impact of self-dehumanization. In addition, we will recruit non-SEN peers, teachers, and parents of the SEN students mentioned above, and examine the impact of other-dehumanization from close contacts.
Study 1b will recruit non-SEN students (enrolled in schools or classes without SEN peers) other than those mentioned in Study 1a, their parents, and university students, to investigate the nature and effects of the other-dehumanization that SEN students receive from the public.

Research questions in Phase 1

1. In what ways do SEN students dehumanize themselves? Is self-dehumanization related to their mental health and daily functioning in school (e.g., learning effectiveness, social interaction)?
2. In what ways do non-SEN peers, teachers, and parents dehumanize SEN students? Is other-dehumanization related to prejudice towards SEN students?
3. Is there any relationship between the self-dehumanization of SEN students and the other-dehumanization from school peers, teachers, and parents? How do such relations interact with well-being and school functioning on both sides?
4. In what ways does the public dehumanize SEN students? How does other-dehumanization relate to prejudice towards SEN students?

In Phase 2, aiming to ameliorate self-dehumanization and other-dehumanization, we will conduct four experimental studies to understand the underlying mechanism. Studies 2a and 2b will adopt the approach used in previous research [38], directly manipulating the perception of humanness, and will evaluate its effects on self-dehumanization and other-dehumanization. Studies 2c and 2d will incorporate the findings from Phase 1 and Studies 2a and 2b, produce video clips as experimental stimuli, and investigate the priming effects of watching these videos on dehumanization among participants.

Research questions in Phase 2

1. Will, or to what extent might, the self-dehumanization of SEN students and its negative impact be reduced by presenting information emphasizing human nature/human uniqueness (e.g., by reading relevant materials or watching a video clip)?
2. Will, or to what extent might, non-SEN peers', teachers', and parents' other-dehumanization of SEN students and its negative impact be reduced by presenting information emphasizing human nature/human uniqueness (e.g., by reading relevant materials or watching a video clip)?
3. Will, or to what extent might, the other-dehumanization of SEN students from the public and its negative impact be reduced by presenting information emphasizing human nature/human uniqueness (e.g., by reading relevant materials or watching a video clip)?

In summary, in Phase 1 we will explore the prevalence and nature of the self-dehumanization of SEN students, which is predicted to be negatively related to their mental well-being and school functioning. We will also investigate other-dehumanization from non-SEN school peers, teachers, and parents, and test whether dyadic relations exist between SEN students and their close relations (e.g., whether there is a dyadic relation of dehumanization on well-being between SEN students and their parents; see Fig. 1). In addition, among the public, other-dehumanization is expected to be positively related to prejudice and to reduced policy support. In Phase 2, incorporating the findings from Phase 1, we expect to find a negative impact of lacking-humanness priming compared to having-humanness priming (e.g., after reading the lacking-humanness materials, participants will demonstrate greater prejudice than those who read the having-humanness materials). In addition, we expect to identify a positive impact of watching video clips that emphasize SEN students' humanness.
Participants

Phase 1: For Study 1a, we will recruit SEN students, their parents, teachers, and non-SEN peers (i.e., classmates of SEN students) in mainstream secondary schools in Hong Kong that implement inclusive education. For Study 1b, we will recruit non-SEN students in Hong Kong secondary schools other than those who participated in Study 1a, their parents, and non-SEN university students. We conducted power analyses to calculate the required sample sizes. Assuming a small-to-medium effect size of r = 0.21 [39] for associations between dehumanization and other variables, 175 participants are required to obtain 80% power with a 5% α error rate in a two-tailed Pearson correlation test. A Monte Carlo simulation for a 10-item measurement model with moderate factor loadings (λ = 0.60) [40] indicated that 150 participants are required to obtain good model fit indices (CFI > 0.90, RMSEA < 0.06, SRMR < 0.06), which is fewer than 175. We expect data attrition and invalid answers to amount to 10%. Thus, for each target population we aim to recruit 193 (175 × 1.1) participants, except for school teachers (we aim at 50 for practical reasons). Hence, in total we aim to recruit 629 participants (193 SEN students + 193 peers + 193 parents + 50 teachers) in Study 1a, and 386 participants (193 non-SEN students + 193 parents and university students) in Study 1b.

Phase 2: In Studies 2a and 2c, we will recruit SEN students, their parents, teachers, and non-SEN peers to participate in the experiments. In Studies 2b and 2d, we will recruit non-SEN university students and their parents. However, we may not be able to recruit enough teachers for Studies 2a and 2c; therefore, we will focus on the other three groups. Power analysis suggests that 180 participants are required to obtain 80% power with a 5% α error rate to detect medium-sized differences between 4 conditions (one-way ANOVA, f = 0.25), and 160 participants are required for 3 conditions. Thus, we aim to recruit 540 participants (180 SEN students + 180 peers + 180 parents) in Study 2a, 180 non-SEN participants in Study 2b, 480 participants (160 SEN students + 160 peers + 160 parents) in Study 2c, and 160 non-SEN participants in Study 2d.

Procedure and instruments

For clarity of presentation, the key measures are summarized in Table 1 and the workflow of the study is presented in Fig. 3.

Phase 1

In Study 1a, SEN students will be invited to complete the Chinese version of the dehumanization measure together with the instruments described below. Due to anticipated difficulties in reading and understanding the items, the questionnaire will be designed in short form, with minimal complexity in vocabulary and questionnaire format. Rating scales will be designed to be SEN-friendly; Likert scale points may be replaced with shapes and emojis. Prior to survey administration, translation and back-translation will be conducted in consultation with SEN experts at CSENIE.

Self-dehumanization. A 10-item short-form dehumanization measure assessing self-perceived human uniqueness and human nature will be adapted from previous research (Cronbach's αs > 0.8) [41]. The original measure has been validated across cultures and within various populations [25].
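As a hedged illustration of the stated power calculations, the sketch below reproduces the two sample-size figures using the Fisher z approximation for the correlation test and statsmodels for the ANOVA; the software the authors actually used is not stated, so this is only one way to arrive at the same numbers.

```python
import math
from scipy.stats import norm
from statsmodels.stats.power import FTestAnovaPower

# Pearson correlation: N for r = 0.21, two-tailed alpha = .05, power = .80,
# via the Fisher z approximation: n = ((z_{a/2} + z_b) / atanh(r))^2 + 3.
r, alpha, power = 0.21, 0.05, 0.80
z_a = norm.ppf(1 - alpha / 2)
z_b = norm.ppf(power)
n_corr = math.ceil(((z_a + z_b) / math.atanh(r)) ** 2 + 3)
print(f"correlation test: n = {n_corr}")        # ~175-176, matching the text

# One-way ANOVA: medium effect f = 0.25, 4 conditions (and 3 conditions).
anova = FTestAnovaPower()
for k in (4, 3):
    n = anova.solve_power(effect_size=0.25, k_groups=k,
                          alpha=alpha, power=power)
    print(f"ANOVA, {k} groups: total N = {math.ceil(n)}")  # ~180 and ~158-160
```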
Participants will rate the extent to which the humanness traits best describe themselves.

Table 1. Summary of key measures in the study.
Participants | Measures used | Analysis focus
Phase 1, Study 1b: non-SEN students in schools or classes without SEN peers (N = 193); parents of non-SEN students and university students (N = 193) | Other-dehumanization of SEN students; prejudice towards SEN students; Social Distance Scale; school functioning (1); public policy support | Within-group and between-group differences of dehumanization; associations of dehumanization and outcomes
(1) For teachers, the questions assess their perceived school functioning of SEN students and non-SEN students in their class; for parents, the questions assess the perceived school functioning of their children.

Subjective well-being. Students' subjective well-being will be assessed by the widely used 5-item Satisfaction with Life Scale (SWLS) [42]. Previous studies suggest the SWLS is reliable (Cronbach's αs > 0.8) [43]. Sample items include "In most ways my life is close to my ideal" and "I am satisfied with my life." All items are rated on a 7-point Likert scale (1 = strongly disagree, 7 = strongly agree).

School functioning. A 19-item School Engagement Scale [44] will be used to assess students' daily functioning at school. The Chinese version, validated in the Chinese context in previous research (Cronbach's αs > 0.8) [45], will be used. Sample items include "I feel happy in school" and "I pay attention in class". In addition, to assess students' academic and career self-efficacy, we will adapt a 10-item measure previously validated among SEN students in Hong Kong [46]. All items are rated on a 5-point Likert scale (1 = not at all, 5 = very much).

Demographic information. Information regarding the school profile and students' grade, gender, age, SEN type, and academic level will be collected.

In the meantime, the non-SEN peers, teachers, and parents of the SEN students will be invited to complete a Chinese version of the dehumanization measure together with the instruments described below. Translations and instructions will differ from the version for SEN students, depending on the target population.

Other-dehumanization of SEN students. We will use the same measurement tool as for the SEN students mentioned above to assess other-dehumanization. Items are identical, except that participants will rate the extent to which they perceive the humanness traits as best describing their "SEN classmates/students/children".

Prejudice. A 24-item measure will be used to assess participants' prejudice towards SEN students in four dimensions (harm, separation, dependence, and idealization) [31]. It has previously been used to measure public prejudice towards people with developmental disabilities, with an average Cronbach's α > 0.75. Sample items include "I prefer not to interact with people who have SEN". In addition, we will include a 5-item Social Distance Scale (Cronbach's αs > 0.9) [47]. Sample items include "I can accept SEN students as my neighbours". All items are rated on a 5-point Likert scale (1 = not at all, 5 = very much).

Subjective well-being. The same measure as mentioned in the above section.

School functioning. The same measure as mentioned in the above section, except that for teachers the questions assess their perception of the school functioning of SEN and non-SEN students in their class, and for parents the questions assess their perception of the school functioning of their children.

Demographic information. Similar to that described in the above section.
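Since internal consistency (Cronbach's α) is the reliability criterion cited for most of these instruments, the following is a minimal sketch of how α could be computed for the adapted 10-item dehumanization measure; the item responses here are simulated placeholders, not study data.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: (n_respondents, k_items) matrix of Likert responses."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)         # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)     # variance of the sum score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Simulated placeholder responses: 193 participants x 10 items on a 1-7 scale,
# driven by one shared latent trait so the items cohere.
rng = np.random.default_rng(0)
latent = rng.normal(size=(193, 1))
noise = rng.normal(size=(193, 10))
responses = np.clip(np.round(4 + 1.2 * latent + noise), 1, 7)
print(f"Cronbach's alpha = {cronbach_alpha(responses):.2f}")
```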
In Study 1b, non-SEN students (enrolled in schools or classes without SEN peers), their parents, and university students will be invited to complete a Chinese version of the dehumanization measure together with the instruments described below.

Other-dehumanization of SEN students. Items are similar to those for the non-SEN peers, teachers, and parents in Study 1a, but the target of dehumanization will be "SEN students" in general.

Prejudice. The same measure as used in Study 1a.

School functioning. The same measure as used in Study 1a.

Public policy support. A 6-item measure will be used to assess policy support for SEN students. The measure will be adapted from a previous study regarding the dehumanization of low-SES groups and taps into several kinds of social welfare policy (Cronbach's αs > 0.8) [38]. Sample items include "I support the government increasing healthcare spending for SEN students". Items are rated on a 7-point Likert scale (1 = strongly disagree, 7 = strongly agree).

Demographic information. Similar to that in Study 1a.

Phase 2

In Study 2a, we will invite SEN students, their parents, teachers, and non-SEN peers to a computer lab to participate in the experiment by completing a series of tasks on computers; in Study 2b, we will similarly invite non-SEN university students and their parents to a computer lab for the same purpose. The procedures of both studies are as follows.

• Participants will be informed that they are participating in a project to "help psychologists accurately categorize personality descriptors". Once they agree, they will be randomly assigned to one of four conditions implementing an HU (Human Uniqueness) or an HN (Human Nature) manipulation: the lacking-HU condition, the having-HU condition (as the control for the lacking-HU condition), the lacking-HN condition, or the having-HN condition (as the control for the lacking-HN condition). Thus, the study is a four-condition between-subjects experimental design.
• Participants in all conditions will first complete a humanness measure that rates "people in Hong Kong" on HU and HN using a 7-point Likert scale. These items are similar to the ones used in Phase 1 and serve the purpose of our cover story; they also establish each participant's baseline of humanness attribution.
• Next, depending on the condition to which they are assigned, participants will read a paragraph of descriptions, with tables or graphs, addressing a fictitious study of how many, or to what degree, humanness traits SEN individuals demonstrate (samples for the having-HU and having-HN conditions are given in Tables 2 and 3; the materials for SEN students and their close relations will be tailor-made to sound more natural and realistic than the version for the public).

Table 2. A sample reading material used in the having-HU condition: "Now, before answering more questions regarding a particular group in our society, please read the following description of the group, adopted from a scientific report. SEN students in mainstream schools usually have few resources and are generally considered to have learning difficulties. However, the results of the study have shown that members of this group tend to act according to their common sense, both good and bad, are very rational, and mostly have control over their behavior. Their civility and rational behavior, as we understand it, are two of their main characteristics, according to the study.
Additionally, their abilities to reflect on and control their actions give the impression that they are in fact mature, as they tend to behave logically."

Table 3. A sample reading material used in the having-HN condition: "Now, before answering more questions regarding a particular group in our society, please read the following description of the group, adopted from a scientific report. SEN students in mainstream schools usually have few resources and are generally considered to have learning difficulties. However, the results of the study have shown that members of this group tend to act according to their emotions and feelings, both good and bad, are very open, and sometimes show expressive behaviors. Their emotionality and cognitive openness, as we understand it, are two of their main characteristics, according to the study. Additionally, their abilities to reflect on and express their thoughts give the impression that they are in fact deep, as they tend to think in different ways."

• After reading the paragraph, participants will complete manipulation check questions. One question will ask to what extent they agree with the paragraph, and a few items will assess self-dehumanization (for SEN students) or other-dehumanization of SEN individuals (for the other groups).
• Next, depending on their group identity, participants will complete a few questions similar to those used in Phase 1. Measures include prejudice towards SEN students, social distance, public policy support, and subjective well-being.
• Lastly, participants will be debriefed regarding the purpose of the experiment.

Prior to conducting Studies 2c and 2d, we will produce three short video clips as priming materials. Two of the video clips will emphasize the humanness of SEN students (one for HU and one for HN). The videos will incorporate the findings drawn from Phase 1 and Studies 2a and 2b into a script that highlights strengths, humanness, and mental states, making a vivid personality presentation that combines documentary footage, news reports, or interviews conducted by the research team (e.g., SEN students who have exceptional skills or kindness). The information in these two video clips will be comparable in amount, structure, and attractiveness. The third video clip will be used in the control condition and will contain only irrelevant and neutral information (e.g., a clip introducing cosmology). Overall, the goal of producing these video clips is to adopt a popular format, deliver information via a layman's approach, and make the priming accessible for future public education purposes.

In Study 2c, we will invite SEN students, their parents, teachers, and non-SEN peers, and in Study 2d we will invite non-SEN individuals, to a computer lab to complete a series of tasks.

• Participants will be randomly assigned to one of three conditions: the HU-emphasis condition, the HN-emphasis condition, or the control condition. Thus, the study is a three-condition between-subjects experimental design.
• Participants will go through the same procedure as in Studies 2a and 2b, except that the reading material is replaced by the corresponding video clip mentioned in the above section.

Data analysis

• Missing data handling. To maximize estimation efficiency and reduce bias, we will impute missing data using Markov Chain Monte Carlo multiple imputation methods [48] and the k-Nearest Neighbor algorithm [49].
• Measurement model evaluation. So far, there is no dehumanization measure available that is specifically designed for SEN students. Thus, we will first evaluate the measurement model with coefficient alpha, coefficient omega, Confirmatory Factor Analysis, and Exploratory Structural Equation Modelling [50,51].
• Associations of variables. Structural Equation Modelling will be used to test the associations of dehumanization and the outcome variables in Phase 1. We will apply the Actor-Partner Interdependence Model [52] to assess the dyadic effects between the self-dehumanization of SEN students and other-dehumanization from close relations (Fig. 1).
• Experimental effects and within-subject differences. To address within-subject differences of dehumanization in Phase 1 and to test between-condition differences in Phase 2, we will conduct t-tests and Analysis of Variance. Beyond evaluating the effect sizes (e.g., Cohen's d), we will conduct equivalence tests [53] and Bayesian statistical analysis to evaluate the robustness of the experimental effects.
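As a minimal illustration of the dyadic analysis, the sketch below fits an Actor-Partner Interdependence Model for distinguishable dyads (SEN student and parent) as two regressions with actor and partner effects. The variable names and simulated data are illustrative only, and in practice the authors plan to estimate this model within a structural equation framework.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated placeholder dyads: each row is one SEN student-parent pair.
rng = np.random.default_rng(1)
n = 193
dehum_student = rng.normal(size=n)      # student self-dehumanization
dehum_parent = rng.normal(size=n)       # parent other-dehumanization
wb_student = -0.4 * dehum_student - 0.2 * dehum_parent + rng.normal(size=n)
wb_parent = -0.3 * dehum_parent - 0.1 * dehum_student + rng.normal(size=n)
df = pd.DataFrame(dict(dehum_student=dehum_student, dehum_parent=dehum_parent,
                       wb_student=wb_student, wb_parent=wb_parent))

# Actor effect: own dehumanization -> own well-being.
# Partner effect: the other dyad member's dehumanization -> own well-being.
m_student = smf.ols("wb_student ~ dehum_student + dehum_parent", df).fit()
m_parent = smf.ols("wb_parent ~ dehum_parent + dehum_student", df).fit()
print(m_student.params)   # actor = dehum_student, partner = dehum_parent
print(m_parent.params)
```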
Discussion

The harmful consequences of undermining others' humanity have been documented throughout history. The continually growing research into dehumanization, however, reveals its existence in our daily lives. Dehumanization is a complex and pervasive phenomenon across cultures, ethnic groups, and social hierarchies, and has a profound impact on moral judgment, prejudice, and public health.
The study will offer a new perspective in dehumanization research, which examines the self-and other-dehumanization in a dyadic relationship model and widen our understanding of humanness among different minorities in history and society. Unlike most of the existing literature that has involved group-based investigations, the study will take the interpersonal approach to understand the dehumanization through the dyadic model. From such an integrative perspective, it will examine the impact of dehumanization within family units and within classrooms, seek to isolate the associations between SEN students and caregivers, non-SEN peers, and teachers. With the findings, it is possible to pinpoint such reciprocal relationships and shed lights to future research into how to intervene within the units to diminish dehumanization, especially among those close to SEN students, for realizing the inclusive community. Empirical impacts Through experiments, the study will further the understanding of conditions under which emphasizing specific humanness buffers the negative consequences of otherdehumanization. The results will provide immediate feedback on the interventions, have important empirical impact on how educators, parents, and the public view SEN students, and incorporate such understanding into teaching, parenting, and improving public policy support for inclusive education. In addition, to increase the potential to generalize the experimental findings into practice, the study will include short video clips as priming materials, a method that is trending in popular culture (e.g., Instagram videos). The goal is to deliver information via a layman's approach and make priming accessible for future public education purposes. The resources will help seeking funding opportunities, e.g. Quality Education Fund, for program initiatives supporting SEN students and enhancing the community awareness in inclusion. At professional development level, the outcome can be further disseminated in teacher education courses or on-line teaching, particularly in SEN support and guidance, meeting the core value of the policy and practice of Whole School Approach to Integrated Education in Hong Kong mainstreaming schools. The dyadic modelling will provide a vital view on how to ameliorate these negative consequences in family contexts and classroom contexts. By determining how SEN students are dehumanized by and identifying the process's associations with close relations, the findings will offer insights not only to the "perpetrators" and the "victims", but also to families, classrooms, and schools, facilitating finding solutions as a unit. In a broader picture, the study will enhance public awareness and provide insights to gain a deeper understanding of mental states and life obstacles SEN students encountered in their daily lives. By understanding the underlying mechanism and psychological impact of self-dehumanization and otherdehumanization, the outcome will contribute to future practical endeavours to ameliorate the dehumanization of SEN students, as well as other socioeconomically disadvantaged groups. The impact is significant in accumulating knowledge, skills and successful strategies in school, family and community support.
Ten SNPs May Affect Type 2 Diabetes Risk in Interaction with Prenatal Exposure to Chinese Famine

Increasing numbers of studies have demonstrated that genes and famine may interact to influence type 2 diabetes risk. Data derived from the cross-sectional 2010–2012 China National Nutrition and Health Survey (CNNHS) were examined to explore whether genes and famine interacted to influence type 2 diabetes risk. In total, 2216 subjects were involved. The subjects born in 1960 and 1961 were selected as the famine-exposed group, whereas subjects born in 1963 were selected as the unexposed group. A Mass Array system was used to detect the genotypes of 50 related single-nucleotide polymorphisms (SNPs). Interactions were found between prenatal exposure to famine and ten SNPs (rs10401969, rs10886471, rs10946398, rs1470579, rs2796441, rs340874, rs3794991, rs5015480, rs7961581, and rs9470794) on type 2 diabetes risk after adjustments. The stratified results showed that famine exposure exacerbated the effect of CILP2-rs10401969 on fasting serum insulin (FINS), of GRK5-rs10886471 on fasting plasma glucose (FPG) and FINS, of IGF2BP2-rs1470579 on FINS, of TLE1-rs2796441 on impaired fasting glucose (IFG), of PROX1-rs340874 on impaired glucose tolerance (IGT), of GATAD2A-rs3794991 on FINS, of TSPAN8/LGR5-rs7961581 on FPG, and of ZFAND3-rs9470794 on IGT and FINS. Famine exposure weakened the effect of CDKAL1-rs10946398 on type 2 diabetes. Famine exposure weakened the effect of HHEX-rs5015480 on IFG, but exacerbated its effect on FINS. The present study suggests that ten SNPs may affect type 2 diabetes risk in interaction with prenatal exposure to the Chinese famine.

Introduction

The occurrence of type 2 diabetes is influenced not only by the environment, but also by inherited causes. By associating regions of the genome with disease susceptibility, loci influencing type 2 diabetes risk have been identified [1]. Furthermore, convincing evidence has shown that genetic factors play an important role in causing type 2 diabetes, and more than 100 loci have been confirmed to contribute to type 2 diabetes risk in different ethnic populations, which promises to accelerate our understanding of disease pathology [2]. During the period of 1959–1961, Chinese people suffered the most severe famine in the world [3]. Some studies have found that exposure to severe famine in the prenatal or postnatal period was associated with the development of type 2 diabetes in adulthood. Data from different famines around the world have been utilized to explore the association of early-life malnutrition with type 2 diabetes risk in adulthood, and the "famine effect" has been found in China and in some foreign studies, including Asian, European, and African populations [4][5][6][7][8][9][10][11]. Individuals exposed to famine may undergo adaptations to malnutrition, with fetal adaptations including reduced growth, small size at birth, or low birth weight [12]. One study assessed the interaction between birth weight and genetic susceptibility to type 2 diabetes in two independent prospective cohorts in the USA, and the data suggested that low birth weight and genetic susceptibility to obesity may affect the adulthood risk of type 2 diabetes [13].
The latest research has found interactions between famine and some genes in the occurrence of type 2 diabetes, which means that some variants, such as SIRT1, PPAR-γ2 Pro12Ala, and IGF2BP2, may influence susceptibility to type 2 diabetes among populations experiencing famine or malnutrition in early life [14][15][16]. The gene-environment interactions resulting from famine and increased type 2 diabetes risk have contributed to the epidemic of type 2 diabetes in China [17]. Thus, we used data from the China National Nutrition and Health Survey (CNNHS) 2010–2012 to explore whether there were genetic variants that may affect type 2 diabetes risk with prenatal exposure to the Chinese famine.

Data Resources

The CNNHS 2010–2012 was a nationally representative cross-sectional study which assessed the nutrition and health status of Chinese residents. The 2010–2012 survey covered all 31 provinces, autonomous regions, and municipalities throughout China (except for Taiwan, Hong Kong, and Macao). The country was divided into four strata (large cities, medium and small cities, ordinary rural areas, and poor rural areas) according to their characteristics of economic and social development, using data from the China National Bureau of Statistics [18]. In this survey, subjects were recruited using a stratified multistage cluster and probability-proportional-to-size sampling design, which was described in a previous study [19]. The Chinese famine lasted for three years, 1959–1961. Therefore, we established our famine cohort: the subjects born in 1960–1961 were selected as the famine-exposed group, whereas subjects born in 1963 were selected as the unexposed group. The subjects in the two groups were 1:1 matched by gender and birth area, with 1108 subjects in each group. Questionnaires were used to collect information on demographic characteristics. Blood samples were also collected from the subjects. The exclusion criteria were: an unqualified blood sample; failure of DNA extraction; abnormal gene detection results; incomplete basic information; subjects suffering from liver, kidney, or heart diseases or cancer; and subjects who had been diagnosed with type 2 diabetes and had changed their lifestyle. The protocols of the 2010–2012 CNNHS and "Fetal origin hypothesis of diabetes: thrifty genotype hypothesis or thrifty phenotype" were both approved by the Ethical Committee of the National Institute for Nutrition and Health, Chinese Center for Disease Control and Prevention (No. 2013-018, No. 2013-010). Signed consent forms were obtained from all subjects.

Assessments of Variables

Information about demographic characteristics, dietary factors, smoking and drinking status, exercise, and anthropometric data was derived from the questionnaires. Self-reported education levels were classified as illiteracy to primary school, junior middle school, and senior high school or higher. Current economic status was assessed by the per capita annual income of households in 2011, and was divided into three levels: <20,000, 20,000–40,000, and >40,000 RMB. Smoking and drinking status was classified as "yes" or "no". A validated semi-quantitative food frequency questionnaire and a 24-h recall method for the last three consecutive days (two weekdays and one weekend day) were used to collect data regarding dietary intake. In the present study, we only considered the intake of cereals and beans and the intake of meat and poultry as confounders, as they have been found to be associated with type 2 diabetes [20,21].
The Chinese Dietary Guideline recommends a reference intake of meat and poultry between 40 g and 75 g, and a reference intake of cereals and beans between 50 g and 150 g [22]. Thus, we assessed intake according to these reference intakes. The intake of meat and poultry was divided into three categories: low (<40 g/d), medium (from ≥40 to ≤75 g/d), and high (>75 g/d). Dietary intake of cereals and beans was divided into three categories: insufficient (<50 g/d), sufficient (from ≥50 to ≤150 g/d), and excessive (>150 g/d). Physical activity questionnaires were used to collect physical activity variables, such as whether exercise was taken or not, and total sedentary time (watching TV, using computers, playing video games, and reading) in the subjects' leisure time. BMI was calculated as weight in kilograms divided by height in meters squared (kg/m²). Fasting glucose was measured from morning fasting venous blood samples. Then, the subjects without known diabetes were required to take 75 g of oral glucose, and after two hours venous blood samples were collected again to obtain the 2-h plasma glucose. We used the criteria proposed by the World Health Organization, the International Diabetes Federation, and the American Diabetes Association on diabetes mellitus [23][24][25][26]. Impaired fasting glucose (IFG) was defined as fasting plasma glucose (FPG) ≥6.1 and <7.0 mmol/L with 2-h plasma glucose <7.8 mmol/L. Impaired glucose tolerance (IGT) was defined as FPG <7.0 mmol/L and 2-h plasma glucose ≥7.8 and <11.1 mmol/L. Type 2 diabetes was defined as FPG ≥7.0 mmol/L and/or 2-h plasma glucose ≥11.1 mmol/L and/or a previous clinical diagnosis of type 2 diabetes. Fasting serum insulin (FINS) was measured by an iodine-125 insulin radioimmunoassay kit.

Genotyping

According to the latest reports in genome-wide association studies and other studies, 61 related single-nucleotide polymorphisms (SNPs) were included in our study [27][28][29][30][31][32][33]. A Mass Array system (Agena, San Diego, USA) was used to detect the genotypes of the 61 SNPs. No significant departures from the Hardy-Weinberg equilibrium (HWE) were detected among subjects without type 2 diabetes using the chi-square test, which suggested the sample was representative (Supplementary Table S1). At the individual level, we removed samples whose call rates were less than 50%. At the SNP level, we excluded SNPs if their call rate was <80% and/or their p-value for HWE was <0.0001 in subjects without type 2 diabetes. Thus, 2216 subjects and 50 SNPs were ultimately included in the analysis.

Statistical Analysis

The statistical software package SAS version 9.4 (SAS Institute, Cary, NC, USA) was used for data analysis. A p-value < 0.05 was considered significant. Continuous variables were presented as mean ± SD or median (P25, P75) according to their distribution, and categorical variables were presented as frequency and percentage. Chi-square and t-tests were used for the comparison of differences between the exposed and unexposed groups. Interactions were tested by creating interaction terms for each genetic variant (coded 0 and 1 for not carrying and carrying the risk allele, respectively) with the exposure group (coded 0 and 1 for unexposed and exposed subjects, respectively). We tested the multiplicative interaction with famine exposure by using a likelihood ratio test comparing models with and without the cross-product term.
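As an illustrative sketch of this interaction test (the original analysis was run in SAS 9.4; this is a Python reimplementation, not the authors' code, and all column names such as `t2dm`, `risk_allele`, and `famine_exposed` are hypothetical), one could fit logistic models with and without the cross-product term and compare them with a likelihood ratio test:

```python
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

# Covariates mirroring the paper's adjusted models; names are illustrative.
COVARIATES = ("age + C(sex) + C(education) + C(income) + C(smoking) + "
              "C(drinking) + C(meat_intake) + C(cereal_intake) + "
              "C(exercise) + sedentary_time + bmi + C(family_history)")

def famine_gene_interaction_pvalue(df: pd.DataFrame) -> float:
    """Likelihood ratio test for a multiplicative gene x famine interaction.

    Expects 0/1 columns 't2dm', 'risk_allele', and 'famine_exposed' plus
    the covariates above (all hypothetical stand-ins, not the CNNHS names).
    """
    base = smf.logit(f"t2dm ~ risk_allele + famine_exposed + {COVARIATES}",
                     data=df).fit(disp=False)
    full = smf.logit(f"t2dm ~ risk_allele * famine_exposed + {COVARIATES}",
                     data=df).fit(disp=False)
    lr_stat = 2.0 * (full.llf - base.llf)   # chi-squared with 1 df here
    return stats.chi2.sf(lr_stat, df=1)
```

The cross-product term carries one degree of freedom, so the test statistic is referred to a chi-squared distribution with 1 df.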
Associations between SNPs and type 2 diabetes risk were then assessed, stratified by fetal exposure to famine. General linear model regression was used to test the relationship between FPG, FINS, and SNPs, adjusting for covariates such as age, gender, education level, economic status, smoking, drinking, the intake of meat and poultry, the intake of cereals and beans, physical exercise, sedentary time, BMI, and family history of type 2 diabetes. Logistic regression was used to estimate the ORs for the risk of type 2 diabetes, IFG, and IGT after adjusting for the aforementioned covariates.

Results

A total of 2216 subjects were included in the current study, with an average age of 49.7 years. General characteristics of subjects in the exposed and unexposed groups are shown in Table 1. There were group differences in age, education level, and drinking. Table 2 shows the interactions between genetic variants and prenatal exposure to famine as they influence type 2 diabetes risk. Interactions were found between prenatal exposure to famine and ten SNPs (rs10401969, rs10886471, rs10946398, rs1470579, rs2796441, rs340874, rs3794991, rs5015480, rs7961581 and rs9470794) on type 2 diabetes risk after adjustments for age, gender, education level, economic status, smoking, drinking, the intake of meat and poultry, the intake of cereals and beans, physical exercise, sedentary time, BMI, and family history of type 2 diabetes (p < 0.05). Table 3 shows that FPG increased by 0.474 mmol/L among risk allele carriers (rs10886471) in the exposed group (p = 0.032), and FINS decreased by 2.996 mU/L among risk allele carriers in the unexposed subjects (p = 0.023). There was a significant association of rs10946398 with type 2 diabetes for risk allele carriers in the unexposed group (OR = 3.263, 95%CI: 1.584–6.724, p = 0.001). FINS increased by 1.427 mU/L among risk allele carriers (rs1470579) in the exposed subjects (p = 0.011). FINS increased by 1.725 mU/L among risk allele carriers (rs3794991) in the exposed group (p = 0.046). There was a significant association of rs5015480 with IFG (OR = 1.941, 95%CI: 1.119–3.366, p = 0.018) for risk allele carriers in the unexposed group, and FINS increased by 1.260 mU/L among risk allele carriers in the exposed group (p = 0.032). FPG increased by 0.171 mmol/L among risk allele carriers (rs7961581) in the exposed subjects (p = 0.042). There was a significant association of rs9470794 with IGT for risk allele carriers in the exposed group (OR = 7.902, 95%CI: 1.063–58.735, p = 0.043), and FINS increased by 2.105 mU/L among risk allele carriers in the exposed group (p = 0.018). In the exposed subjects, FINS tended to increase among risk allele carriers (rs10401969) (p = 0.092), whereas this was not true in the unexposed subjects (p = 0.210). There was a borderline significant association between rs2796441 and IFG (OR = 0.587, 95%CI: 0.336–1.026, p = 0.061), as well as between rs340874 and IGT (OR = 0.616, 95%CI: 0.352–1.077, p = 0.089), in the unexposed subjects.

Discussion

The Chinese famine provides a unique opportunity to investigate the interactions of prenatal exposure to famine with type 2 diabetes and related measurements. The latest studies have found that prenatal exposure to famine interacted with some genes in influencing type 2 diabetes [14][15][16]. Thus, we investigated interactions of SNPs associated with type 2 diabetes in the Chinese population exposed to famine in utero.
Our stratified results showed that famine exposure exacerbated the effect of CILP2-rs10401969 on FINS, of GRK5-rs10886471 on FPG and FINS, of IGF2BP2-rs1470579 on FINS, of TLE1-rs2796441 on IFG, of PROX1-rs340874 on IGT, of GATAD2A-rs3794991 on FINS, of TSPAN8/LGR5-rs7961581 on FPG, and of ZFAND3-rs9470794 on IGT and FINS. Famine exposure weakened the effect of CDKAL1-rs10946398 on type 2 diabetes. Famine exposure weakened the effect of HHEX-rs5015480 on IFG, but exacerbated its effect on FINS. To our knowledge, these ten SNPs are the first found to interact with prenatal exposure to famine on type 2 diabetes risk. The IGF2BP2 gene, which encodes the IGF2 mRNA binding protein 2, is suggested to play a role in the regulation of insulin production and beta cell function, and IGF2BP2-rs4402960 showed an interaction with prenatal exposure to famine on glucose levels in the Dutch Famine Birth Cohort Study in Amsterdam [15]. Some studies have explored the interaction of genes with fetal malnutrition or birth size/weight in type 2 diabetes risk (K121Q, HHEX, CDKN2A/2B, etc.) [12,34]. However, IGF2BP2-rs4402960 did not show an interaction between birth weight and the risk of developing type 2 diabetes in the Helsinki Birth Cohort Study [34]. Variants in CDKAL1 are associated with beta cell function and influence insulin secretion. The Helsinki Birth Cohort Study investigated the interaction between birth weight and CDKAL1-rs7754840 on the risk of developing type 2 diabetes, and the results were negative [34]. CDKAL1-rs10946398 was previously reported to be associated with birth weight and type 2 diabetes [35], so it is possible that CDKAL1-rs10946398 influences type 2 diabetes risk by affecting birth weight, or that CDKAL1-rs10946398 indeed interacts with prenatal exposure to famine in type 2 diabetes risk; however, such explanations are speculative and still need to be replicated in different cohorts. HHEX is associated with impaired insulin release through its influence on beta cell development, and HHEX-rs1111875 was found to interact with low birth weight in type 2 diabetes in the Helsinki Birth Cohort Study, which indicated that low birth weight might affect the strength of the association of the variants with type 2 diabetes [34,36]. CILP2 encodes cartilage intermediate layer protein 2; GRK5 plays a crucial role in multiple G-protein-coupled receptors (GPCRs) and non-GPCR substrates which are key regulators of glucose homeostasis or inflammation; TLE1 and GATAD2A are protein-coding genes; PROX1 influences insulin secretion by influencing beta cell development; TSPAN8/LGR5 seems to result in pancreatic beta cell dysfunction and influences insulin secretion; and the expression of ZFAND3 has been found in mouse pancreatic islets with altered beta-cell function [2,29,31,36]. Previous researchers found that exposure to famine in utero or food restriction during gestation impaired and reduced glucose tolerance or decreased beta cell mass [16], which predisposes humans to type 2 diabetes in later life [37,38]. Most of these type 2 diabetes susceptibility genes are associated with expression and/or function in beta cells and with changed insulin secretion. Whether these SNPs involved in fetal development can influence type 2 diabetes in adulthood still needs to be replicated in later cohorts. The present study has several advantages. It is the first to examine the interaction of this many SNPs with fetal exposure to the Chinese famine on type 2 diabetes risk.
Moreover, our study utilized nationally representative data and provided a scenario to assess whether the variants influence the established association between prenatal exposure to famine and type 2 diabetes risk. We found several variants that showed interactions, although these still need to be confirmed in later studies. There are also some limitations that should be mentioned. Although we considered some lifestyle factors, other confounding factors, such as the consumption of sugar-sweetened beverages, eggs, and fruits and vegetables, were not considered in our study. Additionally, the mechanism by which these SNPs interact with prenatal famine to influence type 2 diabetes risk remains unclear, and should be examined in the future.

Conclusions

Our study suggests that ten SNPs may be genetic factors influencing type 2 diabetes risk among famine-exposed subjects, which might synergistically impair the development and function of beta cells, increasing type 2 diabetes risk in adulthood.

Author Contributions: The authors' contributions were as follows: C.S. conceived the study, collected and analyzed the data, and wrote and revised the manuscript; C.D., F.Y., G.F., and Y.M. collected the data; A.L. supervised the study and contributed to the discussion, interpretation of the data, and manuscript revision. All authors have read and agreed to the published version of the manuscript.
Semi-phenomenological description of the chiral bands in $^{188,190}$Os

A set of interacting particles is coupled to a phenomenological core described using the generalized coherent state model. Among the particle-core states, a finite set is identified having the property that the angular momenta carried by the proton and neutron quadrupole bosons and by the particles are mutually orthogonal. The magnetic properties of such states are studied. All terms of the model Hamiltonian exhibit chiral symmetry except the spin-spin interaction. There are four bands built on two-quasiparticle-core dipole states, exhibiting properties which are specific to magnetic twin bands. An application is presented for the isotopes $^{188,190}$Os.

I. INTRODUCTION

Some of the fundamental properties of nuclear systems may be evidenced through their interaction with an electromagnetic field. The two components of the field, electric and magnetic, are used to explore properties of electric and magnetic nature, respectively. At the end of the last century, the scissors-like states [1][2][3] and the spin-flip excitations [4] were widely treated by various groups. The scissors mode describes the angular oscillation of the proton system against the neutron system, and its total strength is proportional to the nuclear deformation squared, which reflects the collective character of the excitation [3,4]. By virtue of this feature, it was believed that collective magnetic properties are in general associated with deformed systems. This is not true, however, as shown by the magnetic dipole bands, where the ratio between the moment of inertia and the B(E2) value for exciting the first $2^+$ state from the ground state $0^+$, $\mathcal{I}^{(2)}/B(E2)$, takes large values, of the order of $100\,(eb)^{-2}\,\mathrm{MeV}^{-1}$. These large values can be explained by a large transverse magnetic dipole moment which induces magnetic dipole transitions, but almost no charge quadrupole moment [5]. Indeed, several experimental data sets show that the dipole bands have large values of $B(M1) \sim 3-6\,\mu_N^2$ and very small values of $B(E2) \sim 0.1\,(eb)^2$ (see, for example, Ref. [6]). These states are different from the scissors-mode states, exhibiting instead a shears character. A system with a large transverse magnetic dipole moment may consist of a triaxial core to which a prolate proton orbital and an oblate neutron hole orbital are coupled. The maximal transverse dipole moment is achieved when, for example, $j_p$ is oriented along the short axis of the core and $j_n$ along the long axis, while the core rotates around the intermediate axis. Suppose that the three orthogonal angular momenta form a right-handed trihedral frame. If the Hamiltonian describing the interacting system of protons, neutrons, and the triaxial core is invariant under the transformation which changes the orientation of one of the three angular momenta, i.e., the right-handed trihedral frame is transformed into a left-handed one, one says that the system exhibits a chiral symmetry. As always happens, such a symmetry is identified when it is broken, and consequently the two trihedral frames correspond to distinct energies, otherwise close to each other. Thus, a signature for a chiral symmetry characterizing a triaxial system is the existence of two $\Delta I = 1$ bands which are close in energy. On increasing the total angular momentum, the gradual alignment of $j_p$ and $j_n$ to the total angular momentum takes place and a magnetic band is developed.
In [7] we attempted to investigate another chiral system, consisting of a phenomenological core with two components, one for protons and one for neutrons, and two quasiparticles whose angular momentum J is oriented along the symmetry axis of the core due to the particle-core interaction. In the quoted reference we proved that states of total angular momentum I, where the three components mentioned above carry angular momenta $J_p$, $J_n$, $J$ which are mutually orthogonal, do exist. Such a configuration seems to be optimal for defining a large transverse magnetic moment that induces large M1 transitions. In choosing candidate nuclei with chiral features, we were guided by the suggestion [5] that triaxial nuclei may favor orthogonality of the aforementioned three angular momenta and therefore may exhibit a large transverse magnetic moment. In the previous publication, the formalism was applied to $^{192}$Pt, which satisfies the triaxiality signature condition. Here the same formalism is applied to two other isotopes, $^{188,190}$Os. Moreover, the proton and neutron gyromagnetic factors are calculated in a self-consistent manner. Also, an extended discussion concerning the chiral symmetries of the spin-spin interaction, the broken symmetries, and the associated phase transition is presented.

II. BRIEF REVIEW OF THE GCSM

The core is described by the Generalized Coherent State Model (GCSM) [8], which is an extension of the Coherent State Model (CSM) [9] to a composite system of protons and neutrons. The CSM is based on the ingredients presented below. For the sake of a self-contained presentation, in what follows we give some minimal information about the phenomenological formalism of the GCSM, providing a description of the core system. In this way the necessary notation and the specific properties of the core are introduced. The usual procedure for describing excitation energies with a given boson Hamiltonian is to diagonalize it and fix the structure coefficients such that some particular energy levels are reproduced. For a given angular momentum, the lowest levels belong to the ground, gamma, and beta bands, respectively. For example, the lowest state of angular momentum 2, i.e.,
$2^+_1$, is a ground band state; the next lowest, $2^+_2$, is a gamma band state; while $2^+_3$ belongs to the β band. The dominant components of the corresponding eigenstates are one-, two-, and three-phonon states. The harmonic limit of the model Hamiltonian yields a multi-phonon spectrum, while on switching on a deforming anharmonicity the spectrum becomes a reunion of rotational bands. The correspondence of the two kinds of spectra, characterizing the vibrational and rotational regimes respectively, is realized according to the Sheline-Sakai scheme [10]. In the near-vibrational limit a certain staggering is observed for the γ band, while in the rotational extreme the staggering is different. The bands are characterized by the quantum number K, which for axially symmetric nuclei is 0 for the ground and β bands and equal to 2 for the γ band. The specific property of a band structure consists in the E2 transition probabilities within a band being much larger than the ones for transitions between two different bands. For γ-stable nuclei, the energies of the states heading the γ and β bands are ordered as , while for γ-unstable nuclei the ordering is reversed. A third class of nuclei exists for which , J even. These are the fundamental features for which the wave functions should provide a description in any realistic approach. The CSM builds a restricted basis requiring that the states before and after angular momentum projection are orthogonal and, moreover, accounts for the properties listed above. If such a construction is possible, then one attempts to define an effective Hamiltonian which is quasi-diagonal in the selected basis. The CSM is, as a matter of fact, a possible solution in terms of quadrupole bosons [9]. Unlike in the CSM, in the GCSM [8] quadrupole proton-like bosons, $b^\dagger_{p\mu}$, provide the description of the protons, while quadrupole neutron-like bosons, $b^\dagger_{n\mu}$, provide that of the neutrons. Since one deals with two quadrupole bosons instead of one, one expects a more flexible model and a simpler solution satisfying the restrictions required by the CSM. The restricted collective space is defined by the states providing the description of the three major bands (ground, beta, and gamma) as well as the band built on the isovector state $1^+$. The orthogonality conditions, required for both intrinsic and projected states, are satisfied by six functions which generate, by angular momentum projection, six rotational bands: (2.2) Note that a priori we cannot select one of the two sets of states $\phi_{JM}$ and $\phi^{(\gamma)}_{JM}$ for the gamma band, although one is symmetric and the other asymmetric against the proton-neutron permutation. The same is true for the two isovector dipole-state candidates. In [11], results obtained by using as alternatives a symmetric structure and an asymmetric structure for the gamma band states were presented. Therein it was shown that the asymmetric structure for the gamma band does not conflict with any of the available data. In contrast, on considering an asymmetric structure for the gamma states and fitting the model Hamiltonian coefficients in the manner described in [8], a better description of the beta band energies is obtained for some nuclei. Moreover, in that situation the description of the E2 transitions becomes technically very simple. The results obtained in [8,11] for $^{156}$Gd are relevant in this respect. For these reasons, here we adopt the option of a proton-neutron asymmetric gamma band. All calculations performed so far considered equal deformations for protons and
neutrons. The deformation parameter for the composite system is given by Eq. (2.3). The factors $N^{(k)}_J$ with $k = g, \beta, \gamma, \tilde\gamma, 1, \tilde 1$ involved in the wave functions are normalization constants calculated in terms of some overlap integrals. We now seek an effective Hamiltonian for which the projected states (2.1) are, at least to a good approximation, eigenstates in the restricted collective space. The simplest Hamiltonian fulfilling this condition is given by Eq. (2.4), with $\hat{J}$ denoting the proton and neutron total angular momentum. The Hamiltonian given by Eq. (2.4) has only one off-diagonal matrix element in the basis (2.1). However, our calculations show that this affects the energies of the β and γ bands only at the level of a few keV. Therefore, the excitation energies of the six bands are, to a good approximation, given by the diagonal elements (2.5). The F-spin properties of the model Hamiltonian and the analytical behavior of the energies and wave functions in the extreme limits of the vibrational and rotational regimes have been studied [8,[11][12][13][14][15]]. Results for the asymptotic regime of deformation suggest that the proposed model generalizes both the two-rotor model [1] and the two-drop model [16]. Note that $H_{GCSM}$ is invariant under any p-n permutation and therefore its eigenfunctions have a definite parity. We choose one or the other parity for the gamma band, depending on the quality of the overall agreement with the data. We do not exclude the situation where the fitting procedure selects the symmetric γ band as the optimal one. The possibility of having two distinct phases for the collective motion in the gamma band has also been considered in [17], within a different formalism.

III. EXTENSION TO A PARTICLE-CORE SYSTEM

The particle-core interacting system is described by the Hamiltonian (3.1), written with a standard notation for the particle quadrupole operator. The core is described by $H_{GCSM}$, while the particle system is described by the next two terms, standing for a spherical shell model mean field and the pairing interaction of the like nucleons, respectively. The notation $|\alpha\rangle = |nljm\rangle = |a, m\rangle$ is used for the spherical shell model states. The last two terms, denoted hereafter by $H_{pc}$, express the interaction between the satellite particles and the core through a quadrupole-quadrupole (qQ) and a spin-spin (sS) force, respectively. The angular momenta carried by the core and the particles are denoted by $J_c$ ($= J_p + J_n$) and $J_F$, respectively. The mean field plus the pairing term is quasi-diagonalized by means of the Bogoliubov-Valatin transformation. The free quasiparticle term is $\sum_\alpha E_a a^\dagger_\alpha a_\alpha$, while the qQ interaction preserves the above-mentioned form, with the factor $q_{2m}$ changed accordingly. The notation $a^\dagger_{jm}$ ($a_{jm}$) is used for the quasiparticle creation (annihilation) operator. We restrict the single-particle space to a proton single-j state in which two particles are placed. For the space of the particle-core states, therefore, we consider the basis defined by (3.4), where $|BCS\rangle$ denotes the quasiparticle vacuum, while N is the norm of the projected state.

IV. NUMERICAL APPLICATION

The formalism described above was applied to the two isotopes $^{188,190}$Os. In choosing these isotopes, we had in mind their triaxial shape behavior reflected by the triaxiality signature equation; indeed, this equation is obeyed with a deviation of 2 keV for $^{188}$Os and 11 keV for $^{190}$Os.
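For reference, a standard form of the Bogoliubov-Valatin transformation invoked above is sketched below; the occupation amplitudes $U_j$, $V_j$ are fixed by the pairing treatment and are not reproduced in the extracted text, so only the generic form is shown:

$$
a^{\dagger}_{jm} = U_j\, c^{\dagger}_{jm} - V_j\, (-1)^{j-m}\, c_{j,-m},
\qquad U_j^2 + V_j^2 = 1,
$$

with $c^{\dagger}_{jm}$ ($c_{jm}$) the spherical shell-model particle creation (annihilation) operators.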
A. Energies

We calculated first the excitation energies for the bands described by the angular momentum projected states; to this end, the model parameters are to be fixed. For a given ρ we determined the parameters involved in $H_{GCSM}$ by fitting the excitation energies in the ground, β, and γ bands through a least-squares procedure. We then varied ρ and kept the value which provides the minimal root mean square of the resulting deviations from the corresponding experimental data. The excitation energies of the phenomenological magnetic bands are free of any adjustable parameters. In fixing the strengths of the pairing and qQ interactions, we were guided by [13], where the spectra of some even-even Pt isotopes were interpreted with a particle-core Hamiltonian, the core being described using the CSM. The two-quasiparticle energy for the proton orbital $h_{11/2}$ was taken as 1.947 MeV for $^{188}$Os and 2.110 MeV for $^{190}$Os, these values being close to the ones yielded by a BCS treatment in the extended space of single-particle states. The parameters mentioned above have the values listed in Table I. Excitation energies calculated with these parameters are compared with the corresponding experimental data (taken from Refs. [18] and [19]) in Figs. 1 and 2; one notes a good agreement of the results with the data. Unfortunately, no data concerning the magnetic states are available. However, in [18,19] the 1304.82 keV and 1115.5 keV states in $^{188}$Os and $^{190}$Os, respectively, perform an M1 decay to the ground band states. These states could tentatively be associated with the heading states of the two dipole bands, which are located at 1400 and 1538 keV, respectively.

FIG. 6: The magnetic dipole reduced probabilities within the two-quasiparticle-core bands corresponding to the quasiparticle total angular momentum J. The gyromagnetic factors are the same as those used in Fig. 5.

For $^{188}$Os, the states $|1; JM\rangle$ are not in a natural order for J ≥ 6. Indeed, the yrast states belong to the $1^+$ band except the states with J = 6, 8, 10, which are of $\tilde 1^+$ type; similarly, the non-yrast states have a $\tilde 1^+$ character except the states with J = 6, 8, 10, which are of $1^+$ type. If in the expression of H (3.1) one ignores the spin-spin term, the resulting Hamiltonian exhibits a chiral symmetry. A chiral transformation in the angular momentum space consists in changing the orientation of one of the axes. Thus the chiral transformation transforms a right-handed trihedral frame into a left-handed one and vice versa. Clearly the spin-spin interaction breaks the chiral symmetry, i.e.,
this term is not invariant under any chiral transformation. Indeed, by changing alternately the signs of $J_F$, $J_p$, $J_n$, one obtains three distinct interactions. The bands $T_3$ and $T_4$ are degenerate and correspond to the transformations $J_p \to -J_p$ and $J_n \to -J_n$, respectively, applied to the initial reference frame symbolized by $T_1$. The degeneracy is caused by the fact that in both cases the transformed spin-spin interaction is asymmetric with respect to the p-n permutation, and therefore its averages with the two-quasiparticle-dipole-core states, which are asymmetric, vanish. It is remarkable that upon enlarging the particle-core space with the $[(2qp)_J \otimes \Phi^{(g)}_{J'}]_{IM}$ states, the interaction between the opposite-parity $2qp \otimes$ core states, due to the spin-spin term, would determine another two bands of mixed symmetry, characterized also by large M1 rates. The description of such bands will be presented elsewhere. In conclusion, the degeneracy mentioned above is removed if the space is enlarged such that parity-mixing symmetries are possible. The energies shown in Figs. 3 and 4 are listed in Table 2.

B. Magnetic properties

In what follows we give a few details about the calculation of the M1 transition rate. The magnetic dipole transition operator is defined in the usual way. Considering for the core's magnetic moment the classical definition, one obtains an analytical expression involving the quadrupole coordinates and their first-order time derivatives, which can be further calculated by means of the Heisenberg equation [8,11,12]. Finally, writing the result in terms of quadrupole boson operators and identifying the factors multiplying the proton and neutron angular momenta with the gyromagnetic factors of protons and neutrons, one obtains the expression given in [11], where Z and $R_0$ denote the nuclear charge and radius, while M and c are the proton mass and the velocity of light. $k_p$ is a parameter defining the canonical transformation relating the coordinates and conjugate momenta to the quadrupole bosons, while $A_1$, $A_3$, $A_4$ are the structure coefficients involved in $H_{GCSM}$. Within the GCSM the core gyromagnetic factor is given in [8] and might be identified with the liquid drop value, Z/A; consequently, the canonicity coefficient acquires a definite expression. Inserting this in Eq. (4.3), the gyromagnetic factors are readily obtained. Their values are listed in Table I. The fermion gyromagnetic factor corresponds to the proton orbital $h_{11/2}$ with the spin term quenched by a factor of 0.75. With this expression for the transition operator, we also calculated the B(M1) values for the transitions $1^+ \to 0^+_g$ and $1^+ \to 2^+_g$. The results are $0.2772\,\mu_N^2$ and $0.0139\,\mu_N^2$ for $^{188}$Os, and $0.1752\,\mu_N^2$ and $0.0085\,\mu_N^2$ for $^{190}$Os. This is consistent with the fact that the nuclear deformations of the considered nuclei are small, which results in a relatively small M1 strength for the dipole state $1^+$.

FIG. 7: The four frames are related by chiral transformations. The spin-spin interaction corresponding to each trihedral frame is also indicated; they generate the bands $T_i$, i = 1, 2, 3, 4, respectively.

The model used in the present paper was formulated in a previous publication [7] and applied to the case of $^{192}$Pt. As already mentioned, here we use the same ingredients but for two other triaxial isotopes, $^{188,190}$Os. While in Ref.
[7] the gyromagnetic factor of neutrons was taken to be $\frac{1}{5}g_p$, here the two factors are calculated in a self-consistent manner and thus depend explicitly on the structure coefficients involved in the collective Hamiltonian. Our work proves that the mechanism for chiral symmetry breaking, which also favors a large transverse component of the magnetic dipole transition operator [5], is not unique. The bands $T_1$, $T_2$, and $T_{3,4}$ defined above do indeed have properties which are specific to chiral bands: i) First of all, as proved in [7], the trihedral frame ($J_p$, $J_n$, $J_F$) is orthogonal for some total angular momenta of the 2qp-core states at the beginning of the bands, and almost orthogonal for the next states; on increasing the total angular momentum the angle decreases due to the alignment caused by rotation. Since the proton state involved is $h_{11/2}$ and the fermion angular momentum is J = 10 with projection M = J, this is aligned with the Oz axis, which is perpendicular to the plane of the orthogonal vectors $J_p$ and $J_n$. ii) The energy spacings in the two bands behave similarly as a function of the total angular momentum; they vary slowly with angular momentum. From Table 2 one notes that for $^{190}$Os the energy spacing increases with angular momentum faster than for $^{188}$Os; the reason for this difference lies in the strength of the qQ interaction. iii) The staggering function $[E(J) - E(J-1)]/2J$ is almost constant. iv) The most significant property is that the B(M1) values for transitions between two consecutive levels are large. The B(M1) values associated with the intra-band transitions are large despite the fact that the deformation is typical for a transitional spherical-deformed region; this property is shown in Fig. 5. The fact that the large transition matrix elements are associated with a chiral configuration of the angular momenta involved is illustrated in Fig. 6, where one sees that large B(M1) values are achieved for large quasiparticle total angular momentum projection on the symmetry axis. According to Fig. 5, the M1 strength for the intra-band transitions depends quadratically on the angular momentum of the decaying state. This feature is to be compared with the property of the scissors mode that the total M1 strength is proportional to the nuclear deformation squared.
C. More about symmetries

Our description differs from those in the literature in the following respects. Previous formalisms focused mainly on odd-odd nuclei, although a few publications refer also to even-odd [20] and even-even isotopes [21]. Our approach concerns even-even systems and is based on a new concept. While until now there have been only two magnetic bands related by a chiral transformation, here we found four magnetic bands with this property, two of them being degenerate. Indeed, consider the trihedral frames ($J_p$, $J_n$, $J_F$), ($J_p$, $J_n$, $-J_F$), ($-J_p$, $J_n$, $J_F$), ($J_p$, $-J_n$, $J_F$), denoted by the same letters as the associated bands, i.e., $T_1$, $T_2$, $T_3$, and $T_4$, respectively. To these trihedral frames, four distinct spin-spin interaction terms correspond. The average of the model Hamiltonian with the transformed functions shows that the four bands are, indeed, determined by the images of the non-invariant part of H under the transformations $C_k$. Accordingly, the four chiral bands show up upon adding to the space of 2qp-core states given by equation (3.4) the corresponding chirally transformed states. Obviously, the four bands are related by rotations $R^{(k)}_\pi$, k = p, n, F, denoting the rotation in angular momentum space around the axis k by the angle π. Therefore, if $T_1$ is a right-handed trihedral frame, then the frames $T_2$, $T_3$, and $T_4$ have a left-handed character. Due to this, one might expect the bands $T_k$ with k = 2, 3, 4 to be identical, since they have the same chiral nature. This is, however, not true in our model, since the transformations $C_p$ and $C_n$ break not only the chiral symmetry but also the proton-neutron (pn) permutation symmetry. Due to this feature, the bands $T_3$, $T_4$, and $T_2$ are different. Moreover, since for the frames $T_3$ and $T_4$ the sS term is asymmetric under the pn permutation and consequently its average with wave functions of definite pn parity vanishes, the corresponding bands are degenerate. Note that upon enlarging the particle-core space with the $2qp \otimes \Phi^{(g)}_J$ states, the interaction between the opposite-parity $2qp \otimes$ core states due to the spin-spin term will determine another two bands of mixed symmetry, characterized also by large M1 rates. In conclusion, the degeneracy mentioned above is removed if the space is enlarged such that parity mixing is possible. The description of such bands will be presented elsewhere. Bands of different chiral kinds are conventionally called partner bands. In this respect, the pairs of bands ($T_1$, $T_2$), ($T_1$, $T_3$), ($T_1$, $T_4$) are chiral partner bands. According to the relations above, reference frames of similar chiral nature are related by a rotation of angle π. Now let us see how the transformations defined above affect the sS interaction term. To this end, it is useful to introduce the notation $V_k$ for the interaction specified in Fig. 7 which corresponds to the reference frame $T_k$. Obviously, the relations (5.4) hold, from which the connections between different chiral transformations result. Consequently, under the given conditions the set $\{C_p, C_n, C_F\}$ together with the unity transformation I forms a group. At a superficial glance, this seems to be in conflict with the fact that a chiral transformation changes chirality while the product of two transformations preserves chirality.
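To make this structure explicit, assume the spin-spin term has the form $V_1 = X_{sS}\,\vec{J}_F\cdot(\vec{J}_p+\vec{J}_n)$; the paper's own display equations were lost in extraction, so the following is a reconstruction consistent with the surrounding text rather than the original formulas:

$$
V_2 = C_F V_1 = -X_{sS}\,\vec{J}_F\cdot(\vec{J}_p+\vec{J}_n),\qquad
V_3 = C_p V_1 = X_{sS}\,\vec{J}_F\cdot(\vec{J}_n-\vec{J}_p),
$$
$$
V_4 = C_n V_1 = X_{sS}\,\vec{J}_F\cdot(\vec{J}_p-\vec{J}_n),\qquad
C_p C_n = P\,C_F,\qquad C_k^2 = I \;\;(k = p, n, F).
$$

In this form $V_3$ and $V_4$ are antisymmetric under the pn permutation, so their averages over states of definite permutation parity vanish, reproducing the $T_3$-$T_4$ degeneracy; and because each $V_k$ is invariant under the parity operation P, the product $C_p C_n$ acts on the interactions like $C_F$, which is why $\{I, C_p, C_n, C_F\}$ closes into a group even though each single $C_k$ flips chirality.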
We mention, however, that the relations above were derived taking into account that each interaction $V_k$ is invariant under the parity transformation P, which simultaneously changes the orientation of all three axes. Due to this result, we may extend the notion of chiral partner bands to any pair of bands ($T_2$, $T_3$), ($T_2$, $T_4$), ($T_3$, $T_4$). The bands $T_1$ and $T_2$ have different chiralities and thereby characterize different nuclear phases. Varying the interaction strength $X_{sS}$ smoothly from positive to negative values, one may achieve a transition between the two phases. The critical value of the strength is $X_{sS} = 0$. Recall that the degenerate bands $T_3$ and $T_4$ correspond to this value. On the other hand, it has been proved that, generally, the critical point of a phase transition corresponds to a new symmetry [22,23]. Tentatively, the degeneracy of the $T_3$ and $T_4$ bands might be related to the symmetry corresponding to the critical spin-spin interaction. Note that in the absence of the sS interaction, the Hamiltonian is invariant under chiral transformations, and therefore states of left chirality are degenerate with those of right chirality. The model Hamiltonian is also invariant with respect to the pn permutation, and consequently its eigenstates are either even or odd with respect to this symmetry. When the sS interaction is switched on, both symmetries, chiral and pn permutation, are broken. Thus the energies of the left-oriented frame differ from those of right character. Moreover, for $T_1$ and $T_2$ the states are of definite pn permutation parity, while upon enlarging the model space, $T_3$ and $T_4$ are split apart and the states become mixtures of components of different parities. Note that fixing the angular momentum orientation may define a certain intrinsic frame, while apparently the Hamiltonian is considered in the laboratory frame, due to its scalar character. This is actually not the case. Indeed, the Hamiltonian is invariant under the rotations defined by the components of the total angular momentum, but not under those defined by the components of $J_F$, $J_p$, or $J_n$. The pure boson term should be discussed in the framework of the coherent states. Indeed, we recall that the projected states depend on the deformation parameter, which implies an asymmetric structure in the intrinsic coordinates. Indeed, the projected state is a linear combination of different K components, among which one is dominant. Therefore the states are approximately K-oriented, and thereby the Hamiltonian is considered in a subsidiary intrinsic frame when angular momentum projected states are used. Usually, particle-core formalisms are confronted with violations of the Pauli principle.
This feature is not encountered in the present approach. Indeed, the two quasiparticles which are coupled to the core have a maximal angular momentum. On the other hand, in a spurious component, which might be generated by the particle number operator, the quasiparticle angular momenta are anti-aligned, which results in a vanishing total angular momentum. We recall that the core is described in terms of phenomenological quadrupole proton and neutron bosons. If we consider a microscopic structure for the aforementioned bosons, then among the composing quadrupole configurations one may find the state of the outer particles. This could be a source of Pauli principle violation. Again, that does not matter in our case, since the 2qp states have a maximal angular momentum while the proton bosons describing the core have angular momentum equal to two. In conclusion, due to the fact that the 2qp states coupled to the core are characterized by the maximal quantum number K, the Pauli principle is not violated at all, at least as far as the particle-core interaction is concerned. Of course, since the states of the core are multi-boson states, the Pauli principle is violated, which is common to all phenomenological descriptions dealing with bosons. However, if the anharmonic boson Hamiltonian is derived by means of the Marumori boson expansion method [24], this drawback is certainly removed. Since our description uses phenomenological quadrupole bosons, the core retains the feature mentioned above. This suggests that, indeed, the orbital magnetic moment carried by protons plays an important role in determining a chiral magnetic band. The core is described through angular-momentum-projected states built from a proton and neutron coherent state as well as from its lowest-order polynomial excitations. Among the three chiral angular momentum components, two are associated with the core and one with a two-quasiparticle state. In contradistinction, the previous descriptions, devoted to odd-odd systems, use a different picture: the core carries one angular momentum, and its shape structure determines the orientation of the other two angular momenta, associated with the odd proton and odd neutron, respectively. For odd-odd nuclei, several groups identified twin bands in medium-mass regions [25][26][27][28] and even in heavy-mass regions [29,30]. Theoretical approaches are based mainly on a triaxial rotor coupled to two quasiparticles, a picture formulated early on and widely used by the group of Faessler [31][32][33][34]. For a certain value of the total angular momentum, the angular momenta carried by the three components are mutually orthogonal. This picture persists for the next two angular momenta, and then, on increasing the rotation frequency, the core spins gradually align. Subsequently, the quasiparticle angular momentum also aligns with the resulting spin. This is the mechanism which develops a $\Delta I = 1$ band. The new features of the present approach are underlined by comparing it with the formalism of [7]. As mentioned before, the even-even nuclei which might be good candidates for exploring chiral properties are those of triaxial shape. Moreover, the satellite protons should lie in a shell of large angular momentum; in this way the proton orbital angular momentum provides a consistent contribution to the M1 strength. Also, the chosen nucleus must belong to a transitional spherical-deformed region, i.e.,
it exhibits a small nuclear deformation. In the present approach, the chiral states are of 2qp-core dipole type, which implies that the core has a low-lying dipole band. The numerical results for the chosen nuclei are consistent with the commonly accepted signatures of chiral bands. The intra-band M1 strength has a quadratic dependence on the angular momentum of the state, which contrasts with the case of the scissors mode, whose strength is proportional to the nuclear deformation squared. The chiral bands are characterized by large angles between the proton and neutron symmetry axes, while for the scissors mode this angle is very small. The calculated M1 strength for the transition from a chiral band to the band generated by $2qp \otimes \Phi^{(g)}_J$ is small, which confirms that the considered states have a different nature from the scissors-like states. The strength parameters characterizing the core were fixed by fitting some energies from the ground, β, and γ bands. The agreement with the experimental data in the core bands is very good. Unfortunately, no data for the dipole bands are available for the chosen nuclei. Experimental data for chiral bands in even-even nuclei are desirable; these would encourage us to extend the present description to a systematic study of chiral features in even-even nuclei. The present paper has the merit of drawing attention to the fact that such states, organized in twin bands, exist. We believe that predicting new features of the nuclear system and describing the existing data are equally important ways of achieving progress in the field.

FIG. 5: The B(M1) values associated with the dipole magnetic transitions between two consecutive levels in the $T_1$ band of $^{188}$Os. The results are interpolated with a second-rank polynomial (full curve). The gyromagnetic factors employed are $g_p = 0.828\,\mu_N$, $g_n = -0.028\,\mu_N$, and $g_F = 1.289\,\mu_N$.

The chiral transformations generate three distinct interactions which, moreover, are different from the initial one. Associating with each of these interactions a band (for more details see Section 5), one obtains a set of four bands which will be conventionally called chiral bands. In Figs. 3 and 4 the chiral bands $T_1$ and $T_2$ are associated with the actual Hamiltonian given by equation (3.1) and the one obtained by the chiral transformation $J_F \to -J_F$, respectively, while the bands $T_3$ and $T_4$ correspond to the remaining transformations, each of them affecting the chirally symmetric and degenerate spectrum in a specific way. Concretely, let us denote by $C_k$ with k = p, n, F the chiral transformation corresponding to the "k" axis and define $^k\Psi^{(2qp;J1)}_{JI;M} = C_k\,\Psi^{(2qp;J1)}_{JI;M}$, k = p, n, F.

TABLE I: The structure coefficients of the model Hamiltonian were determined by a least-squares procedure. The last column also gives the r.m.s. values characterizing the deviation of the calculated from the experimental energies. The deformation parameter ρ is adimensional. The parameter $X'_{pc}$ is related to $X_{pc}$ by $X'_{pc} = 6.5\eta$
Prevalence of social risk factors and social needs in a Medicaid Accountable Care Organization (ACO)

Background: Health-related social needs (HRSN) are associated with higher chronic disease prevalence and healthcare utilization. Health systems increasingly screen for HRSN during routine care. In this study, we compare the differential prevalence of social risk factors and social needs in a Medicaid Accountable Care Organization (ACO) and identify the patient and practice characteristics associated with reporting social needs in a different domain from social risks.

Methods: Cross-sectional study of patient responses to HRSN screening, February 2019 to February 2020. HRSN screening occurred as part of routine primary care and assessed social risk factors in eight domains and social needs by requesting resources in these domains. Participants included adult and pediatric patients from 114 primary care practices. We measured patient-reported social risk factors and social needs from the HRSN screening, and performed multivariable regression to evaluate patient and practice characteristics associated with reporting social needs and their concordance with social risks. Covariates included patient age, sex, race, ethnicity, language, and the practice proportion of patients with Medicaid and/or Limited English Proficiency (LEP).

Results: Twenty-seven thousand four hundred thirteen individuals completed 30,703 screenings, including 15,205 (55.5%) caregivers of pediatric patients. Among completed screenings, 13,692 (44.6%) were positive for ≥ 1 social risk factor and 2,944 (9.6%) for ≥ 3 risks; 5,861 (19.1%) were positive for social needs and 4,848 (35.4%) for both. Notably, 1,013 (6.0%) were negative for social risks but positive for social needs. Patients who did not identify as non-Hispanic White or who attended practices with higher proportions of LEP or Medicaid patients were more likely to report social needs, with or without social risks. Patients who were non-Hispanic Black or Hispanic, preferred non-English languages, or attended higher-LEP or higher-Medicaid practices were more likely to report social needs without accompanying social risks.

Conclusions: Half of the Medicaid ACO patients screened for HRSN reported social risk factors or social needs, with incomplete overlap between the groups. Screening for both social risks and social needs can identify more individuals with HRSN and increase opportunities to mitigate negative health outcomes.

Supplementary Information: The online version contains supplementary material available at 10.1186/s12913-022-08721-9.

Background

Health-related social needs (HRSN) are associated with high chronic disease prevalence, poor disease control [1][2][3][4], and high health care utilization in both adults and children [5][6][7]. Increasingly, health systems are screening patients for HRSN during routine care and integrating responses into the electronic medical record (EMR), with the goal of referring or providing resources to address identified needs [8,9]. Screening for HRSN has also become a priority for public payors in the Accountable Care Organization (ACO) model as a strategy to prevent and treat chronic disease [10][11][12], and several states allow the use of Medicaid funds to directly address HRSN such as food and housing [13,14]. Screening for HRSN may be completed using many instruments, which can include questions on social risk factors as well as social needs [15][16][17]. The relationship between social risk factor and social need screening is unclear.
Consistent with prior literature, we use the term health-related social needs to mean a group of individual-level adverse social determinants of health, such as those assessed in a screening instrument [10]. We use social risk factors or social risks for the specific adverse social and economic conditions associated with poor health as measured at the individual level, for example food insecurity. We use the term social needs when individuals express their own preferences and priorities to address these conditions, such as requesting assistance with food [18,19]. Previous studies have shown that the identification of social risk factors is not always consistent between different screening instruments assessing the same domain, such as housing risk [20]. There has also been substantial variability in the extent to which individuals identified as having social risk factors on screening instruments report social needs by requesting additional assistance [21][22][23][24]. In smaller studies of patient questionnaires in a research context, 8.6 to 26% of participants who screened negative or declined to answer social risk screening questions still indicated they were interested in receiving resources for social needs [23,25,26]. It is therefore not clear how social risk factor and social need screening overlap in identifying HRSN in a population. In our study, we sought to understand the prevalence of social risk factors and social needs in a large population screened for HRSN as part of routine clinical care. We included both patient- and practice-level characteristics in our model, drawing from the Drivers of Health framework, which includes indirect factors (such as public policy, gender, and racial identity) that affect direct factors (such as environment, access to and quality of healthcare, and social circumstances) that in turn affect health outcomes [27]. The goals of this study were: (1) to compare the differential prevalence of social risk factors and social needs in a Medicaid ACO population, specifically describing the characteristics of patients who would be missed by screening for social risk factors only, and (2) to identify the patient and practice characteristics associated with reporting social needs in a different screening domain from social risk factors.

Methods

Study design and setting

We conducted a cross-sectional study of patient responses to a HRSN screening questionnaire from February 2019 to February 2020. This period was chosen to reflect full implementation of the HRSN screening program after its launch in March 2018, but before the disruption in routine care that occurred due to the COVID-19 pandemic. We examined HRSN screening responses from 114 outpatient primary care practices across a large Medicaid ACO in an integrated health system in Massachusetts, including practices in academic medical centers, community physician-hospital organizations, and affiliated physician groups. Practices were located in urban, suburban, and rural settings. Of the included practices, 15 (10.4%) were in Community Health Center locations, 73 (64.0%) were owned by the health system, and 41 (36.0%) were private practices affiliated with the health system. Included practices actively screened during the entire study period and had ≥ 5 patient responses.
Patients eligible for the study sample were enrolled in Massachusetts Medicaid (MassHealth); were in the Medicaid ACO for at least 11 of 13 months during the study period; and completed the questionnaire either during a primary care visit or by phone with staff at an included practice. Of the approximately 107,900 individuals in the Medicaid ACO in 2019, 31,156 were eligible for inclusion. Patients may have completed the screening more than once if they had multiple qualifying primary care encounters. All completed items on the screening questionnaire were analyzed and incomplete screening items were treated as missing completely at random. Health-related social needs screening HRSN screening was conducted as part of routine primary care for patients in the Medicaid ACO beginning in 2018. The questionnaire was available in English or Spanish for patients to complete through an online portal, on a tablet before primary care visits, or verbally with staff assistance, with the goal of annual completion. Patient responses were imported into the EMR. For patients 15 years or younger, a parent or caregiver completed the questionnaire on their behalf. The screening questionnaire assessed social risk factors in eight domains (food, housing, medication, transportation, utilities, family care, employment, education), as well as social need as defined by a request for more information in any of the same eight domains (Supplemental Fig. 1). We used request for more information to define social need in this study because answers expressed patient prioritization of that domain and preference for additional engagement. The questionnaire was created for institutional use by compiling portions of publicly available validated screening tools and adding additional questions for domain completeness. Prior to implementation, the institutional questionnaire was tested with patient focus groups and modified as needed. While the tool in its entirety was not formally validated, we used it in this study to understand the results of pragmatically implemented HRSN screening in a real-world setting. Outcome and predictor variables Outcome variables included (1) reporting social needs among those who screened positive or negative for social risks, and (2) reporting social needs in a domain concordant or discordant with the social risk factor. Concordant domain was defined as reporting social need in any domain where a patient also screened positive for a social risk factor. Discordant domain was defined as reporting social need in a domain where a patient screened negative for social risk, while screening positive for a different social risk factor. Predictor variables on the patient level included pediatric age (≤ 18 years), sex, race, ethnicity, language, and whether ≥ 3 social risk factors were positive. Patient-level information was obtained from the EMR. At the practice level, predictor variables included proportion of patients with Limited English Proficiency (LEP) and with Medicaid insurance where payor data was available. Practice-level information was obtained from aggregated EMR data of patients attributed to the practice by having an insurance-assigned primary care provider or ≥ 3 practice visits. Twenty-five practices located at two participating academic medical centers had payor composition data available (referred to as the "payor subset") and were examined in a secondary analysis. Datasets were linked using a patient medical record number, date of questionnaire completion, and EMR location.
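To make the outcome definitions above concrete, the following minimal sketch classifies a single completed screening. It is written in Python rather than the SAS used for the study's analyses, and the dictionary-of-booleans record layout is an illustrative assumption, not the study's actual data structure.

```python
# Hypothetical sketch of the concordant/discordant outcome coding described above.
DOMAINS = ["food", "housing", "medication", "transportation",
           "utilities", "family_care", "employment", "education"]

def classify_screening(risks: dict, needs: dict) -> dict:
    """Classify one completed screening.

    risks / needs: domain -> bool (True = positive response in that domain).
    """
    risk_domains = {d for d in DOMAINS if risks.get(d)}
    need_domains = {d for d in DOMAINS if needs.get(d)}
    return {
        "any_risk": bool(risk_domains),
        "any_need": bool(need_domains),
        "need_without_risk": bool(need_domains) and not risk_domains,
        # Concordant: a need in any domain that also screened risk-positive.
        "concordant_need": bool(need_domains & risk_domains),
        # Discordant: risk-positive overall, plus a need in a risk-negative domain.
        "discordant_need": bool(risk_domains) and bool(need_domains - risk_domains),
    }

# Example: a food-insecure patient requesting help with housing only.
print(classify_screening({"food": True}, {"housing": True}))
# -> concordant_need: False, discordant_need: True
```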
Analytic approach We used a log-binomial multivariable regression model with generalized estimating equations (GEE) to understand the patient and practice characteristics associated with social need, and with reporting social need in a domain concordant or discordant with social risk. We estimated prevalence ratios using a binomial distribution, log link function, and working independence correlation structure. We chose a GEE model to address potential non-independence of the observations and a hierarchical structure to account for clustering of patients at the practice level. After evaluating the model suitability of continuous variables, we found practice proportion LEP and Medicaid failed the assumption of linearity. We also found high correlation (ρ = 0.79) between the continuous practice LEP and Medicaid variables, with concern for collinearity in the model. Therefore, we included both practice-level LEP and Medicaid composition as categorical variables using quartiles and combined the categorical variables into a single indicator of high-need practice environment, defined as top quartile for both LEP and Medicaid (2 social factors), top quartile for 1 social factor, or no top quartile ranking. We conducted statistical analyses using SAS 9.4 software. A two-sided p ≤ 0.05 defined statistical significance. This study follows RECORD and STROBE reporting guidelines for observational studies of routinely-collected health data [28]. It was approved by the Institutional Review Board at Mass General Brigham. Study population The study population included 27,413 patients at 114 primary care practices who completed a HRSN screening questionnaire during the February 2019 to February 2020 study period (Table 1). The mean patient age was 24.2 years, with 55.5% of the population age 18 years or younger. Less than half of the population identified as non-Hispanic White (Table 1). Social risk factors and social needs by domain are summarized in Table 2. Patients who screened positive for risk in any domain most often reported social need in unstable housing (12.1%), difficulty paying for utilities (11.8%), education (10.0%), and food insecurity (9.0%). Notably, patients who screened negative for risks in all domains still reported social needs, most often in housing (1.5%), utilities (1.4%), education (1.1%), and childcare (1.1%). Patients who screened positive for social risk factors in any domain were significantly more likely to report social needs if they were female, identified as a race/ethnicity other than non-Hispanic White, preferred Spanish or another non-English language, or received care at a practice in a higher quartile of patients with LEP (Table 3). Those who screened positive for 3 or more social risks were also significantly more likely to report social needs. Social need without social risk factors Among those who screened negative for social risk factors, patients who identified as a race/ethnicity other than non-Hispanic White or who received care at practices in the top quartiles of patients with LEP were significantly more likely to report social needs (Table 3). Patients who preferred languages other than English were not more likely to report social need when they did not have social risk factors. With or without social risk factors, caregivers of pediatric patients were significantly less likely to report social needs.
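Returning to the analytic approach described above (a log-binomial GEE with a working independence structure and clustering by practice), here is a hedged sketch using Python's statsmodels in place of the SAS 9.4 actually used. The file name, column names, and model formula are hypothetical, and log-binomial models can fail to converge in practice.

```python
# A minimal sketch of the log-binomial GEE described in the Analytic approach.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("hrsn_screenings.csv")  # hypothetical analytic file; social_need coded 0/1

model = smf.gee(
    "social_need ~ pediatric + female + race_eth + language + high_need_practice",
    groups="practice_id",                                   # cluster patients within practices
    data=df,
    family=sm.families.Binomial(sm.families.links.Log()),   # log link -> prevalence ratios
    cov_struct=sm.cov_struct.Independence(),                 # working independence
)
result = model.fit()

# Exponentiated coefficients are prevalence ratios with 95% confidence intervals.
pr = np.exp(result.params)
ci = np.exp(result.conf_int())
print(pd.concat([pr.rename("PR"), ci.rename(columns={0: "2.5%", 1: "97.5%"})], axis=1))
```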
Social need discordant to social risk factors In the full study population, patients who identified as non-Hispanic Black, preferred a language other than English, or received care at a practice in the top two quartiles of patients with LEP were significantly more likely to report social need in a domain different from their social risk factor (Table 4). Payor subset secondary analysis We also analyzed a subset of 11,093 patients at 25 practices where payor composition data was available (Supplemental Tables 1-4). Social need without social risk factors In the payor subset, patients who identified as non-Hispanic Black, spoke a language other than English or Spanish, or received care in a highest quartile practice for both LEP and Medicaid were significantly more likely to report social needs when they did not report social risk factors (Supplemental Table 3). In this smaller group, Hispanic identity and Spanish language preference were no longer significantly associated with reporting social needs. Social need discordant to social risk factors Consistent with the larger dataset, non-Hispanic Black identity and non-English language preference continued to predict reporting domain-discordant social needs, along with receiving care in the highest-need practice environment with top quartile proportions of patients with LEP and Medicaid enrollment (Supplemental Table 4). Discussion In this study, we demonstrate that screening for social risk factors as compared to social needs identifies different patient populations across a large primary care population in varied practice settings in a Medicaid ACO. Patients who identified as a race/ethnicity other than non-Hispanic White were more likely to report social needs, and more often reported social needs without reporting social risk factors. Among those with social risk factors, patients were more likely to report social needs in a domain discordant to social risks if they identified as non-Hispanic Black, preferred a language other than English, had higher social risk overall, or received care in a practice with higher proportions of patients with LEP and/or Medicaid enrollment. These patients would have been missed if they were screened with social risk factor questions alone (Fig. 1). These individuals are also more likely to experience HRSN due to structural racism and systemically poor access to health services [29,30], emphasizing the importance of including both social risk factor and social need questions in integrated screening tools to improve the equity and accuracy of clinical screening programs. It is difficult to precisely compare the prevalence of social risk factors and social needs to other studies due to differences in the populations and screening tools examined. In our study, the 44.6% social risk and 19.1% social need prevalence rates were comparable to those reported in other screening studies [21,31,32], and to studies specifically assessing request for assistance, which found 15 to 37% of patients with social needs [9,23,32]. Our prevalence of food and housing insecurity specifically were also comparable to those reported in other studies [21][22][23][31][32][33][34]. Our study expands upon prior research identifying a discrepancy between social risk factor and social need screening [23,25,35]. This observational study of routine-care screening in a large population across varied practice settings adds to the understanding of HRSN prevalence in clinical practice, and expands upon existing literature by identifying specific patient and practice characteristics associated with domain-discordant screening.
Our findings are supported by insights from prior research, including a study of an emergency department population in the same health system finding that non-Hispanic Black and Spanish-speaking patients more often reported social needs rather than social risks [35]. There are multiple reasons why patients may report social needs but not social risk factors on a screening tool. Patients may experience stigma regarding their social circumstances or have privacy concerns about who will see the information [22,36]. Others may perceive questions on social needs to be more relevant or actionable compared to social risk screening. The finding that patients report social needs in the absence of social risks underscores the limitation of using social risk factor screening alone, and lends further support to implementing patient-centered strategies that engage individuals in determining their own needs and priorities [37]. This study has several potential limitations. First, patients in the sample were only those without substantial churn in Medicaid eligibility (at least 11 of 13 member-months) and who engaged in routine primary care, limiting the portion of the ACO examined. These patients are likely to be different from the portion of the Medicaid ACO population that experiences more disruptions in eligibility or is unable to participate in scheduled office-based care. Second, the study is a secondary analysis of data that was collected during routine clinical care rather than to answer a specific research question, leading to potential misclassification and missing data. The race, ethnicity, and language data from the EMR were not complete for all included patients, though unavailable data were limited to 8% of race/ethnicity and 3% of language preference. Third, while the institutional screening tool used questions from validated screeners, the entire instrument was not formally validated prior to clinical deployment, leading to potential bias in the patient responses collected. Additionally, we used the request for more information item from this screener to define social need because the answers expressed patient prioritization of a domain and preference for additional engagement. We recognize that patients were not specifically asked if they would like help addressing the health-related social need, and this may have led to misclassification of patient responses. The question is an imperfect proxy, although it provides an opportunity to understand patient prioritization of their own needs in a real-world clinical screener. Finally, our analysis was limited to patients with Medicaid in a single large, integrated health care system. Because the Medicaid population is more likely to have high social risks and needs, the results may not be generalizable to other patient populations. The practice settings included were varied in size, location, practice ownership, and resources for patients with LEP, ranging from on-site interpreters to third-party phone services. However, the results may not be generalizable to patients who are uninsured or who receive medical care in different health system settings. Conclusions The findings from this study have important implications for health policy and practice. Health systems, payors, and policy makers who wish to screen for HRSN should carefully consider how to conduct population-based screening, as asking about social risk factors alone is not sufficient to identify all patients with HRSN in a Medicaid population.
Populations with systematically higher HRSN may be more likely to report social needs rather than social risk factors. Health systems and Medicaid programs should consider screening tools that include questions assessing both social risk factors and patient-identified social needs. Identifying both populations of patients would increase the opportunity for intervention to reduce the burden of HRSN and associated adverse health outcomes. Additional file 1: Supplemental Figure 1. Health-related social needs institutional screening questionnaire. Supplemental Table 1. Characteristics of Medicaid Accountable Care Organization (ACO) patients who completed health-related social needs (HRSN) screening in the payor subset. Supplemental Table 2. Social risk factors (positive screening response) and social needs (request for more information) among health-related social needs (HRSN) questionnaires completed in the payor subset. Supplemental Table 3. Patient and practice characteristics associated with expressing social need, with and without social risk factors, on health-related social need (HRSN) screening in the payor subset. Supplemental Table 4. Patient and practice characteristics associated with expressing social need in domains concordant and discordant with social risk factors on health-related social needs (HRSN) screening in the payor subset.
2022-11-19T15:05:48.074Z
2022-11-19T00:00:00.000
{ "year": 2022, "sha1": "be091638b37c516dbc0d27ed277326073a17caef", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Springer", "pdf_hash": "be091638b37c516dbc0d27ed277326073a17caef", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
255099064
pes2o/s2orc
v3-fos-license
Scanpath analysis of expertise and culture in teacher gaze in real-world classrooms Humans are born to learn by understanding where adults look. This is likely to extend into the classroom, making teacher gaze an important topic for study. Expert teacher gaze has mainly been investigated in the laboratory, and has focused mostly on one cognitive process: teacher attentional (i.e., information-seeking) gaze. No known research has made direct cultural comparisons of teacher gaze or successfully found expert–novice differences outside Western settings. Accordingly, we conducted a real-world study of expert teacher gaze across two cultural settings, exploring communicative (i.e., information-giving) as well as attentional gaze. Forty secondary school teachers wore eye-tracking glasses, with 20 teachers (10 expert; 10 novice) from the UK and 20 teachers (10 expert; 10 novice) from Hong Kong. We used a novel eye-tracking scanpath analysis to ascertain the importance of expertise and culture, individually and as a combination. Attentional teacher scanpaths were significantly more similar within than across expertise and expertise + culture sub-groups; communicative scanpaths were significantly more similar within than across expertise and culture. Detailed analysis suggests that (1) expert teachers refer back to students constantly through focused gaze during both attentional and communicative gaze and that (2) expert teachers in Hong Kong scan students more than experts do in the UK. Introduction Expertise can be seen at every level of teaching. At the macro-level, expert teachers demonstrate stronger knowledge, organisation and reflectiveness (e.g., Allen and Casbergue 1997). At the micro-level, teachers reveal exceptional and unique internal processes that have been developed and refined over time, including their memory (Ericsson and Kintsch 1995), strategy (Chassy and Gobet 2011), efficiency (Haider et al. 2005) and intuition (Sherin 2006). Likewise, expertise in teaching not only predicts what teachers look at but also suggests how teachers look around the classroom (e.g., Cortina et al. 2015). It has generally been proposed that gaze sequences, or 'scanpaths', reflect internal representations of the world, namely 'cognitive models' (Henderson 2003; Choi et al. 1995). We can therefore expect expertise to differentiate teachers' scanpaths as it does in other domains (e.g., Foerster et al. 2011). Since culture entails different experiences, culture might also differentiate teachers' scanpaths, which would support broader theories of expertise (Sternberg 2014). We therefore investigated the role of expertise and culture in distinguishing teachers' scanpaths. So far, most expertise and teacher gaze research has focused on information-seeking, or attentional, gaze (Reingold and Sheridan 2011). Such research is relevant to the passive processes of observation, when viewers are looking for particular details or waiting to receive knowledge, so that information is going from the scene (or audience) to the viewer. Yet information-giving, or communicative, gaze is important to consider in classroom research, given the role of gaze in human learning (Csibra 2010), social interaction (Wu et al. 2014), and successful teaching (Leinhardt 1987). Unlike in attentional gaze, the viewer is active in transmitting knowledge to an audience, such that information is now traveling from the viewer to the scene (or audience).
It has been proposed that complex social processes are best explored in the real world (Fiske and Taylor 2013). Although some research has examined the gaze of expert teachers (e.g., Cortina et al. 2015), this behaviour may be best investigated in the real world due to the complex nature of teaching (Berliner 2001) and the domain-specificity of expertise in general (Ericsson 2014). Together, the present study addressed gaps in previous investigations by examining the role of expertise and culture in teachers' scanpaths during both attentional and communicative gaze, in the real world, across two cultural settings. While there are very few datasets regarding gaze in real-world teaching, we recently reported a study in which teachers from the UK and Hong Kong were eye-tracked in secondary school classrooms. We used aggregate measures of dynamic properties in teacher gaze to compare the variability of expert teachers' gaze with that of novices (McIntyre et al. 2017). This analysis provided some of the first information about teachers' real-world gaze, but was limited by its reliance on aggregate measures of where teachers look: such measures are unsuitable for examination of the 'cognitive model' underlying teacher gaze. Here, we provide a new analysis of data from the same study by testing dynamic measures of teacher gaze, that is, the gaze sequences of the same teachers, which, it has been argued, are critical reflections of the cognitive model. Our expectation was that this new approach would reveal additional details about teacher expertise. Scanpath analysis Despite recent interest in teacher gaze, there has been little investigation of the underlying structure of teachers' gaze patterns. A scanpath is a "repetitive sequence of saccades and fixations, idiosyncratic to a particular subject [person] and to a particular target pattern" (Choi et al. 1995, p. 450). Scanpath analysis is therefore the investigation of gaze sequences, which preserves information about what is being looked at and the order in which these looks occur. It has been argued that social interactions (Bakeman and Gottman 1997; Hewes 1979; Sackett 1987), and therefore teaching (Palincsar 1998; Pianta et al. 2012; Wubbels et al. 2016), are inherently sequential and dynamic, since each social act needs to be understood in the context of earlier behaviours. Thus, teacher scanpaths are likely to contain rich sequential patterns which have yet to be investigated. Scanpath analysis is a particularly valuable approach in teaching research, because a sequence of gaze acts is normally involved during pedagogical exchanges. To begin with, a learner (e.g., an infant) is often invited into an educational episode when the teacher (e.g., the adult carer) catches his or her attention through a social signal which usually involves eye contact (Batki et al. 2000; Committeri et al. 2015; Farroni et al. 2002). A process of shared attention (Baron-Cohen 1995) is thus initiated, in which the teacher's gaze shift redirects learner attention to the right gaze target (de Langavant et al. 2011; Senju and Csibra 2008) and triggers 'gaze following' in the learner as they are drawn to shift their gaze to where the teacher has shifted their attention (Böckler et al. 2015; Farroni et al. 2004). When both teacher and learner are looking at the same thing, shared attention is achieved and the intended teaching can be given regarding the shared gaze target (Csibra and Gergely 2009; Tomasello 2000).
A sequence of multiple gaze targets is therefore involved in each pedagogical episode. Conventional aggregated analyses, which look only at the overall amount of time looking at different targets, may miss such patterns. One of the most common techniques for scanpath analysis involves the representation of gaze sequences as letter strings. The similarity between two strings is represented by the 'string edit distance' (SED; Brandt and Stark 1997; cf. Levenshtein 1966). A SED is calculated by counting the number of edits (insertions, deletions and substitutions) before two strings become identical. This procedure is flexible, meaning that strings can be defined by geometric gaze coordinates, areas of interest, or semantic codes. The present study used this last approach, since the data was from real-world, mobile eye-tracking which was then coded in order to identify each gaze target. Culture-specific expertise in teacher scanpaths Expertise is shown through consistent traits across professions (Sternberg and Horvath 1995), while each individual's area of expertise is domain-specific (Bédard and Chi 1992). In teaching, experts are more knowledge-driven, efficient, flexible during lessons and more consistent across lessons (McIntyre et al. 2017). Culture, however, plays a central role in defining teacher expertise. As with any profession, 'expertise' in teaching is embedded in contextual policies (Berliner 2001) and culture (Sternberg 2014). Compared with East Asia, learning in the West involves more vocal input from the student (even disagreement, Hofstede 1986), more group work (Leung 1995) and a more analytic approach to learning (Yang and Cobb 1995). Student preferences diverge across cultures (Wozniaková 2016), as do their expectations of their teachers (Zhang et al. 2005). Teachers' strengths also differ across cultures, with Western teachers excelling in general pedagogical knowledge and East Asian teachers excelling in subject and pedagogical content knowledge (König et al. 2011). It has been proposed that scanpaths reflect "the read-out of the internal representation of pictures, the so-called 'cognitive model'" (Choi et al. 1995). This scanpath theory proposes that some internal process drives each person to look where they do, in the order that they do. Because each individual has their own unique cognitive model, scanpaths of the same individual on multiple occasions will be more similar when compared than scanpaths of different individuals. If scanpaths are indeed guided by cognitive models, experts should produce significantly different scanpaths to novices. With expertise, observers gain experience- and knowledge-informed cognitive models, and teachers' cognitive models are likely to become more refined than novices'. The scanpaths of expert teachers should therefore be more knowledge-driven and effective than those of novices, providing a template of what to think about, and in what order, for novices' successful professional development. Given that scanpaths are affected by experience, culture can also be expected to shape scanpaths. Through the course of professional development, teachers are likely to treat different classroom regions with differing task-relevance and appropriateness according to their own culture (e.g., Hofstede 1986), such that the most important regions change with culture (e.g., Berliner 2001). Teacher scanpaths should thus reflect culture-specific cognitive models, especially expert teachers'.
Teachers from different cultures should therefore produce significantly different scanpaths from each other, while individuals should display similar scanpaths to teachers belonging to the same cultural group. Using expert scanpaths from their own culture, novices can more accurately emulate an order of what to consider in classroom teaching that is rooted in the values of their own cultural setting. The present article The present article aims to apply scanpath methods and comparisons to teacher gaze, by extending scanpath research on differences across expertise levels (e.g., Humphrey and Underwood 2009) to the teaching profession and by extending scanpath comparisons across cultures. The present analysis also extends our previous work on the same dataset (e.g., McIntyre et al. 2017), in which we reported that expert teachers give greater importance to students and use their classroom gaze more efficiently (i.e., primarily using the most relevant gaze) than novices. Experts were also more strategically consistent. Cultural differences were found, with UK teachers showing more attentional efficiency and Hong Kong teachers more communicative efficiency. Although this previous analysis used dynamic measures, their differences still involved aggregating over many didactic events. We anticipated that the present study, which instead adopts a sequential and fully dynamic perspective, would shed further light on expertise and culture in teacher gaze. This will allow us to understand how expertise and culture affect moment-to-moment changes in behaviour (i.e., teacher scanpaths) and the proposed underlying cognitive model. We had a number of specific hypotheses based on the expectation that expertise and culture will affect teachers' looking behaviour. Hypothesis 1 We expected scanpaths to be significantly more similar when compared within an individual than when compared across individuals. This finding would give credence to subsequent comparisons, and it would support the idea that teachers show an idiosyncratic strategy. Hypothesis 2 Since teachers with the same expertise level are more likely to share a cognitive model than teachers of differing expertise, scanpaths of expert teachers should be more similar when compared with other experts than when compared with novice teachers. Hypothesis 3 Since the cognitive model of teachers is likely to be similar within a cultural group, teacher scanpaths should be more similar when compared within cultures than when scanpaths are compared across cultural settings. Hypothesis 4 Cognitive models are likely to be most similar when teachers share both expertise level and cultural setting. Therefore, scanpaths within the same expertise and cultural group were expected to be significantly more similar than those of teachers in different expertise and cultural groups. Method Participants Participants consisted of 20 Hong Kong Chinese (henceforth East Asian) and 20 White Caucasian UK (henceforth Western European) secondary school teachers. Schools were selected on the condition that they followed their respective national curricula and that they consisted of students from the first to fifth years of secondary education; two schools were sampled in each setting. Cultural groupings in the present study were based on geographical location (i.e., in Hong Kong vs. in the UK).
Each cultural group was ethnically homogeneous, with the Hong Kong sample comprising entirely Hong Kong Chinese and the UK sample entirely White British teachers and students. Expert teachers were defined using the guidelines given by Palmer et al. (2005), which consisted of (a) having at least 6 years' experience, (b) social nomination as an expert in teaching (selected by the school leadership), (c) professional memberships, and (d) performance ratings (based on in-school classroom observations). According to MANOVA, teachers who were nominated as experts (criterion b) significantly differed on years' experience (criterion a), professional memberships (criterion c) and performance ratings (criterion d), F(3,37) = 14.22, p < 0.001, η²p = 0.54. See Table 1 for detailed teacher demographics. A previous report indicated that differences Apparatus The Tobii 1.0 glasses eye-tracker was used to record teacher gaze. This eye-tracker was monocular, with a sampling rate of 30 Hz (i.e., 30 frames per second), and was calibrated using nine gaze points. The eye-tracker yielded a 640 by 480 px video, capturing 56° horizontally and 40° vertically, as well as an audio recording. Three approaches were used to secure quality of data analysis: a parallax correction tool provided with our eye-tracking package, Tobii Studio 3.2.1, was used to reduce risks associated with monocular eye-trackers; each participant was asked to confirm the location of the gaze cursor during cued retrospective reporting (van Gog et al. 2005); and when the gaze cursor disappeared, we applied the code Unsampled. Procedure The teachers wore eye-tracking glasses during one 10-min 'teacher-centred' segment of their own lesson. That is, when the teacher was at the centre of the whole class' attention: introducing an activity, explaining new concepts, or presenting new material. This lesson followed on from each teacher's original plans for curriculum delivery; we simply waited for a teacher-centred section in the lesson scheduled for data collection to install the eye-tracker onto the participating teacher. As such, each participant's teacher-centred segment differed from the others. The eye-tracking glasses were calibrated by the researcher just before recordings took place. In order to preserve the individual calibration, participants were instructed not to move the glasses until recording was over. Once 10 min of teacher-centred learning was recorded, the researcher waited for a considerate moment to remove the eye-tracking equipment from the teacher. Coding We systematically coded teacher gaze and simultaneous verbalisations. Both the teacher gaze and simultaneous verbalisations were coded from the start to the end of the analysed periods of eye-tracking. Gaze codes Gaze behaviour was coded by the researcher by slowing the playback to one eighth of real-time speed and manually applying the gaze behaviour codes. The gaze behaviours coded were student fixations (i.e., focused gaze at students), student scan, student material, teacher material, other (i.e., miscellaneous) and unsampled gaze. The student fixation code was applied when the gaze cursor overlaid students for more than four frames. Student scans were student-oriented gaze, during which the gaze cursor overlaid students for less than four frames (Franchak et al. 2011; Hanley et al. 2015). Unsampled gaze was coded when the gaze cursor disappeared from gaze replay.
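To illustrate how coded gaze events become the letter strings used in the scanpath analyses described below, here is a minimal sketch. The letter assignments for student fixation (F), teacher material (T), student scan (C) and other (O) follow the example string reported in Fig. 1; the letter for student material and the overall data layout are our assumptions, not the study's actual coding pipeline.

```python
# Illustrative sketch: coded gaze events -> scanpath letter strings.
CODE_TO_LETTER = {
    "student_fixation": "F",
    "student_scan": "C",
    "student_material": "S",   # assumed letter; not shown in Fig. 1
    "teacher_material": "T",
    "other": "O",
    "unsampled": None,         # excluded from scanpaths
}

def to_scanpath(events, max_len=10):
    """First `max_len` codeable gaze events of one didactic episode -> string."""
    letters = [CODE_TO_LETTER[e] for e in events if CODE_TO_LETTER[e] is not None]
    return "".join(letters[:max_len])

episode = ["student_fixation", "teacher_material", "student_fixation",
           "teacher_material", "student_fixation", "student_scan",
           "student_fixation", "other", "teacher_material", "student_fixation"]
assert to_scanpath(episode) == "FTFTFCFOTF"  # matches the Fig. 1 example string
```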
Didactic codes Existing theories conceptualise teacher activity too broadly for the present data (i.e., different activity types, not teacher-centred activity only; Berliner 2004; Hofstede 1986; Leung 1995; Pianta et al. 2004), in too much detail (i.e., speech acts within each teacher utterance; Searle 1969) or in topics that are too specific for the present article (e.g., classroom management, not information-seeking vs. information-giving in general; Elliott et al. 2011). A coding scheme was therefore developed in a bottom-up manner, based on participating teachers' verbalisations rather than an existing theory. Simultaneous verbal data was coded manually while playing the video in real-time (i.e., full playback speed) to generate teacher didactic codes. The simultaneous verbal data from eye-tracking recordings was divided into five didactic behaviours: address behaviour (i.e., directly instructing students to change their behaviour), attention (i.e., student or teacher asking and answering questions), communication (i.e., teachers lecturing), refer notes (i.e., teacher referring to presentation slides or students' resources), and logistics (e.g., teacher moving the presentation onto another slide). In particular, questioning (the attention code) only consisted of periods of dialogue: that is, question-and-answering between teacher and students, while the teacher was at the front. Questioning thus included classroom silence as the teacher waited for students to answer their question; it also included periods when students spoke instead of the teacher. Talking (the communication code) consisted of straight talk and rhetorical questioning by the teacher. We thus followed McNeill's (2006) approach by interpreting non-verbal behaviour using simultaneous verbalisations. Didactic events Because attention and communication were dramatically more common than the other didactic behaviours, the present study focused on these two didactic events only. Since social interaction is integral to classroom teaching (e.g., Pianta et al. 2012), attentional and communicative gaze were regarded as the most relevant teacher gaze to be analysed (cf. McIntyre et al. 2017). Of all the didactic behaviours, the present research explored only two (attention and communication), and six gaze codes were possible for each didactic behaviour, resulting in a total of 12 didactic events. Scanpaths For each participant, we generated multiple scanpaths (i.e., gaze event sequences) starting from the first ten gaze behaviours occurring within each period of either attentional or communicative teaching. In opting for ten (as in Foulsham et al. 2012; rather than five, e.g., Freeth et al. 2011), we hoped to capture greater detail in teachers' gaze behaviour. In using the first ten behaviours, we aimed to achieve more comparability between teachers than if no limit was set on the start-point. Thus, for each participant, we generated attentional gaze scanpaths and communicative gaze scanpaths. To illustrate, one attentional scanpath of one participant is depicted in Fig. 1. 'Unsampled' gaze targets were excluded from scanpaths. During the 10-min recording time, each participant took part in multiple episodes of questioning and talking, so each participant's data yielded several scanpaths. Sequences of gaze events were compared using the string edit distance (SED; Brandt and Stark 1997), which is also known as the Levenshtein (1966) distance. The algorithm for calculating the distance between two strings is widely available, and yields the minimum number of changes required to turn one string into the other. This number was divided by the length of the longer string and then subtracted from 1 to give a similarity score ranging from 0 to 1, with 1 being 'most similar' (i.e., identical).
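A minimal, dependency-free implementation of this normalized string-edit similarity might look as follows; the function names are ours, not from the study's analysis code.

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of insertions, deletions and substitutions."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def similarity(a: str, b: str) -> float:
    """1 - SED / length of the longer string, so 1.0 means identical."""
    if not a and not b:
        return 1.0
    return 1.0 - levenshtein(a, b) / max(len(a), len(b))

print(similarity("FTFTFCFOTF", "FTFTFCFOTF"))  # 1.0
print(similarity("FTFTFCFOTF", "FTOTFCFOTF"))  # 0.9 (one substitution out of ten)
```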
Reliability Intra-observer reliability was checked by asking the coder to re-code part of the gaze recordings. Two members of each sub-group (e.g., Western novices) were selected for re-coding, with the first two of the ten minutes re-coded (i.e., 20%). Agreement between the first and second coding attempts was assessed with the intra-class correlation (ICC), which has been shown to be equivalent to the Fleiss kappa, the conventional intra-rater reliability measure for nominal data (Fleiss and Cohen 1973; Landis and Koch 1977). The two-way random ICC (ICC-2) was chosen because it was the rater variation that we were concerned with, while the data sampled remained the same (e.g., Bartko 1976). The reliability of our coder was satisfactory (ICC-2 = 0.68). Scanpath analyses Each set of scanpath comparisons resulted in an average similarity score, and our hypotheses required testing whether these similarity scores differed according to the identity, expertise and culture of the source participant. For statistical analyses of similarity scores we ran repeated measures univariate analyses of variance for each didactic behaviour separately (i.e., attentional gaze, then communicative gaze). Transformations were conducted prior to running ANOVAs where necessary in order for all dependent variables to meet parametric assumptions, while raw scores are reported for descriptive statistics. Fig. 1 Attentional scanpath of a UK expert (Participant 24) used for scanpath comparisons. This sequence of gaze targets shows an attentional scanpath of (1) student fixation, (2) teacher material, (3) student fixation, (4) teacher material, (5) student fixation, (6) student scan, (7) student fixation, (8) other, (9) teacher material and, finally, (10) student fixation. The sequence of gaze events can be represented with the scanpath string FTFTFCFOTF. Individual teacher scanpaths To address Hypothesis 1, the first set of similarity scores related to comparisons within and between each individual (Table 2). Each participant was thus given a mean similarity score for scanpath comparisons within him or herself as well as a mean similarity score for the scanpath comparisons between him (or her) and others. Our expectations of top-down guidance for teacher scanpaths in Hypothesis 1 were supported by intra- compared with inter-individual similarities. Within the attentional periods of teaching, intra-individual similarities were significantly greater (M = 0.43) than inter-individual similarities (M = 0.38), F(1,39) = 29.14, p < 0.001, η²p = 0.43. Likewise, during communicative gaze, greater scanpath similarities were found in intra-individual (M = 0.40) than in inter-individual (M = 0.37) comparisons, F(1,36) = 61.34, p < 0.001, η²p = 0.63. Teachers were more similar to themselves in the ordering of their gaze behaviour than they were to each other (Fig. 2). Our subsequent analyses therefore asked whether expertise and culture underlie some of this idiosyncrasy.
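The intra- versus inter-individual comparison just reported can be sketched as follows, reusing similarity() from the sketch above. The mapping of teacher IDs to lists of scanpath strings is a hypothetical layout, and each teacher is assumed to contribute at least two scanpaths.

```python
# Sketch of the Hypothesis 1 comparison: mean within- vs between-teacher similarity.
from itertools import combinations
from statistics import mean

def intra_inter_means(scanpaths: dict) -> dict:
    """scanpaths: teacher ID -> list of scanpath strings (assumes >= 2 per teacher)."""
    out = {}
    for tid, own in scanpaths.items():
        intra = [similarity(a, b) for a, b in combinations(own, 2)]
        inter = [similarity(a, b)
                 for other, theirs in scanpaths.items() if other != tid
                 for a in own for b in theirs]
        out[tid] = {"intra": mean(intra), "inter": mean(inter)}
    return out
```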
Expertise differences in scanpaths We expected teacher expertise to be one explanation for differing scanpaths between individuals. The second set of similarity scores related to teacher expertise. We therefore ran string edit comparisons within and across single-IV groupings as shown in Table 3. Within-expertise comparisons meant that each teacher was compared to another teacher within their own level of expertise; across-expertise comparisons involved experts being compared with novices. In attentional gaze, teacher scanpath similarity was significantly greater across (M = 0.40) than within (M = 0.39) expertise, F(1,39) = 15.85, p < 0.001, η²p = 0.29. In communicative gaze, teacher scanpath similarity was greater across (M = 0.39) than within (M = 0.38) expertise, F(1,39) = 35.16, p < 0.001, η²p = 0.48. These findings opposed Hypothesis 2, which predicted that scanpaths would be more similar within expertise groupings. We also compared teacher scanpaths across different expertise but within the same culture through dual-IV groupings, shown in Table 4. Thus, we controlled for culture in the second set of expertise comparisons. Once culture was controlled for, within-expertise comparisons meant that each teacher was compared to another teacher within their own level of expertise; across-expertise comparisons meant that experts were compared with novices within the same culture. In attentional gaze, similarity scores were significantly more similar within (M = 0.39) than across (M = 0.38) expertise, F(1,39) = 4.89, p = 0.03, η²p = 0.11 (Fig. 3). In communicative gaze, scanpaths were more similar within (M = 0.39) than across (M = 0.37) expertise, F(1,38) = 6.92, p = 0.01, η²p = 0.15 (Fig. 4). Thus, controlling for culture revealed greater within-expertise similarity than across-expertise similarity in teacher scanpaths, which accorded with Hypothesis 2. Cultural differences in scanpaths We expected teacher culture to be another explanation for differing scanpaths between individuals, as stated in Hypothesis 3. We therefore obtained similarity scores for teacher scanpath comparisons within cultures (e.g., UK vs. UK) and across cultures (i.e., Hong Kong vs. UK). Fig. 3 The dual-IV comparisons conducted in this study on teachers' attentional gaze (similarity scores, 0-1). Attn = attentional gaze; Same sub-group = same expertise and cultural group (e.g., Hong Kong experts vs. Hong Kong experts); Across expertise = same culture, different expertise (e.g., Hong Kong experts vs. Hong Kong novices); Across cultures = same expertise, different culture (e.g., Hong Kong experts vs. UK experts); Across sub-group = teachers from differing expertise and culture (e.g., Hong Kong experts vs. UK novices). However, when cultural comparisons of scanpaths were made with expertise controlled for (i.e., in dual-IV comparisons; Table 4), teachers' attentional scanpaths were not more similar within culture than across (p = 0.08; Fig. 3), which opposed Hypothesis 3. Nevertheless, teachers' communicative scanpaths accorded with Hypothesis 3 by being more similar within (M = 0.39) than across (M = 0.38) culture, F(1,38) = 3.98, p = 0.05, η²p = 0.10 (Fig. 4). Culture-specific expertise in scanpaths We expected teacher expertise and culture to combine and provide the strongest similarity or difference between individuals' scanpaths. To address Hypothesis 4, we compared teacher scanpaths within the same expertise and the same cultural grouping (i.e., same sub-groups; Table 4). Fig. 4 The dual-IV comparisons conducted in this study on teachers' communicative gaze (similarity scores, 0-1). Cmmn = communicative gaze; Same sub-group = same expertise and cultural group (e.g., Hong Kong experts vs. Hong Kong experts); Across expertise = same culture, different expertise (e.g., Hong Kong experts vs. Hong Kong novices); Across cultures = same expertise, different culture (e.g., Hong Kong experts vs. UK experts); Across sub-group = teachers from differing expertise and culture (e.g., Hong Kong experts vs. UK novices). *p ≤ 0.05, **p ≤ 0.01, ***p ≤ 0.001 when compared with 'Same sub-group'.
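The grouping logic behind these dual-IV comparisons (Figs. 3 and 4) can be sketched as follows, again reusing similarity() from above; the data layout is hypothetical and the bucket names are ours.

```python
# Sketch of the dual-IV comparison buckets: same sub-group, across expertise
# (culture held constant), across cultures (expertise held constant), across both.
from itertools import combinations
from collections import defaultdict
from statistics import mean

def dual_iv_means(teachers: dict, scanpaths: dict) -> dict:
    """teachers: ID -> (expertise, culture); scanpaths: ID -> list of strings."""
    buckets = defaultdict(list)
    for t1, t2 in combinations(teachers, 2):
        e1, c1 = teachers[t1]
        e2, c2 = teachers[t2]
        if e1 == e2 and c1 == c2:
            key = "same_subgroup"
        elif c1 == c2:
            key = "across_expertise"   # same culture, different expertise
        elif e1 == e2:
            key = "across_cultures"    # same expertise, different culture
        else:
            key = "across_subgroup"    # both expertise and culture differ
        sims = [similarity(a, b) for a in scanpaths[t1] for b in scanpaths[t2]]
        buckets[key].append(mean(sims))
    return {k: mean(v) for k, v in buckets.items()}
```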
Sub-string descriptions After finding that teacher scanpaths differ with expertise and culture (in communicative gaze), we began exploring how teacher scanpaths differ. The most common sequences of teacher scanpaths were generated for attentional and communicative gaze separately, to yield 'sub-strings' for each teacher sub-group (i.e., expertise + culture, e.g., Hong Kong novices). During analysis, a series of trials revealed that a scanpath of six events yielded the greatest difference between the most common (i.e., modal) and second (and/or third) most common scanpaths. These modal scanpaths for each teacher sub-group are shown in Table 5 for attention and communication separately. Sub-strings revealed relatively minor differences. Experts in both cultures used student fixation first, before alternating between that and less student-oriented gaze, in both attentional and communicative gaze. Since cultural differences were only significant with communicative gaze, we highlight cultural differences therein: namely, Hong Kong teachers used student scan, regardless of expertise, whereas neither experts nor novices used student scan at all in the UK. Culture-specific expertise was shown in both attentional and communicative gaze, with Hong Kong experts using student fixations readily, in contrast to Hong Kong novices who used no student fixations. UK experts started with student fixation, whereas novices started with other (i.e., non-student and non-instructional) gaze. Among experts, Hong Kong teachers used student scan whereas UK teachers did not use student scan at all. Discussion The present article sought to demonstrate the role of expertise and culture in the sequencing of teacher gaze. Furthermore, we distinguished between didactic behaviours, analysing attentional (i.e., questioning) gaze separately from communicative (i.e., lecturing) gaze. Our decision to use scanpath comparisons to explore the top-down influences of expertise and culture was supported: participants' gaze sequences were more similar within an individual than when compared between teachers (Hypothesis 1). Dual-IV comparisons controlled for the alternative factor (i.e., grouping system, e.g., culture controlled while making expertise comparisons), which showed attentional scanpaths to be significantly more similar within expertise (Hypothesis 2) and sub-groups (Hypothesis 4), but not culture (Hypothesis 3). Communicative scanpaths, on the other hand, revealed expertise (Hypothesis 2), culture (Hypothesis 3), and their sub-groupings (Hypothesis 4) all to make a difference to the similarity between gaze sequences. Together, the present study lends strong support to existing frameworks of effective teaching, but does so by showing that the importance of various dimensions of teacher expertise can be seen down to the most micro level of teacher behaviour, namely teacher scanpaths. Cognitive model for teaching Teacher scanpaths were more similar when compared within individuals than across individuals.
This finding supports the Scanpath Theory assumption that teachers are guided by top-down rather than bottom-up visual processes, suggesting cognitive models to be active in real-world classroom teaching and driving the order (or sequence) in which teachers look at classroom targets. It is unsurprising that teachers in this study displayed a top-down process, even at the micro-level of their gaze sequences. Indeed, an overload of top-down processes seems to characterise classroom teaching. Hence Berliner's (2001) comparison of teaching to air traffic control and other high-pressured professions, and the rapid exhaustion of cognitive resources among beginning teachers (Berliner 2001). Accordingly, much of their professional development involves identifying the recurrent aspects of teaching for automatic processing, to reserve deliberate cognition for less predictable parts of the profession (Feldon 2007; Van Merriënboer et al. 2002). It is through professional growth that teachers are likely to converge on similar cognitive models for the optimal way of operating in the classroom, even in the minute detail of where they look and the order in which they do so. While we have previously highlighted that experts are more guided by strategy (McIntyre et al. 2017), we presently propose that the content of the cognitive model for classroom teaching is uncovered by identifying expert-novice differences in teachers' scanpaths. Expertise differences in teacher scanpaths The present finding that scanpaths are more similar when compared within than across teacher expertise supports our second hypothesis, that the cognitive model for teacher scanpaths changes with expertise. Specifically, teachers prioritise and order where they look in the classroom differently when they are experts compared with novice teachers. The sequence of teachers' gaze is therefore a significant indication of professional expertise, quite apart from other measures of the way teachers use their gaze (e.g., summed frequencies or durations). Our finding coincides with research into classroom interactions, which highlights that whole phases of gestural sequences exist as an integral part of the teaching and learning process (e.g., Arnold 2012). Novice teachers can therefore go beyond what they should do more of (e.g., making as much eye contact with students as possible) to give more consideration to the order in which they do things in classroom teaching (e.g., start with eye contact and proceed onto subsequent expert teacher behaviours revealed in their scanpaths). While the nature of classroom teaching necessitates top-down control, it seems that this process either grows in dominance (i.e., teachers use cognitive models more) or changes in nature (i.e., the content of cognitive models changes) as the teacher develops expertise. In terms of the dominance (or importance) of the cognitive model, research on teacher gaze has shown novice teachers to be more distracted by salient yet task-irrelevant classroom events (e.g., bright shoe laces), whereas expert teacher gaze is guided by pedagogical principles developed over time (e.g., areas surrounding disruptive behaviour, Wolff et al. 2016). Just as gaze behaviour increasingly reflects task-relevant strategy outside the classroom (Haider et al. 2005), so teachers increasingly restrict their gaze to the most task-relevant classroom regions. The scanpaths of novices in the present study highlight the common errors of beginning teachers.
Future novice teachers can learn from the mistakes of the present novice sample by resisting task-irrelevant distractions and focusing on the important classroom considerations revealed by expert teachers' scanpaths. Qualitative analyses suggested that, in both cultures, experts look at students, and return to look at them, more readily than novices do during both attentional and communicative gaze. As such, students constitute a consistent and central component in teachers' gaze sequences. Regardless of culture, experts used student fixation from the beginning and resumed doing so after each diversion. While we had previously demonstrated experts' student-centredness through teachers' real-world classroom gaze (McIntyre et al. 2017), we now provide new, additional and direct evidence that expert teachers not only look frequently at students but also show a characteristic sequential pattern of returning to them on subsequent gazes. Our analyses corroborate and extend previous records of student-centredness among expert teachers, in contrast to novices' more controlling mind-set, in both our own and others' research (Cheon and Reeve 2015; Wolff et al. 2014). Our sequential analyses of teacher gaze thus coincide with existing frameworks of teacher expertise: we highlight that the importance of student-centredness pervades every level of teacher expertise, beyond existing aggregated analyses of teacher cognition, to the micro-level of analysis such as teacher scanpaths. Novice teachers might give heed to the support from the present study for the centrality of student experience. Rather than focusing on student discipline or salient visual distractions, novices can determine to devote their efforts to the improvement of students' learning experiences and progress. Our qualitative findings regarding communicative gaze saw experts in both cultures starting with eye contact (i.e., student fixation) then interspersing it with other gaze types (sub-strings 5 and 7 in Table 5). The importance of teachers starting communicative scanpaths by making eye contact with students reflects the natural pedagogical role of eye contact when initiating information transmission (Csibra and Gergely 2009; Frith and Frith 2012) and suggests that teachers are implementing this in the classroom. Indeed, the human eye triggers engagement in the gaze recipient (Committeri et al. 2015; Holler et al. 2014) in a way that artificial stimuli (Ristic et al. 2007) and non-human eyes (Tomasello et al. 2007) cannot. Experts in both cultures appear to be securing student engagement by making eye contact with them first, before shifting their own gaze to another classroom region: an essential gaze sequence for successful shared attention (Baron-Cohen 1995). It seems that the naturally occurring gaze processes of human teaching and learning can indeed be applied to the classroom context. As a result of this study, teachers can apply principles of natural pedagogy and shared attention to their everyday classroom communication and instruction. The role of culture in expertise differences in teacher scanpaths Contrary to preceding analyses (McIntyre et al. 2017), culture on its own failed to predict attentional scanpaths (when expertise was controlled for); culture may be relevant to teacher attentional scanpaths only in relation to attentional expertise. Otherwise, it appears that culture plays less of a role in teachers' attentional scanpaths.
Culture is likely to have combined with expertise in correspondence with extant differences between East Asian and Western populations in relationship-orientated vision (Chua et al. 2005) and cognition (Nisbett and Miyamoto 2005). For both attentional and communicative gaze, teacher scanpaths were more similar when compared within sub-groups than when they were compared across sub-groups. Our hypothesis that teachers' expertise and culture would, together, significantly distinguish gaze sequences was thus supported. This finding corresponds with our previous analysis of the dataset (McIntyre et al. 2017) and existing literature on differing characterisations of effective teaching according to culture (e.g., Zhang et al. 2005). Since culture combines with expertise to affect teachers' cognitive models (i.e., scanpaths), teachers might give greater importance to their specific cultural values when reflecting on best practice for their own classroom context. Culture-specific expertise in scanpaths was revealed through qualitative analyses. Qualitative analyses of our teacher sample also demonstrate that scanning gaze occurs during communicative gaze among Hong Kong (or East Asian) teachers exclusively. This finding coincides with previous research highlighting the risk that teacher-student eye contact is seen as offensive (Alston and He 1997), inappropriate (Cheng and Borzi 1997) or intimidating (Akechi et al. 2013) in East Asian settings. Accordingly, it was also expected that teachers would make less use of eye contact to convey that students are welcome to contribute. Even if eye contact is emotionally neutral from teachers, cultures diverge in the meanings signalled by the same gaze patterns (McCarthy et al. 2008). Yet it was the use rather than the lack of student fixations that set experts apart from novices in Hong Kong, suggesting that teachers opt to exercise authority in East Asia rather than approachability (Hofstede 1986). This finding further echoes existing literature on the universal importance of teacher-student eye contact as part of the communicative episodes during teaching and learning (cf. shared attention, Baron-Cohen 1995, and gaze following, Senju and Csibra 2008). Therefore, while student fixations are a common denominator for experts in both cultures, East Asian culture defines expert scanpaths differently, as student fixations are interspersed with student scans among Hong Kong experts only. Limitations and implications In spite of the unique depth of the present investigations into expert teacher gaze, a number of limitations should be acknowledged. Although we contend that scanpaths reflect changing cognitive models, it is important to note that teachers might also differ in their interpretation of what they see. Future research could solicit subjective reports and compare these with gaze records, in order to establish whether teachers exhibiting the same scanpaths are necessarily interpreting the situation in the same way. The present scanpaths were event- rather than duration-based. By taking a duration-based approach to investigating teacher scanpaths, quite a different picture could have emerged, especially regarding teachers' communicative gaze, since many of those strings were excluded due to inadequate numbers of events per string. Additionally, most scanpath comparisons yielded noticeably similar scores in spite of significantly greater within- than across-group scanpath similarities.
Yet our similarity scores are within the same range as those in extant literature (e.g., Foulsham and Underwood 2008). The scanning gaze coded in the present study should also be interpreted with caution due to the low sampling rate of the eye-tracker (30 Hz). We conducted post hoc power analysis to confirm whether our sample size was adequate for the analyses reported in this article. When post hoc power analysis was conducted using G*Power (Erdfelder et al. 1996) for the repeated measures ANOVA prediction of expertise in single-IV comparisons in attentional gaze, the statistical power using our observed effect size (partial eta squared, ηp² = 0.29), sample size (N = 40), number of measurements (T = 500; the power does not alter when measurements exceed T = 500), and the default correlation setting among repeated measures (r = 0.50) was determined to be 1 − β = 0.71, which neared the standard 1 − β = 0.80 power requirement. The power for the repeated measures ANOVA prediction of expertise in dual-IV comparisons of attentional gaze was 1 − β = 0.11. Nevertheless, statistically significant findings were found, as reported.

The present article highlights two areas for teacher professional development. First, cognitive models are important in differences between teachers' expertise. The present study has given eye-tracking support to the proposal that expert teachers have a cognitive representation distinct from that of novices, which we have revealed through teachers' gaze sequences. That is, teachers change in their operation as they develop professional expertise, both on a macro-level (e.g., Berliner 2001) and on a micro-level, as shown through our participants' gaze sequences. Second, the order in which teachers look at differing regions of the classroom matters, to the extent that expertise manifests not only in summed durations of where teachers look most, but in the sequences of teachers' gaze. Experts in our study demonstrated that their gaze, for both attention and communication, is sequentially distinct from that of novices. Professional development programmes might therefore give closer attention to the cognitive models of expert teachers during questioning (or attention) and lecturing (or communication) via their gaze sequences and give beginning teachers a head start in developing their own, more effective, cognitive models. Although they use gaze sequences effectively, expert teachers do not develop a universally uniform cognitive model regardless of culture. Rather, the optimal cognitive model in classroom teaching must take culture into consideration. The present study highlights the advantages of culture-specific expertise, as the sequencing of teachers' gaze differs most when both expertise and culture are being compared in scanpath analyses. Thus, the present article corroborates preceding calls for teacher development programmes to take cultural context into account: while others have made such calls based on macro-level differences (e.g., student attitudes, Leung 2014; student preferences, Zhang 2006; student emotional experiences, Zhou et al. 2012), our study demonstrates that culture-specificity permeates to the micro-level of effective teacher behaviour in the classroom: namely, their gaze sequences.
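The scanpath-similarity comparisons discussed above lend themselves to a brief illustration. The exact alignment algorithm used in the study is not restated in this section, so the following is only a minimal sketch of one common approach in scanpath research: normalised string-edit (Levenshtein) similarity over gaze-region sequences. The region coding used here (S for student fixation, M for materials, O for another region) is a hypothetical example, not the study's coding scheme.

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two scanpath
    strings (one character per gazed classroom region)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def similarity(a: str, b: str) -> float:
    """Normalised similarity in [0, 1]; 1 means identical scanpaths."""
    if not a and not b:
        return 1.0
    return 1 - levenshtein(a, b) / max(len(a), len(b))

# Hypothetical coding: S = student, M = materials, O = other region
print(similarity("SMSOS", "SMSSO"))  # 0.6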
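The G*Power computation reported above can also be approximated programmatically. The sketch below shows the conversion from partial eta squared to Cohen's f and a generic noncentral-F power calculation. Note that this simplified one-way form does not reproduce G*Power's repeated-measures adjustment, which additionally folds in the number of measurements (T) and the correlation among repeated measures (r = 0.50), so the resulting number will differ from the reported 1 − β = 0.71.

```python
import numpy as np
from scipy.stats import ncf, f as f_dist

# Convert the reported partial eta squared to Cohen's f.
eta2_p = 0.29
cohen_f = np.sqrt(eta2_p / (1 - eta2_p))  # ~0.64, a large effect

def anova_power(f_effect, n_total, k_groups, alpha=0.05):
    """Approximate power for a one-way ANOVA via the noncentral F
    distribution; a simplified stand-in for G*Power's repeated-measures
    routine, which further adjusts the noncentrality parameter."""
    df1 = k_groups - 1
    df2 = n_total - k_groups
    lam = f_effect ** 2 * n_total              # noncentrality parameter
    f_crit = f_dist.ppf(1 - alpha, df1, df2)   # critical F under H0
    return 1 - ncf.cdf(f_crit, df1, df2, lam)  # P(reject H0 | H1)

print(round(cohen_f, 3))
print(round(anova_power(cohen_f, n_total=40, k_groups=2), 3))
```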
2022-12-26T14:49:26.438Z
2018-01-05T00:00:00.000
{ "year": 2018, "sha1": "915e21eca789af754036800b1be74725a0529012", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s11251-017-9445-x.pdf", "oa_status": "HYBRID", "pdf_src": "SpringerNature", "pdf_hash": "915e21eca789af754036800b1be74725a0529012", "s2fieldsofstudy": [ "Education", "Psychology" ], "extfieldsofstudy": [] }
267290375
pes2o/s2orc
v3-fos-license
The effect of urban-rural resident basic medical insurance on physical health of the rural older adult in China

Introduction: Urban-Rural Resident Basic Medical Insurance (URRBMI) is an important system for effectively transferring disease risks away from the rural older adult. As China experiences rapid aging, maintaining the physical health of the rural older adult is key to achieving the goal of healthy aging.

Methods: The study explores the impact of URRBMI on the physical health of the rural older adult in China using the Chinese Longitudinal Healthy Longevity Survey (CLHLS) data from 2018. Ordinary least squares models were used to analyze the relationship between URRBMI and the physical health of the rural older adult, and we used the instrumental variable method to address the potential endogeneity problem.

Results: We find that URRBMI greatly improves the physical health of the rural older adult. The heterogeneity analysis indicates that URRBMI contributes more significantly to the rural older adult in eastern areas and to the advanced rural older adult. The results also suggest that URRBMI improves the physical health of the rural older adult by increasing life satisfaction and enhancing the timeliness of medical services.

Recommendations: This study implies that we need to further improve the participation rate, increase the actual reimbursement ratio, increase financial subsidies for URRBMI in central and western areas, and further integrate the distribution of medical resources to promote the physical health of the rural older adult.

Introduction

Population aging has become increasingly serious in all countries of the world; increasing life expectancy and changes in the disease spectrum have led to a worrying health status for the older adult. The older adult, and especially the rural older adult, are a vulnerable health group within the population as a whole (1); they generally face more serious disease risks, so we should put more emphasis on the physical health of the rural older adult (2). In order to spread disease risks, many countries around the world have established medical insurance schemes (3,4). China began to integrate Urban-Rural Resident Basic Medical Insurance (URRBMI) from the Urban Resident Basic Medical Insurance and the New Cooperative Medical Scheme in 2016. The primary goal of URRBMI is to decrease medical costs, promote the utilization of medical services, and contribute to better health. The physical health of the rural older adult contributes to their life quality and well-being and is also strongly linked to healthy aging. With deepening aging, we must pay attention to the rural older adult and uncover the health-protective function of URRBMI for them. Thus, whether URRBMI affects the physical health of the rural older adult in China is an essential topic; addressing it will help improve URRBMI policy and further enhance its health effects.

The Chinese government has always placed the protection of people's health as a strategic priority for development and has continuously improved its policies for the promotion of people's health. Achieving the goal of "Healthy China" means that the achievements of development must benefit all residents fairly (5), especially vulnerable and high-risk groups, ensuring that "no one is left behind."
Aging is an important demographic feature in China (6,7). The aging problem is especially serious in rural China. According to statistical data, China's rural population aged over 60 has reached 121 million, the rural population aging rate stands at 23.81% (8), and the scale of the rural older adult in China is huge. The number of older adult with limitations in Activities of Daily Living (ADL) will increase to 37.3 million by 2050 (1), and their demand for medical insurance is increasing.

Previous studies have examined the relationship between basic medical insurance and health status but have come to different conclusions. Several scholars have stated that basic medical insurance positively affects health status (9-11), because medical insurance can reduce the price of medical services (12) and offer more opportunities for medical care and higher-quality health services (13). However, other researchers have shown that basic medical insurance has little effect on the improvement of residents' health (14-16); the probable explanation lies in the fact that the current policy focuses only on the most basic issues, the finite reimbursement rates, and the limited protection for the health-vulnerable population (17).

Many interesting results have been found on the above issues, but there are still some gaps to be filled. Previous studies have mostly focused on the whole population. So far, however, in a rapidly aging society, there is insufficient research dedicated to the health effects of basic medical insurance for the rural older adult; we are particularly concerned about the rural older adult in China, who are the most vulnerable to illness. Furthermore, most previous studies used self-assessed health to measure health status, but self-assessed health is subjective, so this study overcomes that shortcoming by using ADL to represent the objective health status of the rural older adult. Improving the physical health of the rural older adult is an essential task in coping with healthy aging; we must keep an eye on the rural older adult and explore the role of URRBMI for them. This will help enrich the theory of basic medical insurance and the study of healthy aging issues.

This study uses data from the CLHLS in 2018 to explore how URRBMI influences the physical health of the rural older adult in China. The article makes the following contributions. First, the purpose of the study is to provide new empirical proof for current relevant studies by examining the influence of URRBMI on the physical health of the rural older adult in the context of healthy aging. Second, we discuss the heterogeneous influence of URRBMI on the physical health of the rural older adult from the viewpoints of different areas and ages, and offer some critical perspectives for improving URRBMI in the future. Third, we also discuss the influence mechanism between URRBMI and the physical health of the rural older adult.

The remainder of this study is organized as follows. Section 2 proposes the research hypotheses. Section 3 presents the data sources and empirical models. Section 4 presents the empirical results. Section 5 offers the discussion and policy recommendations, and finally, Section 6 offers the research conclusions and limitations.
Research hypotheses

URRBMI is an essential part of the social welfare scheme in rural areas. When rural older adult are not enrolled in URRBMI, they have to pay the full medical costs when they fall ill, and therefore they may choose not to receive treatment, which may be detrimental to their health. After participating in URRBMI, on the one hand, the price of medical services is decreased owing to the broadening of URRBMI coverage, and the health of the rural older adult can be promoted by decreasing out-of-pocket costs and enhancing their utilization of medical services (18). On the other hand, URRBMI offers the rural older adult protection against disease (19); it has changed the previous traditional concept of not seeking medical services for illnesses, increased their motivation to pay attention to their physical health, and made their awareness of physical health protection stronger and stronger. Therefore, participation in URRBMI is expected to provide greater protection for the physical health of the rural older adult (20), so we propose the following hypothesis:

H1: URRBMI can improve the health status of the rural older adult.

The principle of territorial financing and management of URRBMI in China means that there are obvious regional characteristics in the medical insurance resources actually possessed by each region. Due to the disparity in economic levels among the eastern, central, and western areas, there are regional disparities in the medical services received by the rural older adult in different regions. The eastern region of China is more economically developed (21,22), and the financial subsidies invested in URRBMI there have also increased (23), so the level of medical insurance coverage is generally better in the eastern region. Advanced medical resources and well-developed medical conditions are mainly distributed in the eastern region (24,25); differences in access to medical services among the eastern, central, and western areas may further widen the gap in medical insurance benefits for the rural older adult. Generally speaking, URRBMI contributes more strongly to the physical health of the rural older adult in eastern areas. We propose the following hypothesis:

H2a: The effect of URRBMI on the physical health of the rural older adult in eastern areas is more significant than in central and western areas.

Life cycle theory provides an explanation for the fact that the rate of illness dramatically increases as people grow older (26). It is well known that age plays a fundamental role in the physical health of the older adult. Compared with the advanced rural older adult, the younger rural older adult make relatively less use of medical services, because they are younger and their physiological functions have not deteriorated significantly. With increasing age, the rate of health degradation of the advanced older adult rises. The advanced rural older adult are generally subject to more disease risks, and their need for specialized medical services increases, which means that the advanced rural older adult are more in need of the protection of URRBMI. Wu et al.
also found that medical insurance significantly reduces the mortality risk of the advanced older adult (27). Therefore, there is a possible age difference, and URRBMI may have a more obvious promotion effect on the physical health of the advanced rural older adult. We propose the following hypothesis:

H2b: The effect of URRBMI on the physical health of the advanced rural older adult is more significant than for the younger rural older adult.

URRBMI promotes access to medical services for the rural older adult and is an essential guarantee for meeting their medical demands. By reimbursing the medical costs of the rural older adult, the financial burden of disease is reduced, thus minimizing the influence of catastrophic medical expenditures on the lives of the rural older adult and reducing to a greater extent the disease risks among them. Finkelstein et al. also believe that medical insurance may have a positive effect on health due to the increased financial accessibility of medical care (28). At the same time, URRBMI helps the rural older adult reduce precautionary savings, increase current life consumption expenditures, and alleviate the pressure on daily life caused by medical care (29); the increase in relative income effectively improves the life quality of the rural older adult and their life satisfaction, thus contributing to the improvement of their physical health.

H3a: Life satisfaction mediates the effect of URRBMI on the physical health of the rural older adult.

According to the Anderson Health Services Utilization Model (30), timely utilization of medical services can improve the physical health of the rural older adult through more specialized medical resources (31). URRBMI, as a public policy to promote health, is beneficial to improving the accessibility of medical services for the rural older adult, so that they can be more promptly informed of their own health status, and it can enhance the health awareness of the rural older adult (32), thereby avoiding the expansion of disease risks. Hoffman et al. also believe that medical insurance can further improve people's health through the accessibility of medical services utilization (33). Therefore, URRBMI can promote the health status of the rural older adult by enhancing the timeliness of medical services utilization. We propose the following hypothesis:

H3b: Timeliness of medical services mediates the effect of URRBMI on the physical health of the rural older adult.

Data, variables, and empirical model

Data

The study uses the latest data from the CLHLS in 2018. The CLHLS is a national, large-scale database for the older adult (34-36), so its samples are nationally representative. Besides, the 2018 CLHLS data include detailed variables on URRBMI, the physical health of the rural older adult, the timeliness of medical services, and so on, which form the basis of this analysis. According to the purpose of the study, the sample was selected according to the following criteria: respondents aged 65 and above who had rural household registration and lived in rural areas at the time of the survey were retained. Additionally, invalid samples with missing key information, including URRBMI, ADL, gender, age, marriage, education years, smoke, drink, exercise, physical examination, co-residence, life satisfaction, timeliness of medical services, and who mainly pays for medical services, were eliminated, and the final valid sample comprises 9,551 observations (Figure 1).
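As a hedged illustration of the selection pipeline just described, the pandas sketch below applies the same criteria to a hypothetical CLHLS 2018 extract. The file name and column names are illustrative assumptions, not the survey's actual variable names.

```python
import pandas as pd

# Hypothetical raw CLHLS 2018 extract; column names are illustrative only.
raw = pd.read_csv("clhls_2018.csv")

key_vars = ["URRBMI", "ADL", "gender", "age", "marriage", "education_years",
            "smoke", "drink", "exercise", "physical_exam", "co_residence",
            "life_satisfaction", "service_timeliness", "expense_payer"]

sample = (
    raw[(raw["age"] >= 65)
        & (raw["hukou"] == "rural")            # rural household registration
        & (raw["residence_area"] == "rural")]  # living in a rural area
    .dropna(subset=key_vars)                   # drop records missing key information
)
print(len(sample))  # the paper reports a final valid sample of 9,551
```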
Variables

Dependent variable

The dependent variable is the physical health of the rural older adult. We use ADL to reflect physical health. ADL is an objective indicator of physical condition and can indicate the health condition of the rural older adult (37); referring to the previous literature (38-41), ADL has been widely used to measure physical health, so we use ADL to reflect the health status of the rural older adult. ADL was measured by the following items: (1) bathing; (2) dressing; (3) indoor moving; (4) toileting; (5) continence of defecation; (6) eating. Each item was scored from 1 to 3, and the total ADL score reveals health status: the higher the score, the greater the ADL dependence, which means poorer health (38). At the same time, we use a binary variable, defined as ADL-1, that equals 0 if the respondent reported no limitation in the six items above, and 1 otherwise. We take ADL as the key proxy variable for the physical health of the rural older adult.

Independent variable

The independent variable is whether the rural older adult participated in URRBMI (URRBMI). The rural older adult enrolled in URRBMI were assigned a value of 1, and 0 otherwise.

Instrumental variable

This study uses who mainly pays for medical services (Expense) as an instrumental variable; we set Expense to 1-3, representing payment by medical insurance, out-of-pocket payment, and others, respectively. The reasons are as follows. First, whether or not the rural older adult are enrolled in URRBMI is affected by who mainly pays for medical services; therefore, this variable affects the willingness of the rural older adult to participate in URRBMI. Second, who mainly pays for medical services has no direct influence on the physical health of the rural older adult.

Mediating variables

Life satisfaction (Satisfaction) and timeliness of medical services (Service) may relate to both URRBMI and the physical health of the rural older adult and may affect the relationship between them. Relevant studies have demonstrated that timely access to medical services can improve the chances of healthy survival for the older adult (45), and life satisfaction is known to be positively correlated with physical health (46). Referring to the previous literature (39,47-50), the study selected Satisfaction and Service to examine the mediating effect.

The above variables and their definitions are shown in Table 1, and their descriptive statistics are given in Table 2.

Empirical model

Referring to the previous literature (43,51,52), the regression model is set as follows:

ADL_i = β1 + β2 URRBMI_i + β3 Controls_i + ε_i    (1)

where ADL refers to the physical health of the rural older adult; URRBMI represents the URRBMI variable; Controls stands for the control variables; β1 indicates the intercept; β2 denotes the coefficient of URRBMI; β3 is the vector of coefficients of the control variables; and ε_i is a normally distributed random error term. β2 is the coefficient of interest. If β2 < 0, it means that URRBMI promotes the physical health of the rural older adult; if so, H1 is confirmed. In contrast, if β2 > 0, it indicates that URRBMI weakens the health status of the rural older adult; if so, H1 does not stand.
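A minimal, runnable sketch of Model (1) follows, using simulated data so that it stands alone. In the actual study the regressors come from the CLHLS variables in Table 1 and the control set is larger than shown here; the simulated values carry no substantive meaning.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 9551  # matches the paper's sample size; data themselves are simulated
df = pd.DataFrame({
    "URRBMI": rng.integers(0, 2, n),
    "Age": rng.integers(65, 100, n),
    "Gender": rng.integers(1, 3, n),
    "Education": rng.integers(0, 17, n),
})
# ADL: six items each scored 1-3, summed to a 6-18 scale (higher = worse health)
items = rng.integers(1, 4, size=(n, 6))
df["ADL"] = items.sum(axis=1)

X = sm.add_constant(df[["URRBMI", "Age", "Gender", "Education"]])
ols = sm.OLS(df["ADL"], X).fit(cov_type="HC1")  # heteroskedasticity-robust SEs
print(ols.params["URRBMI"])  # beta_2 < 0 would support H1
```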
[Figure 1. Sample selection procedure of this study.]

Benchmark regression results

In this section, ordinary least squares models were used to estimate the effect on the physical health of the rural older adult. The estimated results are shown in Table 3. All the coefficients of URRBMI are significantly negative at the 1% level, which means that URRBMI promotes the physical health of the rural older adult and confirms H1.

For the control variables, most of the estimates are in agreement with theoretical expectations. Specifically, the coefficients of Gender, Age, and Residence are positive at the 5% level, which suggests that better health among the rural older adult is more apparent among males, the younger, and those living alone. Furthermore, at the 1% level, the coefficients of Marriage, Education, Smoke, Drink, Exercise, and Examination are all negative. The findings show that married rural older adult have better physical health status, and that years of education, regular exercise, and regular physical examination promote the physical health of the rural older adult.

Robustness test

In this study, we use the method of replacing the dependent variable for the robustness test, and since ADL-1 is a dummy variable, we adopt a binary logistic regression model to estimate the results. The results are given in Columns (1)-(2) of Table 4. The coefficients of URRBMI are significantly negative at the 5% level, which means that URRBMI improves the physical health of the rural older adult and suggests that URRBMI has a protective effect on their physical health. The outcome is in accordance with the previous results, suggesting that the results are highly robust and further supporting the conclusions of this study. The results for the control variables are also in agreement with those obtained from the ordinary least squares models.

Endogeneity test

There may be bi-directional causality between URRBMI and the health status of the rural older adult. Generally speaking, rural older adult in poorer health are more inclined to participate in URRBMI; this leads to an endogeneity problem, as the health status of the rural older adult inversely affects the decision of whether or not to participate in URRBMI (53). Therefore, we address the endogeneity problem using the instrumental variable method (54); a hedged two-stage sketch follows the variable definitions below. Columns (3)-(4) of Table 4 present the estimation outcomes of the endogeneity test. The coefficient of URRBMI is still negative; the result agrees with the previous findings and further demonstrates our conclusion. Compared with not controlling for endogeneity, we also find that the regression coefficient of URRBMI decreases after controlling for endogeneity, suggesting that the impact of URRBMI in promoting the physical health of the rural older adult is underestimated if endogeneity is not addressed.

Table 1. The definitions of all variables.
Dependent variable: ADL (activities of daily living), a 6-18 scale representing the physical health of the rural older adult; ADL-1 equals 1 if the rural older adult is restricted in any of the six daily activities, and 0 otherwise.
Independent variable: URRBMI, equal to 1 if the rural older adult participated in URRBMI, and 0 otherwise.
Mediating variables: Satisfaction (life satisfaction), equal to 1-5, representing very good, good, so-so, bad, and very bad; Service (timeliness of medical services), equal to 1 if the rural older adult can get timely medical services, and 2 otherwise.
Instrumental variable: Expense (who mainly pays for medical services), equal to 1-3, representing medical insurance payment, out-of-pocket payment, and others.
Control variables: Gender, equal to 1 if the rural older adult is male, and 2 otherwise; Marriage, equal to 1 if the rural older adult is currently married, and 0 otherwise; Education, 0-16, indicating years of education; Smoke, equal to 1 if the rural older adult smokes, and 0 otherwise; Drink, equal to 1 if the rural older adult drinks, and 0 otherwise; Exercise, equal to 1 if the rural older adult exercises, and 0 otherwise; Examination, equal to 1 if the rural older adult has regular physical examinations, and 0 otherwise; Residence (co-residence), equal to 0-2, representing living alone, with household members, or in an institution.
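The endogeneity correction referenced above can be illustrated with a manual two-stage least squares on simulated data: the instrument Expense (who mainly pays for medical services) predicts URRBMI enrolment in the first stage, and its fitted values replace URRBMI in the second stage. The data-generating values are invented, and, as the comment notes, naive second-stage standard errors understate uncertainty; a dedicated IV routine should be used for real inference.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 9551
expense = rng.integers(1, 4, n)   # 1 = insurance pays, 2 = self-pay, 3 = other
u = rng.normal(0, 1, n)           # unobserved health shock -> endogeneity
urrbmi = ((expense == 1).astype(float) + 0.3 * u
          + rng.normal(0, 1, n) > 0.5).astype(float)
adl = 10.0 - 0.4 * urrbmi + u + rng.normal(0, 1, n)

# Stage 1: regress the endogenous regressor on the instrument.
stage1 = sm.OLS(urrbmi, sm.add_constant(expense.astype(float))).fit()

# Stage 2: replace URRBMI with its first-stage fitted values.
stage2 = sm.OLS(adl, sm.add_constant(stage1.fittedvalues)).fit()
print(stage2.params)  # slope approximates the causal effect; note that these
                      # second-stage SEs are too small for valid inference
```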
Heterogeneity analysis

We also investigate the heterogeneous effects across regions and ages. The estimation results are shown in Table 5. Columns (1), (2), and (3) are the results for the western, central, and eastern areas. Columns (4)-(5) are the outcomes for the younger rural older adult and the advanced rural older adult.

The URRBMI coefficients were significantly negative at the 1% level in the eastern region, but there is no significant influence in the central and western regions. The influence of URRBMI on the physical health of the rural older adult thus varies across regions. Hence, this finding confirms H2a: the effect of URRBMI on the physical health of the rural older adult in eastern areas is more significant than in central and western areas.

Age was classified into two groups: those aged 65-80 are considered the younger rural older adult, and those aged over 80 are considered the advanced rural older adult. Columns (4)-(5) in Table 5 present the results: the coefficient of URRBMI is significantly negative at the 1% level for the advanced rural older adult, while it is not significant for the younger rural older adult. The result suggests that URRBMI promotes the health status of the advanced rural older adult but has no significant influence on the younger rural older adult. Hence, this finding confirms H2b: the effect of URRBMI on the physical health of the advanced rural older adult is more significant than for the younger rural older adult.

Mediating effect

Mediated-effects analysis can help researchers verify the processes and mechanisms of factor interactions. This study uses Hayes' identification methodology and test steps to examine the mediating effect (55,56). The model is set as follows:

ADL_i = η1 + η2 URRBMI_i + η3 Mediator_i + η4 Controls_i + ε_i    (2)

where Mediator indicates the Satisfaction or Service variable, and the other variables are the same as in Model (1). If η2 and η3 are both significant, it suggests that life satisfaction and timeliness of medical services are partial mediating variables; if η2 is not significant but η3 is significant, it suggests that they are full mediating variables.
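Before turning to the estimates, a runnable sketch of this mediation check may help. It mirrors the usual two-equation mediation logic (mediator on URRBMI, then outcome on both), with η2 corresponding to the direct URRBMI coefficient and η3 to the mediator coefficient in the second regression. The data are simulated and the control set is omitted, so this is an illustration of the test structure, not the paper's exact specification.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 9551
urrbmi = rng.integers(0, 2, n)
# Mediator coded like Satisfaction in Table 1 (lower = better satisfaction)
satisfaction = 3 - 0.3 * urrbmi + rng.normal(0, 1, n)
adl = 8 - 0.2 * urrbmi + 0.5 * satisfaction + rng.normal(0, 1, n)
df = pd.DataFrame({"URRBMI": urrbmi, "Satisfaction": satisfaction, "ADL": adl})

# Step 1: does URRBMI predict the mediator?
step1 = sm.OLS(df["Satisfaction"], sm.add_constant(df[["URRBMI"]])).fit()
# Step 2: regress ADL on URRBMI and the mediator together;
# eta2 = direct URRBMI effect, eta3 = mediator effect.
step2 = sm.OLS(df["ADL"], sm.add_constant(df[["URRBMI", "Satisfaction"]])).fit()
print(step1.params["URRBMI"], step2.params["URRBMI"], step2.params["Satisfaction"])
# eta2 and eta3 both significant -> partial mediation;
# eta2 non-significant but eta3 significant -> full mediation.
```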
Table 6 reports the results of the mediation analysis. After adding the two mediating variables, life satisfaction and timeliness of medical services, respectively, we find that the coefficients of life satisfaction and timeliness of medical services are significant at the 1% level, suggesting that URRBMI improves the physical health of the rural older adult by increasing life satisfaction and by enhancing the timeliness of medical services, respectively.

Discussion and recommendations

As an important medical security system design in China, URRBMI undertakes a number of missions, such as ensuring health rights and safeguarding health justice (57). The study indicates that URRBMI is consistent with the fundamental goal of improving people's health.

(1) In this study, we found that URRBMI can promote the physical health of the rural older adult. The results are consistent with previous studies: medical insurance can significantly improve the health status of the older adult (58-61). URRBMI can effectively reduce medical expenditures and increase the probability that the rural older adult have access to higher-quality medical services, which promotes their physical health (62). Besides, with increasing age, the physical functions of the rural older adult deteriorate and their physical health worsens. The study also suggests that marriage is a protective factor for the physical health of the rural older adult, consistent with the findings of Fuhrer's study (63). Regular exercise helps strengthen the immune system and thus reduces the likelihood of disease, so rural older adult who exercise regularly are in better physical health.

Due to the positive physical health implications of URRBMI for the rural older adult, it is quite necessary to further improve its participation rate (64). We should improve the design of URRBMI policy. Continuously expanding the coverage of URRBMI is a precondition for promoting the physical health of the rural older adult. Expansion of medical insurance coverage significantly increases medical services utilization (65). In particular, it is difficult to ensure the sustainability of broad coverage under the current policy of voluntary participation. To further broaden the coverage of URRBMI, consideration could be given to compulsory participation in URRBMI for the rural older adult.

(2) The study's results suggest that URRBMI contributes more significantly to the rural older adult in the eastern area and to the advanced rural older adult. Because sophisticated and high-quality health resources are largely distributed in the eastern region (24), the rural older adult in the eastern area enjoy a higher level of URRBMI coverage, express higher medical demand, and can obtain medical services more easily (66). However, economic development is slower in the central and western areas. Even though URRBMI has increased the demand for medical services by the rural older adult, this demand still cannot be met within the constraints of the existing medical conditions, so the promotion effect of URRBMI is not significant there. Furthermore, as age increases, the physical health of the advanced rural older adult deteriorates and they need to consume more medical services to maintain their health (67), and the frequency and intensity with which the advanced rural older adult use URRBMI are higher than for the younger rural older adult.
Due to the disparity in economic levels among the eastern, central, and western areas, there are regional differences in the impact of URRBMI on the physical health of the rural older adult. We should narrow the basic medical insurance compensation gap among the eastern, central, and western areas. Specifically, we should continue to increase financial subsidies for URRBMI in the central and western areas, gradually raise the overall level of URRBMI, and minimize regional disparities. Reducing regional disparities in URRBMI reimbursement contributes to achieving the regional equalization of basic medical insurance services. Meanwhile, generous insurance reimbursement can decrease the price of medical services (68), which is particularly important for promoting the physical health of the advanced older adult. We should maintain the balance of income and expenditure of the medical insurance fund, reduce the deductible threshold, raise the reimbursement ceiling, and increase the actual insurance reimbursement rate to alleviate the financial burden of illness for the rural older adult, and especially provide more precise safeguards for the advanced rural older adult.

(3) The results also showed that URRBMI improves the physical health of the rural older adult by increasing life satisfaction and enhancing the timeliness of medical services. On the one hand, URRBMI has reduced the burden of medical expenses on the rural older adult and relatively increased their regular income; thus the rural older adult can spend more of their income on areas such as daily leisure consumption and preventive health care, and their quality of life improves accordingly, which in turn increases their life satisfaction and improves their physical health. On the other hand, timeliness of medical services shortens the time to acquire medical services and enhances the availability of medical services for the rural older adult, thus enhancing their physical health.

To increase timely access to medical services for the rural older adult, we should further integrate the distribution of medical resources. By optimizing the integration of medical resources and improving the efficiency of medical resource allocation, a reasonable and orderly pattern of access to medical services can be formed (69), which significantly improves the spatial accessibility of medical services. This will help the rural older adult obtain various types of medical services close to their homes, so that they can receive medical services immediately when they need them.
Conclusion and limitations

Using the CLHLS data from 2018, this study analyzed the influence of URRBMI on the health status of the rural older adult. We come to the following conclusions. First, URRBMI greatly improves the physical health of the rural older adult, and the results are robust. Second, there are regional and age differences in the impact of URRBMI on the physical health of the rural older adult. URRBMI plays a more vital role in promoting the physical health of the rural older adult in the eastern area. Furthermore, compared with the younger rural older adult, the effect of URRBMI on improving the physical health of the advanced rural older adult is more obvious. Third, we provide additional evidence that life satisfaction and timeliness of medical services play a mediating role in the association between URRBMI and the physical health of the rural older adult in China. The compensation mechanism of URRBMI has relatively lowered the price of medical services and enhanced leisure consumption for the rural older adult, thereby increasing life satisfaction and promoting physical health, and URRBMI guarantees timely access to medical services for the rural older adult, which prevents minor illnesses from becoming serious ones (70).

However, there are the following limitations in our study, and further research is needed. First, the impact of chronic diseases on ADL may be significant, which could affect the analysis results. However, due to the incompleteness of the chronic disease data, chronic disease was not included as a control variable in this study. Second, as the research object of this study is the rural older adult and the relationship between URRBMI and their physical health, this study did not include the urban older adult; the comparison of the two groups could be studied as a new topic in the future.

[Table captions: Table 1, The definitions of all variables. Table 3, The benchmark regression results. Table 4, Regression results of the robustness and endogeneity tests. Table 5, Estimation results of the heterogeneity analysis. Table 6, Results of the mediating effect.]
2024-01-28T16:22:00.015Z
2024-01-26T00:00:00.000
{ "year": 2024, "sha1": "bbb95e15708221dec0f442f9a913393d6633dd54", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fpubh.2024.1319697/pdf?isPublishedV2=False", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "e25e58a3c2288b8213e00f0263d85ad0f559c452", "s2fieldsofstudy": [ "Medicine", "Economics" ], "extfieldsofstudy": [ "Medicine" ] }
24249295
pes2o/s2orc
v3-fos-license
A Novel Rac1-GSPT1 Signaling Pathway Controls Astrogliosis Following Central Nervous System Injury*

Astrogliosis (i.e., glial scar), which is comprised primarily of proliferated astrocytes at the lesion site and migrated astrocytes from neighboring regions, is one of the key reactions in determining outcomes after CNS injury. In an effort to identify potential molecules/pathways that regulate astrogliosis, we sought to determine whether Rac/Rac-mediated signaling in astrocytes represents a novel candidate for therapeutic intervention following CNS injury. For these studies, we generated mice with Rac1 deletion under the control of the GFAP (glial fibrillary acidic protein) promoter (GFAP-Cre;Rac1flox/flox). GFAP-Cre;Rac1flox/flox (Rac1-KO) mice exhibited better recovery after spinal cord injury and exhibited reduced astrogliosis at the lesion site relative to control. Reduced astrogliosis was also observed in Rac1-KO mice following microbeam irradiation-induced injury. Moreover, knockdown (KD) or KO of Rac1 in astrocytes (LN229 cells, primary astrocytes, or primary astrocytes from Rac1-KO mice) led to delayed cell cycle progression and reduced cell migration. Rac1-KD or Rac1-KO astrocytes additionally had decreased levels of GSPT1 (G1 to S phase transition 1) expression and reduced responses of IL-1β and GSPT1 to LPS treatment, indicating that IL-1β and GSPT1 are downstream molecules of Rac1 associated with the inflammatory condition. Furthermore, GSPT1-KD astrocytes had cell cycle delay, with no effect on cell migration. The cell cycle delay induced by Rac1-KD was rescued by overexpression of GSPT1. Based on these results, we propose that Rac1-GSPT1 represents a novel signaling axis in astrocytes that accelerates proliferation in response to inflammation, which is one important factor in the development of astrogliosis/glial scar following CNS injury.

Severe CNS injury induces infiltration of macrophages, leukocytes, and lymphocytes into the lesion and proliferation and migration of resident glial cells, astrocytes and microglia, around the lesion site (18,19). During the acute phase of the injury, astrocytes increase in number and migrate to the site of the injury to isolate the inflammatory region from neighboring tissue. During the subacute and chronic phases, astrocytes form a physical barrier that is referred to as a glial scar, particularly in severe SCI. The glial scar surrounding the lesion has dual effects: a beneficial effect that minimizes the inflammatory region during the acute phase of injury and a detrimental effect that restricts neuronal regeneration during the subacute and chronic phases of injury (18,19). Thus, efficient control of the degree of astrogliosis/glial scar and appropriate timing of therapeutic intervention against astrogliosis/glial scar may be important for achieving better recovery from SCI.

To investigate whether the Rac/Rac-mediated signaling pathway in astrocytes is a novel candidate for therapeutic modalities following CNS injury, we generated astrocyte-specific Rac1-KO (GFAP-Cre;Rac1flox/flox) mice. Rac1-KO mice exhibited better recovery from SCI and reduced astrogliosis following CNS injury relative to control mice. Depletion or deletion of Rac1 in astrocytes delayed cell cycle progression and reduced cell migration. We also found that the GSPT1 (G1 to S phase transition 1) protein is a downstream molecule of Rac1 signaling in astrocytes. GSPT1/eRF3 was first identified as a molecule involved in the G1 to S phase transition in Saccharomyces cerevisiae (20).
Subsequently, GSPT1 was reported to mediate translation termination via the eRF1-eRF3 complex in eukaryotes (21,22). Expression levels and responses of IL-1β and GSPT1 to LPS treatment were reduced in astrocytes with Rac1 depletion or deletion. GSPT1 depletion induced cell cycle delay, and the cell cycle delay induced by Rac1 depletion was rescued by overexpression of GSPT1. Thus, we propose that Rac1-GSPT1 is a novel signaling axis that accelerates the proliferation of astrocytes during inflammation, which is one important factor in the development of astrogliosis/glial scar after CNS injury.

Better Recovery of Rac1-KO Mice after SCI-We developed SCI in Rac1-KO mice via a contusion injury and evaluated the locomotor capabilities of their hind limbs at 18 time points (1, 3, 5, 7, 9, 11, 13, 15, 17, 19, 21, 23, 25, 27, 29, 31, 33, and 35 days post-injury) over 35 days using two scoring systems: the 0-8-point Basso Mouse Scale (BMS) score (25) and the 0-4-point Body Support Scale (BSS) score (26). The BMS and BSS evaluate hind limb movement and body support, respectively. Recovery of locomotor capability after SCI was significantly better in Rac1-KO mice compared with control mice for both scoring systems. Better recovery in Rac1-KO mice was observed from 7 days after SCI (but not significant for BMS, p = 0.124). Recovery was statistically improved for both scores from 9 days after SCI until 35 days after SCI (Fig. 2A). To examine histological differences between Rac1-KO mice and control mice, the spinal cord was fixed at 35 days after SCI. GFAP immunoreactivity around the injury sites (100 µm from the lesion) was significantly weaker in Rac1-KO mice compared with control mice (Fig. 2B). As an alternative approach to demonstrate GFAP immunoreactivity in Rac1-KO mice after CNS injury, we applied microplanar beam irradiation (100-µm width at 550 Gy with 400-µm gaps (center-to-center distance)), supplied by the SPring-8 synchrotron radiation facility (27), to the cerebellar cortex and brainstem. The injury caused by high-energy X-rays is limited to a narrow region (27,28). GFAP-positive immunoreactivity in a linear band surrounding the irradiated lesions was weaker in the brainstem of Rac1-KO mice compared with control mice (Fig. 2C). Together, these results strongly suggest that immunoreactivity to GFAP, namely astrogliosis, after CNS injury was reduced by Rac1-KO in astrocytes.

FIGURE 1. Rac1-KO in astrocytes. A, RT-PCR was performed using cDNA obtained from WT primary astrocytes and specific primer pairs for Rac1, Rac2, and Rac3. The predicted sizes of the amplified Rac1, Rac2, and Rac3 bands are 454, 581, and 440 bp. NC, negative control (without cDNA). B, primary astrocytes obtained from control and GFAP-Cre;Rac1flox/flox (Rac1-KO) mice were subjected to immunoblotting using a Rac1 antibody. Comparable loading of proteins was confirmed using a tubulin-α antibody. C, spinal cords obtained from GFAP-Cre;Rac1flox/+;tdTomato mice were subjected to immunostaining using a GFAP antibody followed by an Alexa 488 secondary antibody and then observed under a confocal laser microscope. Scale bar, 200 µm. D, magnified images of the area indicated by the rectangles in C are shown. Scale bar, 100 µm. E, spinal cords obtained from control (Rac1flox/+;tdTomato) mice were observed under a confocal laser microscope. DIC, differential interference image. Scale bar, 200 µm.
Reduced Proliferation and Migration in Rac1-KD and Rac1-KO Astrocytes-We hypothesized that the reduced GFAP immunoreactivity in two different CNS-injury models is due to reduced proliferation and/or migration following Rac1-KO in astrocytes, thereby leading to reduced astrogliosis. To examine the contribution of Rac1 to cell cycle progression and cell migration, we used the Fucci system (29) and the scratch wound assay, respectively, under a long-term, time-lapse live-imaging system. Rac1-KD in LN229 cells, a cell line derived from a human glioblastoma, was achieved using a verified plasmid expressing shRNA for Rac1 (6) (Fig. 3A). Cell cycle progression, defined as the time from one cytokinesis to another cytokinesis, was significantly longer in Rac1-KD LN229 cells than in control cells (Fig. 3B). The G1 phase was significantly extended by Rac1-KD (14.60 ± 1.09 h versus 19.64 ± 1.87 h); however, the S-M phase, defined by the green fluorescence of Venus-tagged hGeminin, was not changed by Rac1-KD (Fig. 3B). Cell migration, evaluated using the scratch wound assay, was also significantly reduced in Rac1-KD LN229 cells compared with control cells (168.7 ± 21.8 µm versus 40.6 ± 7.3 µm) (Fig. 3C). Both delayed cell cycle progression and reduced cell migration were also observed in Rac1-KD primary astrocytes (Fig. 4, A-C). The reduced cell migration in Rac1-KO primary astrocytes was confirmed using a CytoSelect cell migration assay kit (Fig. 4D).

GSPT1 Is a Downstream Effector of Rac1 Signaling-LPS has been reported to induce Rac1-mediated up-regulation of various proteins, including the pro-inflammatory cytokine IL-1β (30). IL-1β is a key driver of the inflammatory response and of astrogliosis induced by brain damage, including SCI (31) and ischemic brain injury (32). First, we confirmed increased protein levels of IL-1β following LPS treatment in primary astrocytes from WT mice, as well as reduced IL-1β expression following LPS treatment in primary astrocytes from Rac1-KO mice. These findings indicate the existence of a Rac1-dependent transcriptional pathway enhanced by LPS treatment (Fig. 5A). Second, to identify novel downstream molecules in Rac1 signaling associated with the cell cycle and cell migration, we performed a DNA microarray using control and Rac1-KD LN229 cells treated with LPS (Fig. 5B), from which reduced levels of GSPT1 in Rac1-KD cells were detected compared with control cells. Decreased protein levels of GSPT1 were confirmed in LN229 cells using two different siRNAs for Rac1, as well as in primary astrocytes from Rac1-KO mice (Fig. 5C). In addition, GSPT1 was increased after treatment with LPS, and decreased expression of GSPT1 following LPS treatment was observed in Rac1-KD LN229 cells (Fig. 5D). Furthermore, the increase in GSPT1 levels following LPS treatment was inhibited by a JNK inhibitor (JNK-IN-8), an ERK inhibitor (U0126), and an NF-κB inhibitor (BAY 11-7085), but not by a p38 MAP kinase inhibitor (SB203580) (Fig. 5E). These results suggest that GSPT1 is transcriptionally regulated by Rac1 through the activation of at least JNK, ERK, and NF-κB and that GSPT1 is up-regulated and continues to function after CNS injury, which induces inflammation.

The Rac1-GSPT1 Signaling Axis Is Involved in the Cell Cycle but Not Cell Migration-To examine the roles of Rac1 and GSPT1 in cell cycle progression, we analyzed the cell cycle under long-term, time-lapse live imaging in HeLa cells (a cell line other than LN229).
The cell cycle of HeLa cells was significantly prolonged following KD using two different siRNAs for GSPT1 (control: 18.47 ± 0.36 h; si620: 22.93 ± 0.40 h; si1374: 23.25 ± 0.61 h) (Fig. 6, A and B). Furthermore, to determine whether GSPT1 controls cell migration, we employed a scratch wound assay. Cell migration was not affected by either of two siRNAs for GSPT1 in LN229 cells (Fig. 7, A and B). The absence of an effect of GSPT1-KD on cell migration in primary astrocytes was also confirmed using a CytoSelect cell migration assay kit (20 nM of si-cont and siGSPT1-620m; Fig. 7, C and D). These results suggest that GSPT1 is a downstream effector in Rac1 signaling and that the Rac1-GSPT1 signaling axis is involved in cell cycle regulation but not cell migration.

FIGURE 2. Better recovery of locomotor function after SCI and reduced astrogliosis after CNS injuries in Rac1-KO mice. A, BMS and BSS scores were recorded from days 1 to 35 after SCI. From day 9 after SCI, both hind limb movement and body support capability in Rac1-KO mice were significantly better than in control (cont) mice. This significant difference was sustained until day 35 (control: n = 14 hind limbs; Rac1-KO: n = 10 hind limbs; *, p < 0.05; **, p < 0.01; ***, p < 0.001 by Bonferroni's post hoc test following two-way ANOVA). B, sagittal sections of spinal cords from control and Rac1-KO mice 35 days after SCI were immunostained using a GFAP antibody and an Alexa 488 secondary antibody. Immunoreactivity to GFAP in the area 100 µm rostral and caudal from the edge of the lesion (indicated by red lines) is shown (control: n = 12 sections from 3 mice; Rac1-KO: n = 12 sections from 4 mice; *, p = 0.0420). Scale bars, 100 µm. C, coronal sections of the right cerebellum and brainstem from control and Rac1-KO mice 21 days after microbeam irradiation injury (horizontally propagating multibeams, 100-µm width with 400-µm gaps between them) were immunostained using a GFAP antibody. GFAP-positive immunoreactivity in a linear band surrounding the irradiated lesions of the brainstem is shown (control: n = 9 ROIs from 3 mice; Rac1-KO: n = 9 ROIs from 3 mice; **, p = 0.0046). The right panels are magnified images of the regions indicated by the rectangles in the left panels. Scale bars, 500 µm.

Discussion

Although current evidence suggests that astrogliosis is beneficial during the initial/early/acute stages of CNS injury by isolating the lesion from inflammation, astrogliosis is detrimental at later/chronic stages of injury because of inhibition of neural regeneration (18). Activation of astrocytes, which results in a glial scar after severe CNS injury, starts with immediate infiltration of macrophages/leukocytes/lymphocytes into the lesion and activation of microglia. However, this activation persists longer than the reactions needed to isolate and sequester the inflammation (18). In the present study, Rac1-KO mice exhibited better functional recovery than control mice, starting from 7 days after SCI (statistically significant from 9 days after SCI) until 35 days after SCI (Fig. 2A). Rac1-KO mice also exhibited mild suppression of astrogliosis at 35 days following SCI compared with control mice. In contrast, severely reduced astrogliosis after SCI was reported in conditional STAT3-KO mice, resulting in remarkably worse motor deficits than in control mice (33,34). Thus, mildly suppressed astrogliosis may lead to reduced detrimental effects of astrogliosis during the chronic stages of SCI.
In addition to mildly suppressed astrogliosis at 35 days after SCI in Rac1-KO mice, Rac1-KO astrocytes also exhibited reduced production of IL-1β following LPS treatment. Inhibition of C5aR, a G-protein-coupled receptor for the complement protein C5a, was reported to have dual effects on locomotor recovery after SCI: a beneficial effect in the first 7 days after SCI by inhibiting production of various pro-inflammatory cytokines, including IL-1β, and a detrimental effect after the first 7 days by inhibiting astrogliosis (35). Thus, the better functional recovery after SCI in Rac1-KO mice may be due to dual beneficial effects of Rac1 deletion: reduced inflammation in the acute/subacute phase and reduced astrogliosis in the subacute/chronic phase.

The main compartment of the glial scar is believed to be formed by proliferated astrocytes around the lesion, as well as infiltrating astrocytes from neighboring regions (18,33). In contrast, using a stab wound cortical injury model, which is probably a weaker injury than SCI or brain infarction, Bardehle et al. (36) have shown that astrogliosis is not associated with astrocyte migration from neighboring regions. In addition, the authors report that astrocyte proliferation in the specific juxtavascular niche in the brain parenchyma may play important roles in astrogliosis. Although further study is required for conclusions regarding the contribution of astrocyte migration from neighboring regions to astrogliosis after SCI or brain infarction, in the present study we found that Rac1 regulates both the proliferation and the migration of astrocytic cells.

Involvement of Rac1 in cell migration is well studied (37,38); however, the precise mechanism by which Rac1 regulates cell proliferation remains unclear. Although Rac1 was reported to be a negative regulator of cytokinesis (39,40), we found delayed cell cycle progression, in particular an elongated G1 phase, induced by Rac1-KD. Promotion of the G1 to S phase transition by Rac1 was reported to be regulated via increased levels of cyclin D1 (41,42) through either NF-κB-dependent (5,43) or -independent (44) mechanisms. Chauvin et al. (45) reported that GSPT1 depletion induced decreased levels of cyclin D1 by inhibition of translation initiation via the mTORC1 pathway, which promotes translation initiation, rather than by inhibition of translation termination. Although the precise mechanism responsible for Rac1 control of GSPT1 levels is still unknown, GSPT1 may be transcriptionally regulated by Rac1 through the activation of JNK, ERK, and NF-κB, but not p38 MAP kinase. Thus, the Rac1-GSPT1 signaling axis plays a critical role in astroglial growth.

In addition to the involvement of GSPT1 in the cell cycle, GSPT1 has been reported to be involved in cell migration (46,47). However, no effect of GSPT1-KD on cell migration was observed in the present study. The reason for the discrepancy between the present study and previous reports is unknown but may be due to the specific cell lines used. Xiao et al. (47) reported that HCT116 colorectal cancer cells exhibited high-level expression of GSPT1. Given the significant reduction of migration in Rac1-KD/KO, but not GSPT1-KD, astrocytes, GSPT1 is unlikely to be involved in cell migration in this context. Rac1 is a known tumor progression factor because of its roles in cell migration/invasion and cell proliferation (48,49). Recently, several activating mutations of RACs, including RAC1 and RAC2, have been reported as oncogenic driver genes in human melanoma and cancer cell lines (50-52).
More recently, genome-wide association studies have shown that testicular germ cell tumors are susceptible to increased GSPT1 expression (53). Given the novel Rac1-GSPT1 signaling axis found in the present study, GSPT1 may participate in an oncogenic mechanism as a downstream target of active RAC1. Although Rac1-GSPT1 signaling is involved in cell cycle progression from G1 to S phase, namely cell proliferation, this involvement does not seem to explain the entirety of the effects of Rac1 on the better recovery of Rac1-KO mice after SCI. Cooney et al. (54) reported that the Nox most responsive in astrocytes and microglia after SCI is Nox2, a Rac1-activated Nox; in contrast, the expression of Nox4, a Rac-independent Nox, was constant over time in astrocytes. The group reported an increase in Nox2 in astrocytes at 24 h and 7 days after SCI, as well as reduced expression of pro-inflammatory cytokines upon systemic administration of a Nox2 inhibitor (54). Thus, Rac-activated Nox2 in astrocytes may be one factor underlying the better recovery of Rac1-KO mice after SCI.

In summary, we found that mild suppression of astrogliosis promotes better functional recovery after SCI and that Rac1 in astrocytes is a potential target for developing new therapeutic modalities for CNS injury. Moreover, we identified GSPT1 as a novel downstream target of Rac1 that promotes cell proliferation through progression of the G1 to S phase transition. GSPT1 may be a more powerful target for cancer therapy in addition to therapy against CNS injury. Further study will be required to define the precise mechanisms by which Rac1 regulates GSPT1.

[FIGURE 4 legend; the beginning is truncated in the source] ...(618) + GFP plasmid) into WT primary astrocytes, Rac1 expression levels were evaluated using a Rac1 antibody. Comparable loading of proteins was confirmed using a GAPDH antibody. Right panel, primary astrocytes obtained from Rac1flox/flox;tdTomato (control, cont) and GFAP-Cre;Rac1flox/flox;tdTomato (Rac1-KO) mice were subjected to immunoblotting using a Rac1 antibody. Comparable loading of proteins was confirmed using a GAPDH antibody. B, the cell cycle was evaluated using an LCV110 microscope from 48 to 120 h after electroporation in the experiment using WT primary astrocytes (siRNA + GFP plasmid) or after preparation on a glass-bottomed dish in the experiment using primary astrocytes from Rac1-KO (with tdTomato) and control mice. The left and right pairs in the graph show data obtained using Rac1-KD astrocytes (control: n = 79; Rac1-KD: n = 40; **, p = 0.0003) and Rac1-KO astrocytes (control: n = 80; Rac1-KO: n = 82; **, p < 0.0001), respectively. C, 48-120 h after electroporation in the experiment using WT primary astrocytes (siRNA + GFP plasmid) or in the preparation on the glass-bottomed dish in the experiment using primary astrocytes from Rac1-KO (with tdTomato) and control mice, cell migration capabilities were evaluated using an LCV110 microscope. The left and right pairs in the graph show data obtained using Rac1-KD astrocytes (control: n = 31; Rac1-KD: n = 65; **, p < 0.0001) and Rac1-KO astrocytes (control: n = 47; Rac1-KO: n = 39; **, p < 0.0001), respectively. D, 24 h after preparing the primary astrocytes from control and Rac1-KO mice in 24-well inserts, cell migration capabilities were assayed using a CytoSelect migration assay kit (control: n = 10; Rac1-KO: n = 6; **, p < 0.0001).

SCI Model Experiments-14-20-week-old Rac1-KO mice and control mice were used.
The mice were anesthetized via administration of trichloroacetaldehyde monohydrate (500 mg/kg, i.p.). After the mice had completely lost their righting reflex, surgical procedures to produce SCI were performed as described previously (26) with slight modifications. Contusion injuries were produced by dropping a 6.5-g weight from a height of 7 mm once onto the exposed dura mater at the lumbar L1 level of the spinal cord using a stereotaxic instrument (Narishige, Tokyo, Japan). The mice were allowed to recover for 35 days. For behavioral scoring, the mice were placed individually in an open field (23.5 cm × 16.5 cm × 12.5 cm) and observed for 5 min. Open-field locomotion focused on each hind limb was evaluated using the 0-8-point BMS locomotion scale (25) and the 0-4-point BSS locomotion scale (26).

FIGURE 5. Reduced expression of GSPT1 in Rac1-KD and -KO astrocytes. A, primary astrocytes obtained from control and Rac1-KO mice were treated with or without LPS (0.5 µg/ml) for 24 h. Expression levels of IL-1β were evaluated by immunoblotting using an IL-1β antibody. Rac1-KO and comparable loading of proteins were confirmed using a Rac1 antibody and a tubulin-α antibody, respectively. The arrow indicates the IL-1β bands. B, LN229 astrocytic cells transfected with pSUPER (sh-cont) or shRac1(618) were treated with LPS (0.5 µg/ml) for 24 h. Reduced expression levels of Rac1 and IL-1β and comparable loading of proteins were confirmed using a Rac1 antibody, an IL-1β antibody, and a tubulin-α antibody, respectively. The arrow indicates the IL-1β bands. C, Rac1 was knocked down via transfection of 2.5 nM siRNAs (si-cont, siRac1(618), or siRac1(1977)) in LN229 cells. Primary astrocytes were prepared from control and Rac1-KO mice. Reduced expression levels of GSPT1 were evaluated using a GSPT1 antibody. Rac1-KD/KO and comparable loading of proteins were confirmed using a Rac1 antibody and a GAPDH antibody, respectively. D, 2.5 nM siRNAs (si-cont or siRac1(618)) were transfected into LN229 cells 24 h prior to LPS treatment. After LPS treatment (0.5 µg/ml) for 24 h, GSPT1 levels were evaluated using a GSPT1 antibody. Rac1-KD and comparable loading of proteins were confirmed using a Rac1 antibody and a GAPDH antibody, respectively. E, LN229 cells were simultaneously treated with LPS (1.0 µg/ml) and one of four inhibitors at the indicated concentrations (µM; JNK-IN-8, SB203580, U0126, or BAY 11-7085) for 16 h. After the treatment, GSPT1 levels were evaluated using a GSPT1 antibody. Comparable loading of proteins was confirmed using a GAPDH antibody.

Microplanar Microbeam Irradiation at SPring-8-8-12-week-old Rac1-KO mice and control mice were used. The SPring-8 synchrotron facility (Japan Synchrotron Radiation Research Institute, RIKEN, Sayo, Japan) was used to supply microplanar beam irradiation. The radiation beam traveled in a vacuum transport tube with minimized air scattering of the primary beam. X-rays were emitted from the vacuum tube into the atmosphere after first passing through a beryllium vacuum window and then into a 2.0-m helium beam path consisting of an aluminum tube and a thin aluminum helium window located 42 m from the synchrotron radiation output. The sample positioning system was placed 2.5 m from the thin aluminum window. This beamline produces nearly parallel X-rays, and the mice were irradiated at a position 2.5 m from the thin aluminum window.
Cell cycle delay by GSPT1-KD and rescue of cell cycle delay induced by Rac1-KD via overexpression of GSPT1. A, 5 and 10 nM of control (si-cont) or two GSPT1 siRNAs (si620 or si1374) were co-transfected with Venus-hGeminin plasmid into HeLa cells. At 48 h after transfection, expression levels of GSPT1 were evaluated using a GSPT1 antibody. Comparable loading of proteins was confirmed using a GAPDH antibody. B, 14 -96 h after transfection (10 nM of siRNA ϩ Venus-hGeminin plasmid) into HeLa cells, the cell cycle time of Venus-hGeminin transfected cells was observed under an LCV110 microscope (si-cont: n ϭ 68, si620: n ϭ 85, si1374: n ϭ 56; **, p Ͻ 0.0001). C, 2.5 and 5 nM of si-cont or siRac1(618) was co-transfected with the GFP plasmid into HeLa cells. For the rescue experiment, Rac1 siRNA ϩ GFP-GSPT1 plasmid was co-transfected into HeLa cells. At 48 h after transfection, expression levels of Rac1, GSPT1, overexpressed GFP-GSPT1, and GFP were examined by immunoblotting using a Rac1, GSPT, and GFP antibody, respectively. Comparable loading of proteins was confirmed using a GAPDH antibody. D, from 24 to 96 h after transfection (siRac1(618) ϩ GFP or GFP-GSPT1 plasmid) into HeLa cells, the cell cycle time of GFP or GFP-GSPT1 transfected cells was observed under an LCV110 microscope (2.5 nM siRac1, GFP: n ϭ 171, GFP-GSPT1: n ϭ 171; **, p ϭ 0.0003; and 5 nM siRac1, GFP: n ϭ 148, GFP-GSPT1: n ϭ 77; **, p Ͻ 0.0001). E, 10 and 20 nM of si-cont or siGSPT1(620m) were co-electroporated with the GFP plasmid into the primary astrocytes. 60 h after electroporation, the expression levels of GSPT1 were evaluated using a GSPT1 antibody. Comparable loading of proteins was confirmed using a GAPDH antibody. F, from 48 to 120 h after electroporation (20 nM of siRNA ϩ GFP plasmid) of WT primary astrocytes, the cell cycle time of the GFP-transfected cells was assessed using an LCV110 microscope (si-cont: n ϭ 107, si620m: n ϭ 90; **, p ϭ 0.0001). keV were derived through 3-mm Cu absorbance. The mice were irradiated with a single slit collimator at the same beamline, with multiple horizontal microplanar beams 100-m thick at an extremely high dose of 550 Gy with 400-m gaps between beams on the brain. Anesthetized mice were positioned horizontally in front of the horizontally propagating beams, with the right brain aligned perpendicular to the direction of the beam. The multislit collimator, which produces 10 peak dose areas composed of 100-m width with 400-m gaps between them to process the microplanar beam, was set downstream of the output of the beamline hatch. The details of this multislit irradiation system have been described previously (28). siRNA and shRNA plasmids were transfected into LN229 and HeLa cells using Lipofectamine 3000 (Invitrogen) or RNAiMAX (Invitrogen). siRNAs were transfected into primary astrocytes using a NEPA21 electroporator (Nepa Gene Co., Ltd., Japan). Compared with lipofection, electroporation has been reported to require a higher concentration of siRNAs (56). Thus, 20 nM siRNA was used for electroporation. Cells-LN229 astrocytic cells and HeLa cells were maintained in DMEM (Wako) containing 10% FBS (Nichirei Biosciences, Japan). Primary astrocyte cultures were prepared from mouse cerebral cortex at postnatal day 1 or 2. Dissected cerebral cortexes were dissociated in Eagle's minimal essential medium (Wako) supplemented with 10% FBS, 100 units/ml penicillin, and 100 g/ml streptomycin and were cultured in 25-cm 2 flasks (2 brains/flask) (Corning Inc., Corning, NY). 
After 5-7 days, the flasks were subjected to 2 h of continuous shaking to obtain purified astrocytes. Trypsinized cells and cell lysates were used for the experiments as indicated (only one trypsinization step was used for the primary astrocytes). The percentage of primary astrocytes obtained from GFAP-Cre;Rac1flox/flox;tdTomato mice with tdTomato fluorescence was 80-90%. All cells were maintained in a 5% CO2 humidified incubator at 37°C.

Cell Cycle Analysis-LN229 and HeLa cells were cultured on 35-mm glass-bottomed dishes (gbd; MatTek, Ashland, MA). LN229 cells were transfected with Venus-tagged hGeminin + shRNA expression plasmid (pSUPER(rfp) or shRac1(618rfp)) using Lipofectamine 3000. HeLa cells were transfected with Venus-tagged hGeminin + siRNA (control or siGSPT1). For rescue experiments, HeLa cells transfected with siRac1(618) were simultaneously transfected with pEGFP-C1 or GFP-GSPT1 using Lipofectamine 3000. Starting at 24 h after transfection, the cells were imaged every 20 min for 72 h at 37°C in 5% CO2 using a computer-assisted incubator fluorescence microscope system (LCV110; Olympus). This system enables ultra-long-term imaging of living cells without removing them from culture conditions. LN229 cells with RFP fluorescence and HeLa cells with Venus or GFP fluorescence were analyzed as cells carrying the shRNA or siRNA. The cell cycle (doubling time) was defined as the time from one cytokinesis to the next cytokinesis. The experiments were performed in duplicate, and at least three independent transfection experiments were conducted. siRNA (control, siRac1(618), or siGSPT1(620m)) was electroporated into WT primary astrocytes in combination with the pEGFP-C1 plasmid. Primary astrocytes obtained from GFAP-Cre;Rac1flox/flox;tdTomato (Rac1-KO) mice and control (Rac1flox/flox;tdTomato) mice, or primary astrocytes subjected to electroporation, were cultured on 35-mm gbd. Starting at 48 h after plating on gbd, the cells were imaged every 20 min for 72 h using the live-imaging LCV110 system, as described above. Astrocytes with tdTomato fluorescence and GFP fluorescence were considered to be Rac1-KO cells and cells containing siRNA, respectively.

Scratch Wound (Wound Healing) Assay-LN229 cells were transfected with shRNA expression plasmid (pSUPER(gfp) or shRac1(618gfp)) or pEGFP(C1) + siRNA (control or siGSPT1) using Lipofectamine 3000. At 24 h after transfection, culture media were changed to serum-free media, and the cells were grown for an additional 24 h. Forty-eight hours after transfection, an approximately 1,000-µm-wide section of the cell monolayer was scratched using a sterilized 1,000-µl filter tip and imaged every 20 min for 48 h using the live-imaging LCV110 system (see the "Cell Cycle Analysis" section for details). Cells with GFP fluorescence were analyzed as cells carrying the shRNA or siRNA. Cells whose cell bodies had translocated were considered to be migrated cells. Cell migration was defined as the distance from the leading edge of the cell at the starting time point to the same point on the cell at the ending time point. Experiments were performed in duplicate, and at least three independent transfections were conducted. siRNA (control, siRac1(618), or siGSPT1(620m)) was electroporated into WT primary astrocytes in combination with the pEGFP(C1) plasmid. Primary astrocytes obtained from GFAP-Cre;Rac1flox/flox;tdTomato (Rac1-KO) mice and control mice, or primary astrocytes treated with electroporation, were cultured on 35-mm gbd.
Forty-eight hours after plating on gbd, the cells were imaged every 20 min for 72 h using the live-imaging LCV110 system. Astrocytes with tdTomato fluorescence and GFP fluorescence were considered to be Rac1-KO cells and cells containing siRNA, respectively.

Cell Migration Assay-Migration of primary astrocytes was evaluated using the CytoSelect cell migration assay kit (12-µm pore size, colorimetric format; Cell Biolabs, Inc., San Diego, CA) according to the manufacturer's protocol. Briefly, 2.0 × 10^5 primary astrocytes obtained from GFAP-Cre;Rac1flox/flox;tdTomato (Rac1-KO) mice and control (Rac1flox/flox;tdTomato) mice, or 5.0 × 10^5 WT primary astrocytes treated with electroporation (pEGFP-C3 + 20 nM siRNA (control or siGSPT1(620m))), suspended in serum-free DMEM, were placed in 24-well inserts, and 500 µl of DMEM containing 10% FBS was added to the lower wells of the 24-well plate. After 24 h (Rac1-KO) or 32 h (GSPT1-KD), non-migrating cells on the interior surface of the insert were removed, and migrated cells on the exterior surface of the insert were stained using the stain solution. After extraction of the migrated cells in extraction solution, the absorbance at 560 nm was read using a plate reader (Multiskan GO; Thermo Fisher Scientific). The data are shown as percentages of control.

Section Preparation and Immunohistochemistry-At 35 days after SCI and 21 days after microbeam irradiation injury, the animals were deeply anesthetized using pentobarbital and transcardially perfused with ice-cold 0.9% saline solution and then with 4% paraformaldehyde in 0.1 M phosphate buffer (pH 7.4) (57). Spinal cords and brains were dissected, post-fixed overnight in the same fresh fixative, and then embedded in paraffin. Three-µm sagittal sections of the spinal cord and of the cerebellum with brainstem were obtained (from 5 mm rostral and caudal to the injury site in the case of SCI). After deparaffinization and permeabilization with PBS containing 0.3% Triton X-100 (PBS-0.3T), spinal cord sections were incubated overnight with a GFAP antibody in PBS containing 0.03% Triton X-100 (PBS-0.03T) at 4°C, followed by incubation with Alexa 488-conjugated secondary antibodies for 1 h at 24°C. The cerebellum-with-brainstem sections were incubated overnight with a GFAP antibody in PBS-0.03T at 4°C, followed by diaminobenzidine staining using a Vectastain ABC kit (Vector Laboratories, Burlingame, CA) and counterstaining using Cresyl violet solution (Muto Pure Chemicals, Tokyo, Japan). Quantification of GFAP immunoreactivity was performed as follows. For SCI cases, fluorescent images were captured using a fluorescence microscopy system (Biozero; Keyence, Japan). Immunoreactivity in a 100-µm area from the lesion edge was measured using ImageJ software (National Institutes of Health, Bethesda, MD). Areas that appeared brighter than the background were defined as GFAP-positive areas. The number of pixels was calculated, and the GFAP-positive area was expressed as a percentage of the number of pixels in the entire area. For microbeam irradiation injury cases, diaminobenzidine staining was photographed using a light microscope (Axioplan II; Carl Zeiss) with a DP26 camera (Olympus). Immunoreactivity areas that appeared brighter than the background (as analyzed by ImageJ software) were defined as GFAP-positive areas, which formed a linear band.
The region of interest (ROI) was defined as a 400-µm square centered on the linear immunoreactivity band of the GFAP-positive area, and the GFAP-positive immunoreactivity area was calculated as a percentage of the total area.

Immunoblotting-The cells were lysed in homogenizing buffer (58) by sonication in the presence of protease inhibitor mixture, protein phosphatase inhibitor mixture (Nacalai Tesque, Tokyo, Japan), and 1% Triton X-100. Total lysates were centrifuged at 800 × g for 5 min at 4°C, and the supernatants were subjected to SDS-PAGE, followed by immunoblotting for 2 h at 24°C using primary antibodies diluted in PBS-0.03T. The bound primary antibodies were detected with secondary antibody-HRP conjugates using the ECL detection system (GE Healthcare).

DNA Microarray-LN229 cells were transfected with pSUPER or shRac1(618). At 24 h after transfection, LN229 cells were treated with LPS (0.5 µg/ml) for 24 h, and then total RNAs were extracted using TRIzol (Invitrogen). The quality and quantity of RNA were determined using the Agilent 2100 BioAnalyzer. Gene expression profiles were examined using the SurePrint G3 Mouse Gene Expression 8 × 60K microarray kit (Agilent Technologies, Lexington, MA).

Statistical Analysis-All data are presented as means ± S.E. For comparisons of two groups, unpaired Student's t tests were used. For comparisons of more than two groups, one-way analysis of variance (ANOVA) or repeated-measures two-way ANOVA was performed, followed by Bonferroni post hoc tests of pairwise group differences. Statistical analyses were performed using Prism 6.0 software (GraphPad, La Jolla, CA); p < 0.05 was considered statistically significant.
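The GFAP quantification described under "Section Preparation and Immunohistochemistry" amounts to thresholding pixels above a background level and reporting the positive fraction. Below is a minimal Python sketch of that calculation; the synthetic image, the median-based background estimate, and the function name are illustrative assumptions, not the authors' exact ImageJ workflow.

```python
# Minimal sketch of the GFAP-positive area calculation described above,
# assuming a grayscale image already cropped to the region of interest
# (e.g. the 100-um band from the lesion edge or the 400-um ROI).
import numpy as np

def gfap_positive_percent(img: np.ndarray, background: float) -> float:
    """GFAP-positive area as a percentage of all pixels.

    Pixels brighter than the background level count as GFAP-positive,
    mirroring the thresholding rule applied with ImageJ in the text.
    """
    positive = img > background
    return 100.0 * positive.sum() / img.size

# Example with synthetic data; background taken as the median intensity.
rng = np.random.default_rng(0)
img = rng.normal(100, 10, size=(512, 512))   # stand-in for a fluorescence image
img[200:300, 200:300] += 50                  # a bright "GFAP-positive" patch
print(f"GFAP-positive area: {gfap_positive_percent(img, np.median(img)):.1f}%")
```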
Staphylococcal alpha toxin promotes blood coagulation via attack on human platelets.

Staphylococcus aureus plays a major role as a bacterial pathogen in human medicine, causing diseases that range from superficial skin and wound infections to systemic nosocomial infections. The majority of S. aureus strains produce α toxin, a proteinaceous exotoxin whose hemolytic, dermonecrotic, and lethal properties have long been known (1-6). The toxin is secreted as a single-chained, nonglycosylated polypeptide with an M_r of 3.4 × 10^4 (7, 8). The protein spontaneously binds to lipid monolayers and bilayers (9-14), producing functional transmembrane pores that have been sized to 1.5-2.0-nm diameters (15-18). The majority of pores formed at high toxin concentrations (20 µg/ml) is visible in the electron microscope as circularized rings with central pores of approximately 2 nm in diameter. The rings have been isolated, and molecular weight determinations indicate that they represent hexamers of the native toxin (7). We have proposed that transmembrane leakiness is due to embedment of these ring structures in the bilayer, with molecular flux occurring through the central channels (15, 19). Pore formation is dissectable into two steps (20, 21). Toxin monomers first bind to the bilayer without invoking bilayer leakiness. Membrane-bound monomers then laterally diffuse and associate to form non-covalently bonded oligomers that generate the pores. When toxin pores form in membranes of nucleated cells, they may elicit detrimental secondary effects by serving as nonphysiologic calcium channels, influx of this cation triggering diverse reactions, including release of potent lipid mediators originating from the arachidonate cascade (22-24). That α toxin represents an important factor of staphylococcal pathogenicity has been clearly established in several models of animal infection through the use of genetically engineered bacterial strains deleted of an active α toxin gene (25-27). Whether the toxin is pathogenetically relevant in human disease, however, is a matter of continuing debate. Doubts surrounding this issue originate from two main findings. First, whereas 60% hemolysis of washed rabbit erythrocytes is effected by approximately 75 ng/ml α toxin, approximately 100-fold higher concentrations are required to effect similar lysis of human cells (4-6, 13). The general consensus is that human cells display a natural resistance towards toxin attack. The reason for the wide inter-species variations in susceptibility towards α toxin is unknown but does not seem to be due to the presence or absence of high-affinity binding sites on the respective target cells (20, 21). Second, low-density lipoprotein (28) and neutralizing antibodies present in the plasma of all healthy human individuals inactivate a substantial fraction of α toxin in vitro. These inactivating mechanisms presumably further raise the concentration threshold required for effective toxin attack, and it is most unlikely that such high toxin levels will ever be encountered during infections in the human organism. The foregoing arguments rest on the validity of two general assumptions. First, the noted natural resistance of human erythrocytes to α toxin must be exhibited by other human cells.
Second, toxin neutralization by plasma components, usually tested and quantified after their preincubation with toxin in vitro, must be similarly effective under natural conditions, and protection afforded by these components must not be restricted to specific cell species.

In 1964, Siegel and Cohen (29) presented suggestive evidence that human platelets may be more susceptible towards attack by α toxin than erythrocytes. These authors described shape changes undergone by isolated platelets upon incubation with a crude toxin preparation. They also noted that such platelets apparently released (a) procoagulatory factor(s) parallel to leaking K+ and nicotinamide adenine dinucleotide in the absence of overt cell lysis. At the same time, Bernheimer and Schwartz (30) reported that α toxin induced a dramatic decrease in turbidity of rabbit platelet suspensions that, however, appeared to be due to frank platelet lysis. Since the time of these interesting observations, no further detailed studies have been conducted on the interaction of α toxin with human platelets. In the present communication, we report that human platelets indeed differ from human erythrocytes in being as sensitive towards α toxin attack as rabbit erythrocytes. It will be demonstrated that neutralizing antibodies to α toxin, although fully effective when assayed by standard procedures, fail to protect platelets against even low levels of α toxin in human blood. α toxin thus becomes the first bacterial cytolysin recognized to activate human platelets and promote blood coagulation at subcytolytic concentrations, processes of high potential relevance in staphylococcal infections.

Materials and Methods

Monomeric α toxin was recovered from a Sephacryl S-300 column after chromatography of a partially purified lyophilized toxin preparation (kindly donated by Behringwerke, Marburg, FRG), as described (15). Additionally, the toxin was purified in our laboratory from culture supernatants of S. aureus Wood 46 following a protocol that is to be reported elsewhere. These toxin preparations contained only very few residual contaminants, approximated to represent <1-2% of total protein according to densitometric analyses of SDS-polyacrylamide gels stained with Coomassie Blue (Fig. 1). The toxin solutions, with a protein content of 0.5-0.8 mg/ml, were stored in aliquots at -20°C.

Hemolysis Assays. Whole citrated human blood from healthy adults, or washed erythrocytes suspended to 50% hematocrit in PBS, were treated with 1 vol of α toxin doubly diluted in PBS. Hemolysis was quantified by measuring hemoglobin absorbance at 412 nm in the supernatant after 60 min at 37°C.

Whole Blood Samples. Blood was drawn in citrate, heparin (3 U/ml final concentration), or EDTA (10 mM final concentration) and held at room temperature until used.

Platelet Preparations. Platelet-rich plasma (PRP) was obtained from five healthy adults. (Abbreviations used in this paper: PF-4, platelet factor 4; PPP, platelet-poor plasma; PRP, platelet-rich plasma.)
Platelet-poor plasma (PPP) was obtained by recentrifuging PRP at 5,600 g for 10 min, followed by four 2-min centrifugations of the supernatants at 8,000 g in a table-top Eppendorf centrifuge. Platelet counts in PRP ranged between 2.3 and 7.5 × 10^5/µl; PPP contained <100 platelets/µl. Platelets were isolated from PRP according to Mustard et al. (31). The final platelet suspensions contained no white blood cells. The washed platelets were suspended in 135 mM NaCl, 2.7 mM KCl, 12 mM NaHCO3, 0.3 mM NaH2PO4, 0.35% (wt/vol) human serum albumin, and 0.1% (wt/vol) glucose, and were held at room temperature until used.

Measurements of Platelet Aggregation and ATP Release. These measurements were conducted simultaneously in a Lumi-Aggro-Meter (model 400, Chrono-Log Corp., Coulter Electronics, Krefeld, FRG; reference 32). The aggregometer was equipped with a double-channel Omni-Scribe RII recorder (Coulter Electronics) for continuous recording of the optical measurements. ATP was assayed by the firefly method using luciferin/luciferase reagent (50 µl reagent per assay; Chrono-Lume #395 from Chrono-Log Corp.; reference 33). Experiments were conducted with PRP and with washed platelets. PRP samples (0.4 ml) were preincubated at 37°C for 3 min before aggregation assays. Washed platelet suspensions were supplemented with 2 mM Ca2+, 1 mM Mg2+, and 30 µg/ml fibrinogen (Kabi Vitrum, Munich, FRG) before addition of the stimuli.

Measurements of Platelet Factor 4 (PF-4) Release. These measurements were conducted using a commercially available ELISA (Enzygnost PF 4, Behringwerke). Experiments were conducted with whole blood samples and isolated platelets. In the latter case, platelet suspensions were supplemented with 5 mM Ca2+ before addition of α toxin. After incubation for 20 min at 37°C, samples were centrifuged at 2,000 g (60 min, 4°C) to sediment the platelets, and PF-4 was determined in the supernatants.

Determination of Lactate Dehydrogenase (LDH). LDH was determined with a commercially available test (aca-LDH Testpack, DuPont Lab., Bad Nauheim, FRG).

Measurements of Clot Time. These were performed using a Mecrolab Clottimer 202 A (Heller Laboratories, Santa Rosa, CA). Experiments were conducted at 37°C with citrated whole blood, citrated PRP, PPP, and mixtures of PRP and PPP containing varying platelet concentrations. Clot reactions were initiated through addition of 12 mM Ca2+, plus or minus α toxin, to the samples.

Assessment of Platelet-bound Toxin. Washed platelets were suspended to 7-9 × 10^5 platelets/µl and treated with α toxin in the absence of Ca2+ and fibrinogen for 20 min at 37°C. The toxin-treated platelets were washed thrice in PBS by centrifugation at 8,800 g for 3 min in a table-top Heraeus Christ Biofuge A centrifuge. The platelets were solubilized either through addition of 50 mM Triton X-100 for quantification of bound monomers, or by boiling for 30 s in 70 mM SDS for determination of total bound toxin. The specificity and performance of the sandwich ELISA used have previously been described in detail (20, 21).

Other Reagents. ADP was obtained from Boehringer (Mannheim, FRG). Fibrinogen was from Kabi Vitrum (Munich, FRG). mAb α4C1 against α toxin has been described (20). Another mAb, 4G3, was produced that does not inhibit toxin binding but inhibits lateral aggregation and oligomer formation in the cell membrane (Hugo, F., B. Eberspacher, and S. Bhakdi, unpublished data).
Indomethacin was purchased from MSD Sharp and Dohme (Munich, FRG), and the thromboxane receptor antagonist BM 13 177 was a gift from Boehringer (Mannheim, FRG). Commercial human Ig preparations were from Sandoz (Sandoglobin; Basel, Switzerland) and Behringwerke (Beriglobin; Marburg, FRG).

Results

Hemolysis of Human Erythrocytes by α Toxin. As known from the early literature (4), washed human erythrocytes are lysed only by high concentrations of α toxin, 60% lysis of a 50% cell suspension occurring at ~12-15 µg/ml α toxin. In the presence of plasma proteins, the onset of hemolysis was markedly retarded, and comparable hemolysis of cells in whole citrated blood occurred at toxin concentrations of ~50 µg/ml (Fig. 2). Hemolysis in whole blood was always nil at 5 µg/ml and 0 to <2% at 10 µg/ml α toxin. These results reiterate the intrinsic high resistance of human erythrocytes towards toxin attack and show that plasma factors, presumably antibodies and low-density lipoprotein, further effectively protect red cells against α toxin in whole blood.

Release of PF-4 Invoked by α Toxin in Whole Blood. In sharp contrast to the resistance of erythrocytes towards lytic toxin action, human platelets present in citrated or heparinized whole blood responded to nonhemolytic levels of α toxin by release of granular constituents. Fig. 3 depicts the release of PF-4 in blood of one donor anticoagulated with citrate, heparin, or EDTA. Essentially the same patterns were reproduced with two other donors. The background levels of PF-4 measured in controls not treated with the toxin were somewhat high (200-600 ng/ml) due to the incubation of samples at 37°C.

FIGURE 3. Release of PF-4 from platelets in whole human blood anticoagulated with heparin (3 U/ml), citrate (10 mM), or EDTA (10 mM) upon treatment with subhemolytic doses of staphylococcal α toxin. PF-4 was quantified in the supernatants after a 20-min incubation with the toxin.

PF-4 release was noted at toxin levels of ~1 µg/ml and plateaued at 2.5-5.0 µg/ml toxin in heparinized and citrated blood. Unexpectedly, less PF-4 was measurable in supernatants of platelets treated with 12.5 µg/ml toxin in the presence of citrate. The presence of 10 mM EDTA abolished the effects evoked by 2.5-12.5 µg/ml toxin, whereby PF-4 measured at 12.5 µg/ml again presented the lowest values. At levels of ~1 µg/ml, α toxin appeared to induce release of very small amounts of PF-4, even in the presence of EDTA.

α Toxin Induces Aggregation of Platelets and ATP Release in PRP. In classical aggregation tests, α toxin in the same concentration range of 1-2.5 µg/ml induced platelet aggregation and ATP release in PRP from five healthy individuals. Fig. 4A depicts the classical platelet response to an ADP stimulus (34); shape change is followed first by primary aggregation without ATP release, and then by secondary, irreversible aggregation that is paralleled by ATP liberation (35). Fig. 4B depicts the platelet response of one donor evoked by 2.5 µg/ml α toxin. A virtually identical aggregation pattern was noted after a short lag-phase of ~30 s. However, ATP release occurred earlier than with ADP, coinciding with the commencement of the aggregation response. ATP release was also always enhanced compared with the ADP response. At 1 µg/ml toxin, the lag-phase was prolonged to 60-80 s, and the aggregation rate was slower. The simultaneous commencement of aggregation and ATP release is well recognizable at this threshold toxin concentration (Fig. 4C).
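The Fig. 4 legend (reproduced further below) notes that 2 µM ATP was added at the end of a run as a calibration. A minimal sketch of how such a spike converts luminescence deflections into released ATP, assuming simple proportionality between luminescence and ATP concentration (the usual premise of the firefly assay); the numeric values are invented for illustration only.

```python
# Hypothetical illustration of the firefly-luminescence calibration mentioned
# in the Fig. 4 legend: released ATP is read off by comparing the sample
# deflection with the deflection produced by a known 2 uM ATP spike.
CALIBRATION_ATP_UM = 2.0  # known ATP spike (uM)

def released_atp_um(sample_deflection: float, calibration_deflection: float) -> float:
    """Convert a luminescence deflection to ATP (uM), assuming linearity."""
    return CALIBRATION_ATP_UM * sample_deflection / calibration_deflection

print(released_atp_um(sample_deflection=35.0, calibration_deflection=50.0))  # -> 1.4 uM
```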
Aggregation always appeared irreversible, even at low toxin levels. Toxin concentrations of <1 µg/ml usually did not elicit aggregation or ATP release (maximum time of observation: 5 min). With one donor, however, a platelet response similar to that shown in Fig. 4C was observed with 0.5 µg/ml toxin. With another donor, the platelet response commenced at 2.5 µg/ml rather than at 1 µg/ml toxin. These differences are presently attributed to the varying levels of antitoxin antibodies and low-density lipoprotein in the individual plasma samples.

Release of PF-4 and ATP Is Not Due to Platelet Lysis. Aliquots of PRP containing 3 × 10^5 platelets/µl were incubated with 0-12.5 µg/ml α toxin for 15 min at 37°C. After removal of platelets by centrifugation, LDH was determined in the supernatants. A sonicated PRP sample served as positive control. Whereas a concentration of 500 U/ml LDH was measured in the latter, all α toxin-treated samples presented concentrations of 110 U/ml, identical to the LDH level of the saline control. Hence, α toxin applied in the given doses does not lyse human platelets in PRP.

FIGURE 4. Recordings of OD changes due to platelet aggregation (upper traces) and of ATP release (lower traces) in platelet-rich plasma induced by ADP (A) and staphylococcal α toxin (B and C). 2 µM ATP was added as a calibration to the sample at the end of the experiment in B. Chart speed: two and a half columns correspond to 1 min. The traces representing ATP precede the OD traces by one and a quarter columns.

Neutralizing mAbs Impart Only Partial Protection of Platelets against Toxin Action. Two mAbs that were capable of neutralizing hemolytic toxin effects were used. The mAb α4C1 binds to native toxin monomers and prevents their binding to target cells. mAb 4G3 does not inhibit toxin binding but blocks oligomerisation of membrane-bound toxin monomers. Both antibodies suppressed release of PF-4 from platelets in heparinized whole blood when preincubated with α toxin before its administration (Fig. 5A). As controls, six purified IgG murine mAbs directed against streptolysin-O or terminal C5b-9 complement complexes (for review, see reference 19) were used at similar concentrations. These irrelevant antibodies did not suppress the action of α toxin on platelets (Fig. 5B). The neutralizing capacity of both neutralizing mAbs was, however, overrun if the antibodies were not preincubated with toxin. In the aggregation experiment of Fig. 6, mAb α4C1 was used at a concentration of 10 µg/ml. If preincubated with 2.5 µg/ml toxin for 2 min at 22°C, the antibody effected total neutralization, and no platelet aggregation was noted (molar ratio of toxin:antibody, ~1:1). Upon posttreatment with an additional 2.5 µg/ml toxin, aggregation ensued (Fig. 6A). If antibodies were applied simultaneously with the toxin, however, protracted aggregation occurred (Fig. 6B) after a slightly prolonged lag-phase. If given 30 s after toxin application, the mAb was totally incapable of preventing platelet aggregation (Fig. 6C). These results demonstrate that α toxin binds rapidly to platelets, and that neutralizing antibodies are rather ineffective in protecting platelets against toxin action. At the same time, these results show that the noted platelet responses are due to binding and oligomerisation of α toxin, and not to a contaminant possibly contained in the toxin preparation.
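As a quick plausibility check of the ~1:1 molar ratio quoted above, the mass concentrations can be converted to molarities; the toxin M_r of 34 kDa is stated in the introduction, while the ~150 kDa for an intact IgG is a typical textbook value assumed here rather than one given in the text.

```python
# Back-of-envelope check of the ~1:1 toxin:antibody molar ratio quoted above.
# M_r = 34 kDa for the alpha-toxin monomer (from the introduction); 150 kDa
# for an intact IgG is an assumed typical value, not stated in the paper.
toxin_mass_per_ml = 2.5e-6   # g/ml (2.5 ug/ml toxin)
mab_mass_per_ml = 10.0e-6    # g/ml (10 ug/ml mAb alpha-4C1)

toxin_molar = toxin_mass_per_ml / 34_000   # mol/ml
mab_molar = mab_mass_per_ml / 150_000      # mol/ml

print(f"toxin : antibody = {toxin_molar / mab_molar:.2f} : 1")
# -> roughly 1.1 : 1, consistent with the ~1:1 ratio in the text
```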
Preincubation of α toxin with any of the six irrelevant mAbs (molar ratio of toxin to antibody, ~1:3) failed to affect toxin-induced aggregation.

Response of Washed Platelets to Toxin Attack. Isolated platelets exhibited a yet higher susceptibility towards α toxin, aggregation and ATP release already commencing at toxin levels of 50-100 ng/ml and always being maximal at 0.5-1.0 µg/ml. Release of PF-4 was also noted at these low toxin concentrations. If platelet suspensions were reconstituted with preparations of pooled human Igs (final IgG concentration: 10 mg/ml), the toxin levels required to elicit aggregation and ATP release returned to the region of ~1-2 µg/ml. These results indicate that human IgG antibodies impart partial protection against the platelet-activating effects of α toxin. However, as noted in whole blood and PRP, this protection ends at a relatively low threshold concentration in the range of 1-2 µg/ml α toxin.

Toxin-dependent Platelet Stimulation Bypasses the Cyclooxygenase Pathway. The presence of 50 µM indomethacin (cyclooxygenase inhibitor) or 5 µM BM 13 177 (thromboxane receptor blocker) abrogated ATP release and secondary platelet aggregation induced by ADP (Fig. 7, A and B). However, neither inhibitor was able to influence the platelet response to α toxin (Fig. 7, C and D). Hence, toxin-induced platelet activation bypasses the cyclooxygenase pathway (36-38) and is thromboxane independent.

FIGURE 6. Inhibition of platelet aggregation and ATP release by mAbs. (A) α toxin was preincubated with mAb α4C1 at a molar ratio of 1:1 (toxin/mAb) for 2 min at 22°C and then added to PRP (final toxin concentration in PRP: 2.5 µg/ml); no toxin effects were discerned. Addition of another 2.5 µg/ml toxin to the sample resulted in platelet aggregation and ATP release. (B) Toxin and mAb were applied simultaneously in the same doses as in A. In this case, the mAb could not prevent platelet activation and aggregation. (C) Toxin (2.5 µg/ml) was applied 30 s before the mAb; no protective effect of the antibody was observed.

FIGURE 7. Toxin activation of platelets bypasses the cyclooxygenase pathway. PRP samples were given 50 µM indomethacin (A and C) or 5 µM of the thromboxane receptor blocker BM 13 177 (B and D) and treated with 2 × 10^-5 M ADP (A and B) or 2.5 µg/ml α toxin (C and D). ADP-dependent secondary aggregation and ATP release were blocked by both agents, whereas toxin-induced effects remained unchanged.

Quantitation of Toxin Binding to Platelets. Washed platelets were suspended in buffer without Ca2+ and fibrinogen and treated with α toxin, and bound toxin was subsequently quantified by ELISA. As shown in Fig. 8, measurable binding of α toxin to washed platelets commenced at levels of ~100 ng/ml and increased with the amount of toxin offered. The binding exhibited no recognizable saturation and displayed no characteristics of a receptor-ligand interaction; the total net binding was calculated to be ~10% at all toxin concentrations between 0.5 and 12.5 µg/ml. A similar binding behavior was previously noted with rabbit erythrocytes (21). The ELISA permitted quantitative differentiation between toxin monomers and oligomers.

FIGURE 8. Quantitation of the binding of α toxin to isolated human platelets. A suspension of platelets (9 × 10^5/µl) was given α toxin at the depicted final concentrations, and platelet-bound toxin was quantified by ELISA, which differentiated between monomeric and total toxin. Net binding was ~10% of the total toxin offered in each sample, and no saturability of binding was observed in the measured range.

At all toxin doses applied, monomers constituted ~2% of total toxin. The number of oligomeric toxin molecules bound at the lowest toxin concentration (100 ng/ml) was below the detection limit of the ELISA. Assuming that the major population of cell-bound oligomers represented hexamers, we estimate that ATP release and platelet aggregation commence upon average binding of <10 toxin hexamers and are maximal upon binding of <100 hexamers per platelet (Fig. 8). These results emphasize the high efficiency of α toxin in activating human platelets.

α Toxin Accelerates Blood Coagulation. These experiments were conducted with citrated blood, PRP, and PPP. Clotting was initiated by recalcification in the absence or presence of α toxin. As shown in Fig. 9, the presence of α toxin dose-dependently enhanced the rate of clot formation, significant effects already being noted at a 1 µg/ml toxin concentration. Maximal reductions of clot times of ~70% occurred when the toxin was applied to PRP at levels of 2.5-5 µg/ml. PPP exhibited no response to α toxin; hence, the acceleration of clot formation was dependent on the presence of platelets. Reductions in clot times similar to those observed in normal PRP (containing 2.3 × 10^5 platelets/µl) were already observed in plasma containing 2.3 × 10^4 platelets/µl, and a clot-enhancing effect of α toxin was noted even at platelet counts of 1-2 × 10^3/µl. Similar acceleration of clot formation was observed upon addition of α toxin to whole citrated blood. The reductions in clot times were totally abrogated if α toxin was preincubated with mAb α4C1 (two molecules of antibody per molecule of toxin, 2 min, 22°C).

Discussion

Unexpected aspects regarding the action and potential pathophysiological relevance of a major bacterial cytolysin in the human organism arise from the present study. The dogma that human cells are generally insensitive towards staphylococcal α toxin attack must be rectified. In fact, human platelets exhibit susceptibility towards the toxin similar to that of rabbit erythrocytes, responding to toxin levels as low as 50 ng/ml. The platelet reaction comprises a classical irreversible aggregation upon stirring of PRP at 37°C in an aggregometer, paralleled by release of α-granule constituents (39-44), documented in the present study through measurements of liberated PF-4. Toxin-treated platelets also released ATP, although differentiation between ATP released from the cytosol as opposed to release from dense bodies was not yet undertaken. Binding of α toxin to platelets is rapid, ensuing within 30-90 s at 37°C. Elicitation of the platelet reaction requires not only binding of monomers but also formation of membrane-bound oligomers, since it can be prevented by preincubation of toxin with two mAbs, one acting to inhibit toxin binding, the other blocking toxin oligomerisation. The rapidity and high efficiency of toxin action on platelets could partially account for the second unexpected finding, that neutralizing antibodies fail to effectively protect platelets against α toxin. In PRP and whole blood, the threshold concentration required for successful toxin attack is raised in the presence of human antibodies to only ~1-2 µg/ml. In contrast, red cell lysis in whole blood is usually nil even at toxin concentrations of 10 µg/ml.
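For readers reproducing the hemolysis numbers contrasted here, the assay in Materials and Methods reads hemoglobin absorbance at 412 nm. A minimal sketch of the percent-hemolysis calculation follows; the buffer blank and fully lysed reference sample are standard practice assumed for illustration, not details spelled out in the paper's methods.

```python
# Percent hemolysis from A412 readings, relative to a fully lysed control.
# The blank and total-lysis controls are assumed (standard practice), not
# taken verbatim from the paper's methods.
def percent_hemolysis(a412_sample: float, a412_blank: float, a412_total: float) -> float:
    return 100.0 * (a412_sample - a412_blank) / (a412_total - a412_blank)

# Example: a reading halfway between blank and total lysis -> 50% hemolysis.
print(percent_hemolysis(a412_sample=0.55, a412_blank=0.05, a412_total=1.05))
```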
We have found that human white blood cells display a similar resistance towards α toxin as erythrocytes (unpublished data). Our results thus confirm the suspicion of Siegel and Cohen (29) that they had "identified a human cell (or cell fragment) highly susceptible to this agent," and identify platelets as the primary cell targets for α toxin attack in human blood. The cause of the high susceptibility of platelets compared with other blood cells is unknown. The unsolved enigma regarding the widely differing susceptibility of various cell targets to toxin attack has recently been discussed in detail (21, 45), and we have no new data to justify any further speculation at present. The binding data obtained in this study are very similar to those found for toxin binding to rabbit erythrocytes and speak against the presence of specific, saturable receptors on the platelets. As discussed previously (21, 45), we believe that surface properties of the respective cell target, such as the density and orientation of charged groups, are important in determining the concentration threshold at which successful toxin attack will occur. Even when this takes place, overall net toxin binding will be rather ineffective, and toxin molecules will not be quantitatively taken up by the cell targets. In the present study (Fig. 8), we estimate that only ~10% of the toxin offered in solution becomes platelet-bound, the bulk of toxin thus remaining available for attack on other cells. The inefficiency of α toxin binding to rabbit platelets was indeed already noted by Bernheimer and Schwartz in 1965 (30). These authors showed that continued loss of turbidity occurred when several aliquots of rabbit platelet suspensions were consecutively added to a toxin solution. The precise nature of the toxin-induced reactions of human platelets has not yet been delineated, but it appears probable that the toxin forms transmembrane pores and that activation results from influx of calcium ions. The release of α-granule constituents and ATP elicited by α toxin is not due to simple lysis of the platelets, since no release of cytoplasmic LDH was noted even at 12.5 µg/ml α toxin. This finding fully confirms the early report of Siegel and Cohen (29), who registered rapid K+ and NAD efflux from toxin-treated platelets in the absence of protein release. Electron microscopic studies are presently underway in this laboratory, and preliminary results also indicate that platelet lysis does not occur at the given toxin concentrations. Analogous, calcium-dependent cell activation after binding of pore-forming cytolysins to membranes has been demonstrated in several recent studies. Examples include the stimulation of arachidonate metabolism invoked by α toxin in endothelial cells (23) and leukocytes (24), and by complement C5b-9 in various cell targets (e.g., references 46-49). With special regard to platelets, it is notable that earlier studies by Polley and Nachman (50, 51), more recently followed up by Hansch et al. (52) and Wiedmer and Sims (53, 54), have similarly demonstrated platelet activation by C5b-9. In our present study, toxin-induced activation unsurprisingly exhibited a requirement for Ca2+, and release of PF-4 by 2.5-12.5 µg/ml α toxin was totally abrogated in the presence of EDTA. Why PF-4 release is apparently diminished when platelets in citrated and EDTA-anticoagulated PRP are given high doses of toxin (12.5 µg/ml) is unclear.
We currently speculate that this may be due to rapid co-diffusion of citrate and EDTA across the toxin pores into the cells. The cause of the minimal PF-4 release observed in EDTA at marginal toxin levels (~1 µg/ml) also remains unknown. The failure of indomethacin and the thromboxane receptor blocker to suppress the platelet response to α toxin contrasts with the effects of these inhibitors on ADP-dependent stimulation and indicates that the platelet response to α toxin bypasses the cyclooxygenase/thromboxane pathway (36-38). These preliminary results are presented because the realization that the described processes are refractory towards inhibition by related pharmacotherapeutic agents may be of practical importance. From a positive viewpoint, α toxin could become valuable as a membrane-permeabilizing agent for probing the minimal requirements for granule exocytosis in platelets. Several recent studies have already begun to exploit the use of this toxin in the study of exocytic processes (e.g., reference 55). The toxin could also be used as a tool to probe the importance of thromboxane in the induction of platelet aggregation (56, 57). A priori, it may not be surprising that a cytolysin that generates transmembrane pores stimulates platelets. The unexpected findings made in the present study relate to the extreme susceptibility of human platelets and to the capacity of α toxin to evoke procoagulatory responses in these cells in whole blood at low concentrations, such as may be expected to be present in tissues during severe infections with S. aureus and systemic disease. In vitro, α toxin can reduce clotting times by up to 70%, an effect that is dependent on the presence of platelets. In vivo, α toxin may thus act synergistically with staphylocoagulase to cause local thrombus formation. In severe deep infections and septicemia, systemic platelet responses to α toxin might even contribute to the pathogenesis of disseminated intravascular coagulation. The present study is a long-overdue continuation of the pioneering work of Siegel and Cohen (29) and Bernheimer and Schwartz (30). It is the first demonstration that low, nonhemolytic levels of a bacterial cytolysin can promote coagulation by selectively activating platelets in human blood.

Summary

Staphylococcal α toxin in the nonhemolytic concentration range of ~1 µg/ml binds to and stimulates platelets in human blood. After addition of the toxin to stirred platelet-rich plasma, a short lag-phase (30-70 s) is followed by platelet shape change and irreversible aggregation. The stimulation of platelets in whole citrated or heparinized blood has also been demonstrated by measurements of platelet factor 4 release. Aggregation and release of granule constituents are not inhibitable by indomethacin or by the thromboxane receptor blocker BM 13 177. Washed human platelets are sensitive to even lower concentrations of 0.05-0.10 µg/ml α toxin. In the presence of human IgG antibodies, the threshold for effective toxin attack returns to levels of ~1 µg/ml. An mAb that inhibits toxin binding to cells totally suppresses platelet activation if preincubated with the toxin, but not if applied simultaneously with it. Activation of washed platelets correlates with binding of toxin oligomers to the cells, maximal activation occurring upon binding of an average of <100 hexamers per platelet. When added to recalcified citrated blood or to platelet-rich plasma, α toxin reduces clotting times by up to 70%; this effect is dependent on the presence of platelets.
The collective data identify platelets as primary targets for α toxin attack in human blood and unmask its potential to disturb the balance of hemostasis, which may be of pathophysiological relevance during severe local and systemic staphylococcal infections in the human host.
Lepton Flavor Violation with Displaced Vertices

If light new physics with lepton-flavor-violating couplings exists, the prime discovery channel might not be $\ell\to\ell'\gamma$ but rather $\ell\to\ell' X$, where the new boson $X$ could be an axion, majoron, familon or Z' gauge boson. The most conservative bound then comes from $\ell\to\ell'+\mathrm{inv}$, but if the on-shell $X$ can decay back into leptons or photons, displaced-vertex searches could give much better limits. We show that only a narrow region in parameter space allows for displaced vertices in muon decays, $\mu\to e X, X\to \gamma\gamma, ee$, whereas tauon decays can have much more interesting signatures.

INTRODUCTION

The Standard Model (SM) brings with it the accidental conservation of baryon number B and the individual lepton numbers L_e, L_µ, L_τ. The linear combination B + Σ_α L_α is broken at a non-perturbative level [1], and the differences L_α − L_β are clearly violated by neutrino oscillations [2]. Despite that, we have yet to observe a lepton-flavor-violating (LFV) process involving charged leptons, which is, without additional assumptions, decoupled from neutrino oscillations and thus a perfect signature of new physics [3]. If such LFV is mediated by a light boson X, two situations arise:

1. If X is stable on detector scales or decays invisibly, the most conservative limits come from ℓ → ℓ' + inv.

2. If X decays into visible particles, e.g. X → e+e− or X → γγ, much better limits could be obtained as long as the decay happens inside of the detector. This typically involves a reconstruction of the displaced vertex (DV) of X → vis and thus different cuts and triggers than usual. We stress that the signatures are background-free, both due to their LFV nature and due to the DV.

Similar considerations hold for LFV τ decays, which allow for many more visible DV channels, including X → hadrons. Invisible τ → ℓX decays have been studied at ARGUS [29] (see also older limits in Refs. [30, 31]) and are under investigation at Belle [32]. LFV decays with DV are only possible in certain kinematical regions, e.g. 2m_e < m_X < m_µ − m_e for µ → eX, X → ee, and furthermore require the X decay length to be larger than the experimental vertex resolution and smaller than the detector. This leaves a sliver of testable parameter space where limits can be put on BR(µ → eX, X → ee), illustrated in Fig. 1 (see later for details). Since sub-GeV particles X with couplings to leptons or photons are strongly constrained by other experimental searches, it is not obvious that there is viable parameter space for LFV DV. As we will see below, there is only a small feasible region for muon decays, whereas tauon decays are much less constrained and can have a plethora of interesting signatures. The focus of this letter will be these LFV decays with DV. Existing work is scarce; we are not aware of any analyses for τ, but there is an old limit from SINDRUM on BR(µ → eX, X → ee) of order 10^-11 [42] and a thesis within the MEG collaboration on BR(µ → eX, X → γγ) with a limit of order 10^-11 [43]. We expect Mu3e [39] to vastly improve at least the SINDRUM limit, and encourage searches for these kinds of LFV τ decays at B factories.

FIGURE 1. Signatures of µ → eX, X → ee depending on the X mass and Xee coupling strength. For m_X < 2m_e, or if X decays outside of the detector, the signature is mainly µ → e + inv (blue region). For m_X > m_µ − m_e, or if the X decay length cτ is smaller than the vertex resolution, the signature is just prompt µ → 3e (red region). In between, a region with a detectable displaced X → ee vertex exists (green), which of course depends on the detector geometry and acceptance.
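The three regions of Fig. 1 follow from simple kinematics and the boosted decay length. A minimal Python sketch of this classification logic is given below; the vertex resolution and detector size are illustrative placeholders, not experiment-specific values.

```python
# Classify the expected mu -> e X signature along the lines of Fig. 1, given
# the X mass and its boosted decay length. Detector scales are placeholders.
M_E, M_MU = 0.000511, 0.1057  # GeV

def signature(m_x_gev: float, boosted_decay_length_cm: float,
              vertex_resolution_cm: float = 0.02, detector_size_cm: float = 50.0) -> str:
    if m_x_gev < 2 * M_E or boosted_decay_length_cm > detector_size_cm:
        return "mu -> e + invisible"       # blue region: X cannot decay to ee, or escapes
    if m_x_gev > M_MU - M_E or boosted_decay_length_cm < vertex_resolution_cm:
        return "prompt mu -> 3e"           # red region: X off-shell or decay looks prompt
    return "displaced X -> ee vertex"      # green region

print(signature(0.05, 1.0))  # -> displaced X -> ee vertex
```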
FRAMEWORK

We focus our analysis on pseudo-Goldstone bosons X,[1] whose couplings to the SM leptons ℓ = e, µ, τ can be conveniently parametrized by the derivative coupling (∂_µX/2Λ) ℓ̄_α γ^µ (g_V^{αβ} + g_A^{αβ} γ_5) ℓ_β [17], with some effective scale Λ and hermitian (antihermitian) coupling matrix g_A (g_V), to be assumed real in the following. Integrating by parts and using the equations of motion trades this for Yukawa-like couplings X ℓ̄_α(...)ℓ_β proportional to the lepton masses, which is justified for on-shell particles. In the case of leptonic familons [12-14], Λ corresponds to the scale of the broken global flavor symmetry, and the matrix structure of g_{A,V} is determined by the symmetry generators [17]. However, these couplings arise even in simple unflavored singlet-majoron models at one-loop level [16] and depend on the seesaw parameters [24]; in fact, measuring these majoron couplings could make it possible to reconstruct the seesaw parameters without having to detect the heavy neutrinos. Diagonal as well as off-diagonal couplings to charged leptons are thus a generic part of many models, and in particular relevant for neutrino-mass models with global symmetries. We assume the mass of X, m_X, to be an independent parameter.

[1] Vector bosons Z' will behave similarly to pseudoscalars, since for a light Z' only the longitudinal component is produced in ℓ_α → ℓ_β Z' [10]. The main difference is then in the Z' decay, which in particular does not allow for γγ. Light CP-even scalars look qualitatively similar and will typically mix with the Higgs boson, leading to additional couplings [44-46].

It proves convenient to define effective LFV scales Λ_αβ by absorbing the coupling strengths into Λ (schematically Λ_αβ ~ 2Λ/|g_{αβ}|). The LFV two-body decay rate Γ(ℓ_α → ℓ_β X) then scales as m_α^3/Λ_αβ^2, and the boson decay rate Γ(X → ℓ_α ℓ_β) as m_X m_α^2/Λ_αβ^2, the latter expression being valid for m_β ≪ m_α. The decay X → γγ induced by a fermion loop is typically suppressed but of course becomes the dominant decay channel for m_X < 2m_e [47]. We will simply assume an effective photon coupling (g_γγ/4) X F_{µν} F̃^{µν} [2], with field-strength tensor F_{µν} and its dual F̃^{µν} = ½ ε^{µναβ} F_{αβ}. The coupling g_γγ, with mass dimension −1, could be generated by a triangle anomaly analogous to axions or via mixing with the longitudinal Z component as in majoron models [24, 48]. In addition to the decays into leptons and photons, one could easily imagine invisible decays (into neutrinos or additional new light particles) or decays into hadrons (for sufficiently heavy X). To simplify the analysis we will neglect these channels.

The relevant quantity for DV is the decay length in the laboratory frame, in which X is typically boosted. For LFV with muons (e.g. at MEG or Mu3e), the muon is stopped before it decays into eX, so X has the momentum |p_X| = λ^{1/2}(m_µ^2, m_e^2, m_X^2)/(2m_µ) in the lab frame, with the Källén function λ, leading to the boosted decay length

L = γβ cτ = (|p_X|/m_X) cτ . (8)

Now P(x) = exp(−x/γcτ) is the probability for X to travel a distance x without decaying. Note that the inclusion of additional X decay channels can only shorten the decay length, rendering the decay more prompt and reducing the rate by a factor 1 − BR(X → inv). For tau decays (e.g. in Belle or LHCb) the situation is more complicated, because the particles do not decay at rest in the lab frame. We will leave a dedicated analysis to our experimental colleagues but nevertheless approximate the tau as at rest in the following. The additional boost can increase or decrease the physical decay length, depending on the direction of X emission in the tau frame. Since we will see that the parameter space for tau decays is wide open, our conclusions should be qualitatively correct.

MUON DECAYS

We start our analysis with LFV muon decays, which kinematically allow for µ → eX with X → γγ or X → ee.
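To make the decay-length formula (Eq. (8)) concrete before turning to the individual channels, here is a short numerical sketch for µ → eX with the muon at rest. Only standard two-body kinematics is used; the proper lifetime cτ is taken as a free input rather than computed from a specific g/Λ model point.

```python
# Boosted decay length of X from mu -> e X with the muon at rest (Eq. (8)).
# Standard two-body kinematics; ctau is an input, not derived from couplings.
from math import sqrt

M_MU, M_E = 0.105658, 0.000511  # GeV

def kallen(a: float, b: float, c: float) -> float:
    return a*a + b*b + c*c - 2*(a*b + b*c + c*a)

def boosted_decay_length_cm(m_x: float, ctau_cm: float) -> float:
    p_x = sqrt(kallen(M_MU**2, M_E**2, m_x**2)) / (2 * M_MU)  # |p_X| in GeV
    return (p_x / m_x) * ctau_cm                              # L = (gamma*beta) ctau

# Example: m_X = 30 MeV with ctau = 1 mm gives L ~ 1.6 mm.
print(boosted_decay_length_cm(m_x=0.030, ctau_cm=0.1))
```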
Decay µ → eX, X → γγ

Assuming only g_eµ and g_γγ to be non-zero, the branching ratio BR(µ → eX, X → γγ) follows in the narrow-width approximation [47] (Eq. (9)), and the boosted decay length of X → γγ follows from Eq. (8) (Eq. (10)). The experiment of choice for this decay chain is MEG [43], due to its better photon detection compared to Mu3e. While MEG's detector geometry should allow for reconstructed vertices up to the meter scale, we can see from Fig. 2 (upper left) that such large decay lengths are incompatible with beam-dump data. Limits on g_γγ, rederived and updated recently in Refs. [49-51], are in fact so strong that they exclude X masses below 20 MeV and decay lengths longer than a cm. Future experiments such as NA62, Belle-II, and SHiP [50, 51] can push this limit to 0.1 cm (see also Ref. [53] for LHC prospects).

FIGURE 2. Left: In black we show contours of the boosted decay length γcτ of X → γγ, assuming X to be produced from an at-rest muon decay µ → eX (upper panel) or tauon decay τ → eX (lower panel). The solid black line corresponds to γcτ = 0.01 cm, the dotted one to 0.1 cm, the dashed one to 1 cm, and the dot-dashed one to 10 cm. Right: same as left, but for a scalar X with coupling to electrons, so the contours are for the boosted decay length γcτ of X → ee.

While vertex resolutions of order cm might be possible at MEG(-II), many of the decays will appear prompt, but they still have a different energy distribution from the general three-body decay µ → eγγ. For not-too-heavy X, the positron energy will actually be similar to that from µ → eγ, for which MEG is optimized, which should improve the efficiency of this search. Assuming an improvement of the 30-year-old Crystal-Box limit [28] by an order of magnitude with MEG(-II), i.e. a reach down to BR(µ → eX, X → γγ) ≲ 10^-11 for sufficiently prompt X decays [43], this corresponds to LFV scales Λ_µe ≳ 10^12 GeV. For comparison, BR(µ → eX) with invisible X decay currently gives a lower limit Λ_µe ≳ 10^9 GeV; if this were to be improved to BR(µ → e + inv) ≲ 10^-8 with Mu3e [41], one could push this to Λ_µe ≳ 10^11 GeV. This illustrates nicely how much limits on BR(µ → eX) can be improved if X decays back into observable particles within the detector. Optimistically, the observation of LFV DV allows us to determine three quantities: the invariant γγ mass gives m_X, the total rate of µ → eX, X → γγ gives Λ_µe via Eq. (9), and the decay length gives g_γγ via Eq. (10), i.e. the region in Fig. 2. This is the reason why LFV DV is such an interesting signature to pursue.

Decay µ → eX, X → ee

Setting all X couplings but g_eµ and g_ee to zero allows us to determine the X decay length of µ → eX, X → ee from Eq. (8) and to compare with existing limits on g_ee. At one loop, X contributes negatively to leptonic magnetic moments [54], so we can obtain a bound from (g − 2)_e [55]. We will not bother deriving collider constraints on g_ee (e.g. e+e− → γX, X → e+e− [56]) because they are not relevant for our region of interest. The most important constraints come once more from beam dumps [54, 57], which again prohibit decay lengths longer than a cm; see Fig. 2 (upper right). Note that Mu3e should be able to set a limit on g_ee via µ+ → e+ ν̄_µ ν_e X, X → e+e− without LFV, analogous to the dark-photon case discussed in Ref. [58]. Similar to the diphoton decay, µ → eX, X → ee with a DV around a cm could potentially be distinguished from prompt decays at future experiments such as Mu3e, but this requires a dedicated analysis. The X mass is necessarily large in this region; for instance, from Fig. 2 one can read off that decay lengths below 1 mm correspond to allowed masses m_X ≳ 15 MeV. Since the decay length is rather short, many of the decays will pass the cuts for prompt µ → 3e. The light-physics origin can nevertheless leave a trace in the Breit-Wigner X peak of the invariant e+e− mass. This is of course a very optimistic scenario in which we observe so many LFV events that we can determine the differential distributions. Mu3e aims to improve the BR limit on prompt µ → 3e decays down to 10^-16 [39]; prompt-enough µ → eX, X → ee decays should then naively be probed well below 10^-14, which corresponds to limits on Λ_µe up to
Mu3e aims to improve the BR limit on prompt µ → 3e decays down to 10^-16 [39]; prompt-enough µ → eX, X → ee decays should then naively be probed well below 10^-14, which corresponds to limits on Λ_µe up to 8 × 10^13 GeV if BR(X → ee) ≃ 1. This is the highest testable LFV scale in our analysis.

(Fig. 2 caption, in part: limits on g_γγ [49-51]. In black we show contours of the boosted decay length γcτ of X → γγ, assuming X to be produced from an at-rest muon decay µ → eX (upper panel) or tauon decay τ → eX (lower panel). Here, the solid black line corresponds to γcτ = 0.01 cm, the dotted one to 0.1 cm, the dashed one to 1 cm and the dot-dashed line to 10 cm. Right: same as left, but for a scalar X with coupling to electrons, so the contours are for the boosted decay length γcτ of X → ee.)

With non-zero couplings g_eµ and g_ee, the boson X unavoidably contributes to µ → eγ at loop level. Defining a loop function that is 1 + O(x) for small x, the µ → eγ branching ratio takes a simple form [59]; to lowest order in the electron mass it comes out at the level of BR(µ → eγ) ∼ 10^-18 for the scales considered here. This is unobservably small for the values probed by µ → eX and µ → eX, X → ee, illustrating the importance of light-boson searches.

For µ → eX, X → ee there is potentially a second region of interest, with g_ee couplings below the beam-dump limits. This region was excluded by the constraints from the supernova SN1987A for the diphoton channel (Fig. 2, upper left), but the situation is different here. Supernova limits on g_ee have of course been derived early on [60], but usually in the context of axions, where the coupling to quarks is dominant. Thus, while supernova constraints have been significantly improved and refined for most other light-new-physics models and couplings [61], there has been surprisingly little progress for g_ee. While we naively expect the supernova limit to overlap with the beam-dump limits as in most cases, the recent evaluation of the scalar coupling Xee in Ref. [62] shows that a gap between them is also possible. An updated constraint on our g_ee following, for example, Ref. [61] goes beyond the scope of our letter but is certainly a worthwhile endeavor.

Let us for now assume that there is viable parameter space around Λ_ee ∼ 30 TeV, i.e. just below the beam-dump limits. This corresponds to decay lengths above 10^3 cm, which is of course far outside of the detector. Nevertheless, the probability for X to decay within the detector is not necessarily very small, roughly 1 − P(x) ≃ x/γcτ. The effective branching ratio for X decay inside the detector is then BR(µ → eX) × BR(X → ee) × l_dec/γcτ. Now, BR(µ → eX) is expected to be pushed down to 10^-8 in Mu3e [41], while l_dec/γcτ can be as big as 10^-3. Thus, effective branching ratios BR(µ → eX, X → ee) with a DV in the detector can be as big as 10^-11. Compared to the case of rather short-lived X discussed before, very few of the X → ee decays will appear prompt here, with most decays at the edge of the detector. This will reduce the efficiency of the search, but should still allow the limit to be improved below 10^-11 with a dedicated analysis. If µ → eX is observed, a search for µ → eX, X → ee becomes of course obligatory. Pending updated supernova constraints, there is room for LFV DV µ → eX, X → ee anywhere in the Mu3e detector.

TAUON DECAYS

The same analysis as before can be made for tauons, with current LFV limits coming mostly from BaBar and Belle, about to be improved with LHCb and Belle-II [3].

Decay τ → ℓX, X → γγ

In complete analogy to the muon case, we can study τ → ℓX, X → γγ by setting all other X couplings to zero.
Note that the cases ℓ = e and ℓ = µ differ only near the phase-space closure m_X ≈ m_τ − m_ℓ. As can be seen from Eq. (8), the X decay length is boosted by a factor m_τ/m_µ compared to the µ → eX decay, easily allowing for decay lengths of order 10 cm. In addition, the kinematically accessible masses m_X ∼ GeV can evade beam-dump limits altogether and essentially allow for arbitrarily long or short X decay lengths (Fig. 2, lower left). Thus, even without proper knowledge of the tau momentum distribution in the lab frame or the vertex resolution of Belle or LHCb, we can confirm the possibility of LFV DV tauon decays and encourage a dedicated search.

Decay τ → ℓX, X → ee

Just like for τ → ℓX, X → γγ, the large tauon mass boosts the τ → ℓX, X → ee decay length out of the highly constrained beam-dump region, allowing once more essentially arbitrary DV. Current limits on the prompt decays τ → ℓee are of order 10^-8 [2], which corresponds to scales Λ_τℓ ∼ 10^9 GeV if the X decay is sufficiently prompt and BR(X → ee) ≃ 1. The sensitivity should not suffer much for the case of DV, as long as most X decay within the detector.

The large tauon mass allows for a plethora of kinematically possible X decays. Focusing on the muon decay X → µµ, the main constraint on that coupling comes from the muon's magnetic moment. At one loop, X contributes with a negative sign to (g − 2)_µ [54], whereas the experimental value is infamously larger than the SM prediction by about 3σ. Using very conservatively the 5σ constraint from (g − 2)_µ, we obtain a lower limit Λ_µµ ≳ 500 GeV (93 GeV) for m_X ≪ m_τ (m_X = 1.5 GeV). This poses no problem for LFV DV, seeing as a large decay length requires a much larger Λ_µµ:

γcτ ≃ 10 cm × (GeV/m_X)^2 × (Λ_µµ/10^6 GeV)^2 .

The parameter space for LFV with muon DV is thus wide open and ready to be explored. We expect the same sensitivity as for the ee mode discussed above.

The last remaining LFV decay with a displaced (neutral) vertex with leptons is τ → eX, X → µe. Since m_µ + m_e < m_X by construction, there are no µ → eX constraints to limit Λ_µe, so the main constraint comes from (g − 2)_µ again, which is weak. It is hence completely possible to have τ+ → e+X, X → µ∓e± with a DV deep inside the detector, which has ample observables to identify it and should be a very background-free decay.

CONCLUSION

The search strategies for rare lepton-flavor-violating decays have historically been motivated by heavy new physics, allowing for the use of effective field theory. While this indeed covers an immense region of model space, the absence of any signal so far implores us to challenge this approach. An obvious loophole comes in the form of light new particles with flavor-violating couplings, which could be produced on-shell and travel a finite distance in the detector before decaying back into known particles. Here we have shown that such LFV displaced vertices are indeed possible for tauon decays τ → ℓX, X → ℓ'ℓ'', γγ, with essentially unconstrained decay length. For muon decays µ → eX, X → γγ, beam-dump experiments and supernova data already constrain the decay length to be below a cm, rendering these decays fairly prompt. The µ → eX, X → ee mode requires a dedicated re-analysis of supernova limits to evaluate the potential displaced-vertex lengths. We urge our experimental colleagues to perform dedicated searches for rare LFV DV decay channels, e.g. at MEG(-II), Mu3e, and Belle(-II).
2017-11-28T12:58:02.000Z
2017-10-05T00:00:00.000
{ "year": 2017, "sha1": "2d68eb5b2c15003fe72f7fb1dd859d1e7d33abe8", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1016/j.physletb.2017.11.067", "oa_status": "GOLD", "pdf_src": "Arxiv", "pdf_hash": "6ab018b243497569b32edd194deead729437e832", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
210975224
pes2o/s2orc
v3-fos-license
Neuroevolutionary learning in nonstationary environments

This work presents a new neuroevolutionary model, called NEVE (Neuroevolutionary Ensemble), based on an ensemble of Multi-Layer Perceptron (MLP) neural networks for learning in nonstationary environments. NEVE makes use of quantum-inspired evolutionary models to automatically configure the ensemble members and combine their outputs. The quantum-inspired evolutionary models identify the most appropriate topology for each MLP network, select the most relevant input variables, determine the neural network weights and calculate the voting weight of each ensemble member. Four different approaches of NEVE are developed, varying the mechanism for detecting and treating concept drifts, including proactive drift detection approaches. The proposed models were evaluated on real and artificial datasets, comparing the results obtained with other consolidated models in the literature. The results show that the accuracy of NEVE is higher in most cases and that the best configurations are obtained using some mechanism for drift detection. These results reinforce that the neuroevolutionary ensemble approach is a robust choice for situations in which the datasets are subject to sudden changes in behaviour.

Introduction

The ability of a classifier to learn from incremental and dynamic data extracted from a nonstationary environment (one in which the data distribution changes over time) poses a challenge to the field of computational intelligence. In the context of neural networks, the problem becomes even more complicated, since most of the existing models must be retrained when a new data block is available, using the whole set of patterns learned until then. To cope with that sort of problem, a classifier must, ideally, be able to [43]:

- Track and detect any changes in the underlying data distribution;
- Learn from new data without the need to present the whole dataset to the classifier again;
- Adjust its own parameters in order to address the detected changes in the data;
- Forget what has been learned when that knowledge is no longer useful for classifying new instances.

All these abilities seek, in one way or another, to deal with a phenomenon called concept drift [51,22]. This phenomenon characterizes datasets that suffer changes over time, such as when there is a change in the relevance of the variables, or when the mean and variance of the variables change.

Many approaches have been devised to accomplish some or all of the abilities mentioned above. One of the older and simpler approaches is a sliding window (not always continuous) on the input data, used to train the classifier with the data delimited by this window [21] (a minimal sketch is given below). Another method is to detect deviations and, if they occur, to adjust the classifier [7]. Some models, in turn, use rule-based classifiers, as in [43, 59-61]. A more successful and widely used approach, though, is to use a group of different classifiers (an ensemble) to cope with changes in the environment. Several different ensemble models have been proposed in the literature, including recent approaches like [56-58], and they may or may not weigh each of their members. Most models using weighted classifier ensembles determine the weights for each classifier using a set of heuristics related to classifier performance on the most recent data received [22].
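As a minimal illustration of the sliding-window strategy mentioned above (a sketch assuming a generic classifier object with fit/predict methods; the window size is an arbitrary choice of this illustration):

from collections import deque

class SlidingWindowLearner:
    # Keep only the most recent labelled examples and retrain on them,
    # so that old concepts are forgotten as the window slides.
    def __init__(self, make_classifier, window_size=500):
        self.make_classifier = make_classifier
        self.window = deque(maxlen=window_size)  # old instances fall out automatically
        self.model = None

    def update(self, X_batch, y_batch):
        self.window.extend(zip(X_batch, y_batch))
        X, y = zip(*self.window)
        self.model = self.make_classifier()
        self.model.fit(list(X), list(y))  # retrain using only the window contents

    def predict(self, X_batch):
        return self.model.predict(X_batch)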
Although several algorithms have already been proposed in the literature for classification in concept drift scenarios (many even using ensembles), neuroevolution has still been little explored for this type of problem. Neuroevolution uses evolutionary algorithms to adjust parameters that affect the performance of artificial neural networks, such as topology, learning rate and weights, among others. In this case, each solution of the evolutionary algorithm stores a representation of these parameters, which are evolved to find the optimal network for the problem. Applied to neural network ensembles, an evolutionary algorithm is also able to dynamically adjust the entire model, a task that would be very arduous if performed manually, due to the complexity of the model.

Because of this architectural complexity, neuroevolutionary models based on classifier ensembles must have good computational performance and fast convergence in order to be applicable in real scenarios. This requirement becomes even more relevant in nonstationary environments, since it is necessary to update the ensemble each time new data become available or some change is detected in the data. Thus, this step must be fast so as not to compromise the overall performance of the model. To deal with this issue, an interesting and still little-explored strategy in the literature related to neuroevolutionary models is the use of quantum-inspired evolutionary algorithms. This is a class of evolutionary algorithms developed to achieve better performance in computationally intensive problems, inspired by quantum computing principles [17,18,2,39,52,8]. One of the main advantages of quantum-inspired evolutionary models is that good solutions are obtained with the smallest possible number of evaluations. This class of algorithms has previously been used in the literature to solve combinatorial and numerical optimization problems, based on binary [18,39] and real representations [2,39,52], providing better results and using less computational effort than classical genetic algorithms [47]. Applied to neural network ensembles, quantum-inspired evolutionary algorithms can be used to model the neural networks and to determine the voting weights of each ensemble member. Thus, each time a new block of data arrives, the ensemble can be optimized, improving its classification performance on the new data.

Models for learning in nonstationary environments may or may not contain drift detection mechanisms. Most of the models found in the literature assume that the changes occur in a hidden context external to the model itself and, therefore, that the drift cannot be predicted [15]. For this reason, these models use passive and reactive approaches, that is, they verify the drift occurrence from the results of the model (in classification problems, the label predicted by the model is compared with the real label received) and react to it only after the error is observed. However, anticipating the detection of drift in the input data before they are submitted for prediction (i.e., before receiving the true labels) seems to be a more satisfactory approach, since it permits adjusting the model in advance to better deal with the new scenario and avoid the classification error. For this reason, the model proposed in this work uses this active approach, an important differential compared to the existing approaches in the literature.
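The contrast between the two detection philosophies can be sketched as follows (a simplified illustration: the mean-comparison test is a deliberately crude stand-in, not the actual mechanism used in this work, which is detailed in [10]):

import numpy as np

def reactive_drift(y_true, y_pred, error_threshold=0.3):
    # Reactive: needs the true labels, so drift is noticed only after
    # classification errors have already been observed.
    return float(np.mean(np.asarray(y_true) != np.asarray(y_pred))) > error_threshold

def proactive_drift(X_train, X_new, z_threshold=3.0):
    # Proactive: uses only the unlabelled new block, comparing its feature
    # means against the training data before any prediction is made.
    mu = X_train.mean(axis=0)
    se = X_train.std(axis=0) / np.sqrt(len(X_new)) + 1e-12
    z = np.abs(X_new.mean(axis=0) - mu) / se
    return bool(np.any(z > z_threshold))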
Given the above, the main objective of this work is to propose and develop a self-adaptive and flexible model, with good accuracy, suitable for learning in nonstationary environments. A new quantum-inspired neuroevolutionary model, based on a Multi-Layer Perceptron (MLP) neural network ensemble, will be presented for learning in nonstationary environments. The proposed model, called NEVE (Neuroevolutionary Ensemble), has the following characteristics:

- It contains a concept drift detection mechanism, with the ability to detect changes proactively or reactively. This method, already detailed in [10], allows the reaction and adjustment of the model whenever necessary;
- It performs the automatic generation of new classifiers for the ensemble, most suitable for the new input data, using the quantum-inspired evolutionary algorithm for numerical and binary optimization (QIEA-BR) [39];
- It automatically determines the voting weights of each ensemble member, using the quantum-inspired evolutionary algorithm for numerical optimization (QIEA-R) [2,52], a simplified version of QIEA-BR.

Several experiments were performed with artificial and real datasets to validate and compare the performance of the proposed model with other existing models for learning in nonstationary environments, verifying how the detection model affects the performance and accuracy of NEVE.

This work is structured in four additional sections. Section 2 presents a brief review of the literature related to the fundamentals of concept drift. It also describes the quantum-inspired evolutionary models used in this work: QIEA-R and QIEA-BR. Section 3 presents the proposed neuroevolutionary model (NEVE) and Section 4 discusses the experimental results. Finally, Section 5 presents the conclusions of this work and possibilities of future work.

2 Literature review

Concept drift

The term concept drift can be defined informally as a change in the concept definition over time and, hence, a change in its distribution. Concept drift refers to a supervised learning scenario where the relationship between the input data and the target variable changes over time [15]. An environment from which this kind of data is obtained is considered a nonstationary environment. Formally speaking, considering the posterior probability of a sample x belonging to a class y, according to [9] concept drift is any scenario in which this probability changes over time, that is: P_{t+1}(y|x) ≠ P_t(y|x).

A practical example of concept drift mentioned in [29] is detecting and filtering out spam e-mails. The descriptions of the two classes "spam" and "non-spam" may vary over time. They are user specific, and user preferences also change over time. Moreover, the variables used at time t to classify spam may be irrelevant at t + k. In this way, the classifier must deal with "spammers", who will keep creating new ways to trick the classifier into labeling a spam message as a legitimate e-mail.

Concept drift is usually classified as abrupt or gradual [15,51,54]. Abrupt drift occurs when a concept A is abruptly switched for another concept B, that is, at time t the source S1 is suddenly replaced by S2. Gradual drift, on the other hand, happens when a concept A is gradually exchanged for another concept B. In this case, while there is no definitive change from concept A to concept B, we observe more and more occurrences of B and fewer occurrences of A.
Both sources S1 and S2 are active, but, as time passes, the probability of sampling from source S1 decreases while the probability of sampling from source S2 increases. At the beginning of this drift, before more instances are observed, an instance from the S2 source can easily be mistaken for random noise. It is important to note that noise (or an outlier) is not considered a type of drift, because it refers to an anomaly or isolated random occurrence. In this case, there is no need to adapt the model, which should be robust to noise.

The term "drift detection" refers to techniques and mechanisms for detecting drift by identifying points of change or small intervals during which the variations occur. In this case, the environment has changed sufficiently that the existing models can no longer be effective in predicting the behavior of the current data [15]. Several drift detection mechanisms have already been proposed in the literature, but most of them work reactively: they compare the class predicted by the classifier to the correct class label received later, noticing the drift only after its occurrence and the misclassification. Only then does the reactive detector apply a sequence of procedures to identify some change in the conditional class distribution, i.e., a concept drift. Examples of reactive detectors can be found in [14,5,36,4,42,3,31,23,13,46].

Few papers use a proactive approach. [28] applies principal component analysis (PCA) for feature extraction before drift detection. The authors discuss and show evidence that components with lower variance should be stored as the extracted features, since they are more likely to be affected by a change. The authors then choose a change detection criterion based on a semiparametric log-likelihood function that is sensitive to changes in the mean and variance of the multidimensional distributions. In [10], we proposed a new drift detection mechanism, called DetectA (Detect Abrupt Drift), which uses a proactive detection approach. This model is used in the experiments of this work and comprises three basic steps: (i) label the patterns from the test set (an unlabelled data block) using an unsupervised method; (ii) compute some statistics from the training and test sets, conditioned on the class labels provided in the training set; and (iii) compare the training and testing statistics using a multivariate hypothesis test. Based on the results of the hypothesis tests, we attempt to detect the drift on the test set before the real labels are obtained.

Algorithms for handling concept drift problems can be categorized in several ways. Table 1, based on [9,27,29,30], summarizes the most commonly used classifications in the literature, with their respective definitions.

Algorithms that use the passive approach (without drift detection) regularly update the model as new data arrive, and a forgetting heuristic is used independently of the existence of change. For example, in a classifier ensemble, the weights of the members are updated after each new data item or block received, based on the recent accuracy of the ensemble members. Without concept drift, the classification accuracy will be stable and the weights will converge. If any changes occur, the weights will change to reflect them, without the need for explicit detection [29]. However, this can be very costly if the amount of data that arrives is excessively large or if the application requires user feedback to label the data, which can be time-consuming.
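A minimal sketch of this passive weight-update scheme (the exponential forgetting factor is an assumption of the illustration, not prescribed by the references):

import numpy as np

def update_voting_weights(weights, members, X_block, y_block, forget=0.8):
    # Blend each member's previous weight with its accuracy on the newest
    # block; normalization keeps the weights usable for weighted voting.
    new_weights = []
    for w, clf in zip(weights, members):
        acc = float(np.mean(clf.predict(X_block) == y_block))  # recent accuracy
        new_weights.append(forget * w + (1.0 - forget) * acc)
    total = sum(new_weights)
    return [w / total for w in new_weights]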
One way to reduce the cost of such constant updating is to use special techniques to detect changes and adapt the model only when unavoidable, using the active approach [51], also called the trigger approach. In general, when active approaches detect a drift, some action is taken, for example, configuring a window with the latest data and retraining the classifier, or adding a new classifier to the ensemble. Thus, the active method seeks to point out when the drift occurred and allows the model to modify itself or continue learning in the same way. A disadvantage of this method is the risk of having an imperfect mechanism that produces false alarms, which is very common particularly in noisy datasets. In the passive mechanism, the learner believes that the environment can change at any time or can be continuously changing. The algorithm then continues to learn from the environment, building and organizing its knowledge base. If a change has occurred, it is learned. If nothing happened, the existing knowledge is reinforced [9]. The majority of ensembles in the literature follow a passive scheme of adaptation, whereas active approaches are usually used with single online classifiers [27]. The models [24,26,48] are examples of passive approaches and the models [14,5,36-38,32] are examples of active approaches.

Regarding data entry, it is worth emphasizing that individual patterns can be converted into batches or blocks of data. The opposite is also possible, but block data can come in large quantities, making instance-based processing very time-consuming [29].

Comparing single-classifier and ensemble approaches, ensemble-based approaches are newer and tend to have better accuracy, flexibility and efficiency than those using a single classifier [29]. It is important to remember that for massive datasets it is often preferable to use simple models, such as single classifiers, since there may not be time to execute and update an ensemble. On the other hand, some authors argue that a simple ensemble may be easier to use than certain simple adaptive classifiers, such as decision trees. When time is not the main concern, but high accuracy is required, an ensemble becomes the natural solution. For example, in mammography screening for tumors, it is acceptable to take a few minutes per image [30].

Ensemble approaches can use different methods to adapt to a concept drift. As mentioned earlier, responding to several types of concept drift is a difficult task for a simple classifier. For this reason, several systems based on classifier ensembles have recently been proposed to deal with concept drift learning, such as [49, 48, 11, 12, 44, 24-26, 45, 9, 33, 53, 6, 50]. The main novelty proposed in this work is the possibility of using an active drift detection mechanism (DetectA) together with an ensemble of neural networks, trained and combined through quantum-inspired evolutionary algorithms, allowing automatic and dynamic adjustment of the classifiers and their weights in the ensemble, using less computational time.

Quantum-inspired evolutionary algorithms

Classical evolutionary algorithms have been used successfully to solve complex optimization problems in a wide range of fields, such as automatic circuit and equipment design, task planning, software engineering and data mining, among many others [1,2].
The fact that this class of algorithms does not require rigorous mathematical formulations of the problem to be optimized, besides offering a high degree of parallelism in the search process, is among the advantages of evolutionary algorithms. However, some problems are computationally costly regarding the evaluation of the fitness function during the search process, making optimization by evolutionary algorithms a slow process in situations where a fast response is desired (as in online optimization problems). In order to address these issues, quantum-inspired evolutionary algorithms have been developed: a class of estimation-of-distribution algorithms that perform better in combinatorial and numerical optimization when compared to their homologous canonical genetic algorithms [1,2,8,17,18,39,52]. These algorithms are inspired by concepts of quantum physics, in particular the concept of superposition of states, and were initially developed for optimization problems using binary representation, such as the Quantum-Inspired Evolutionary Algorithm (QIEA-B) [17-20], which uses a chromosome formed by q-bits. Each q-bit consists of a pair of numbers (α, β), where |α|^2 + |β|^2 = 1. The value |α|^2 indicates the probability that the q-bit has value 0 when observed, while |β|^2 indicates the probability that the q-bit has value 1 when observed. Thus, in QIEA-B, a quantum individual is formed by M q-bits, according to (1):

q = [(α_1, β_1), (α_2, β_2), ..., (α_M, β_M)],   (1)

with one pair (α_i, β_i) for each i = 1, 2, 3, ..., M.

(Table 1, in part: classification of concept-drift learning algorithms.
Active x Passive:
- Active: uses some drift detection mechanism, learning only when the drift is detected.
Individual input x Input in blocks:
- Individual: learns one instance at a time. Such methods have better plasticity but poorer stability properties; they also tend to be more sensitive to noise, as well as to the order in which the data are presented.
- In blocks: requires blocks of instances to learn. Such methods benefit from the availability of larger amounts of data and have better stability properties, but can be ineffective if the batch size is too small, or if data from multiple environments are present in the same batch. They typically use some form of windowing to control the batch size.
Single classifier x Ensemble:
- Single classifier: uses only one classifier.
- Ensemble: combines multiple classifiers.)

Quantum-inspired evolutionary algorithms were then extended to real representation, to better deal with numerical optimization problems. In these problems, direct representation is more appropriate: real numbers are directly encoded in the chromosome rather than obtained by converting binary strings into numbers. With real numerical representation, the memory demand is reduced while the precision is increased [1]. Thus, the Quantum-Inspired Evolutionary Algorithm with Real Representation (QIEA-R) was developed [1,2], inspired by the concept of multiple universes from quantum physics. In this scenario, the algorithm allows performing the optimization process with a smaller number of evaluations, substantially reducing the computational cost. The next sections describe the QIEA-R and QIEA-BR models, which are better suited to neuroevolution.

Quantum-inspired evolutionary algorithm with real representation (QIEA-R)

Originally proposed in [1], this algorithm was used to solve numerical optimization benchmark problems and for the neuroevolution of recurrent neural networks. The results obtained demonstrated the efficiency of this algorithm in the solution of these types of problems.
In QIEA-R, the quantum population Q(t) consists of N quantum individuals q_i (i = 1, 2, 3, ..., N), which are composed of G quantum genes. Each quantum gene is formed by a probability density function (PDF), which represents the superposition of states and is used to observe the classical gene. Quantum individuals can be represented by:

q_i = [p_i1, p_i2, ..., p_iG],   (2)

where i = 1, 2, 3, ..., N, j = 1, 2, 3, ..., G, and the p_ij functions are the probability density functions used by the QIEA-R to generate the values of the genes of the classical individuals. In other words, the p_ij(x) function represents the probability density of observing a given value for the quantum gene when its superposition is collapsed. The probability density function used by [1] is the square pulse, a uniform function of simple geometry, which can be defined by Eq. (3):

p_ij(x) = 1/(U_ij − L_ij) if L_ij ≤ x ≤ U_ij, and 0 otherwise,   (3)

where L_ij is the lower limit and U_ij is the upper limit of the interval in which gene j of the i-th quantum individual can collapse, i.e., assume values when observed. For the case where p_ij(x) is a square pulse, the quantum gene can be represented by storing the position of the center point of the square pulse and its width, μ_ij and σ_ij, respectively.

The QIEA-R thus uses a population of quantum individuals, which are observed to generate the classical individuals. The updating of the quantum individuals is carried out based on the evaluation of the classical individuals: μ_ij and σ_ij are altered in order to bring the pulse to the most promising region of the search space, increasing the probability of observing values for the classical gene in the vicinity of the most successful individuals in the classical population. The pseudocode of the QIEA-R algorithm is shown in Appendix 1.

In this work, the QIEA-R is used to evolve voting weights for each classifier member of the ensemble and thus determine the final decision of the ensemble. In this case, the chromosome has size n, where n is the number of ensemble members, and each gene represents the voting weight associated with one classifier. Further details on QIEA-R can be found in [1,2,52].

Quantum-inspired evolutionary algorithm with binary-real representation (QIEA-BR)

The main motivation for creating an algorithm with mixed representation is that many real problems cannot be solved by numerical decisions or combinatorial decisions alone. More specifically, in the field of neural networks, the modeling process may involve combinatorial decisions (selection of the most relevant variables for the input layer, how many neurons should be used in the hidden layer, etc.) and, simultaneously, numerical decisions (optimal values for the synaptic weights). With this motivation, [40] proposed an algorithm with quantum inspiration and binary-real representation, called QIEA-BR, for the simultaneous optimization of combinatorial and numerical problems, that is, problems of mixed nature. The QIEA-BR algorithm was the first evolutionary algorithm with quantum inspiration and mixed representation proposed in the literature, and it inherits the main characteristics of its precursors, such as global problem-solving ability and probabilistic representation of the search space. This mixed representation results in high population diversity within each quantum individual and reduces the number of individuals needed in the population to explore the search space.
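To make the mixed binary-real representation concrete before the formal definition, the observation (collapse) of such an individual can be sketched as follows (array shapes and names are assumptions of this illustration):

import numpy as np

rng = np.random.default_rng(0)

def observe_mixed_individual(beta_sq, centers, widths):
    # Binary part: each bit is 1 with probability |beta|^2, as in QIEA-B.
    bits = (rng.random(len(beta_sq)) < beta_sq).astype(int)
    # Real part: each gene is drawn uniformly from its square pulse, as in QIEA-R.
    reals = rng.uniform(centers - widths / 2.0, centers + widths / 2.0)
    return bits, reals

# Example: 4 structural bits plus 3 real-valued weights
bits, reals = observe_mixed_individual(np.array([0.9, 0.1, 0.5, 0.5]),
                                       centers=np.zeros(3),
                                       widths=np.full(3, 2.0))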
The QIEA-BR algorithm also requires a population of quantum individuals that represents the superposition of possible states that the classical individuals can assume when observed. The quantum population Q(t), at any instant t of the evolutionary process, is formed by a set of N quantum individuals q_i (i = 1, 2, 3, ..., N). Each quantum individual q_i of this population is formed by L genes g_ij (j = 1, 2, 3, ..., L). The main difference between the QIEA-BR and its predecessors is that part of the L genes is represented by q-bits (similar to QIEA-B) and the other part by real quantum genes (q-real, similar to QIEA-R). Thus, a quantum individual i at any time instant t can be described as the concatenation of a binary and a real part,

q_i = [g_i1^b, ..., g_ik^b | g_i1^r, ..., g_im^r],

where the index b denotes the binary part (q-bits) and the index r denotes the real part (q-real).

In this work, the QIEA-BR is used to perform the complete modeling of an MLP artificial neural network. The binary part selects the most appropriate input variables; defines which neurons (out of a maximum number of neurons) are active in the hidden layer (1 for an active neuron, 0 for inactive); and specifies the activation function of each neuron in the network (1 for hyperbolic tangent, 0 for sigmoid). The real part determines the values of all weights. Figure 1 illustrates the information that is encoded in each of the quantum genes, binary or real, of a QIEA-BR chromosome. This chromosome is used in the neuroevolutionary model presented in Section 3.

In QIEA-BR, the evolution of the weights and activation function of a given neuron in the quantum and classical chromosomes is conditioned on that neuron being active in the corresponding binary part. That is, the genes representing the weights and activation functions remain unchanged by the quantum and classical evolutionary process if the neuron is inactive. The neural network created by QIEA-BR is similar to that shown in Fig. 2: the effective number of attributes in the input layer and of neurons in the hidden layer are evolved by the QIEA-BR, with the maximum number of inputs equal to the number of available attributes in the dataset (k) and the maximum number of neurons in the hidden layer (nh) configured by the user. The total number of genes is therefore a function of k, nh and nc, where nc is the number of classes in the classification problem.

In this case, the evaluation function used is the classification accuracy,

acc = 1 − (1/P) Σ_{i=1..P} e_i,

where e_i = 0 when C_i = Ĉ_i and e_i = 1 otherwise, C_i being the class of the i-th pattern and Ĉ_i the class predicted by the individual (MLP), with P patterns in total. Each individual is submitted to this evaluation function, in such a way that the best individuals are those with greater accuracy. Further details on QIEA-BR can be found in [40,52].

NEVE: Neuroevolutionary Model for Learning in Nonstationary Environments

This section presents the proposed new quantum-inspired neuroevolutionary model, a self-adaptive and flexible ensemble of MLP neural networks in which each member network is trained and has its parameters (topology, weights, among others) optimized by the QIEA-BR algorithm (see Section 2). This neuroevolutionary model is called NEVE (Neuroevolutionary Ensemble) and is composed of three main modules, detailed below and illustrated in Fig. 3:

- Drift Detection;
- Classifier Creation;
- Evaluation and combination weights.

The Drift Detection module is optional. If activated, for each new input data block received, the detection module checks whether any drift has occurred.
The model works with data blocks of configurable size. If it is necessary (or desired) to work with individual data inputs, the block size can be set to 1. However, it is important to mention that the strategy of working with one instance at a time is not the most suitable for this model, as it may compromise its computational performance. Two detection methods were proposed, proactive and reactive, resulting in four different approaches implemented for this drift detection module [10].

The Classifier Creation module is responsible for creating a new classifier, which may or may not be added to the ensemble, depending on the ensemble's maximum size defined by the user. It is worth mentioning that the decision to create a new neural network is linked to the drift detection mechanism used, which is detailed in the following subsections. If created, the new classifier is added to the ensemble if space is available, or by replacing an older classifier of worse accuracy. This approach gives the ensemble the ability to learn the new data without having to analyze the old data, as well as allowing it to forget data that are no longer needed. In short, the Classifier Creation module determines the complete configuration of the new MLP ensemble member using the QIEA-BR algorithm (presented in Section 2). The algorithm selects the most relevant input variables, specifies the number of neurons in the hidden layer (respecting the maximum limit configured by the user), and determines the weights and activation functions of each neuron. The number of output neurons is equal to the number of classes in the application.

Finally, the Evaluation module is responsible for determining the final response of the classifier ensemble by combining the results presented by the member classifiers. The QIEA-R algorithm is used to dynamically determine the most suitable voting weight for each classifier. The optimization of the weights allows the model to adapt easily to sudden data changes by assigning higher weights to the classifiers best suited to the current concepts governing the data. Three voting methods were implemented:

- Linear Combination: uses the QIEA-R algorithm to generate a voting weight for each classifier, which is multiplied by the output of each ensemble member (between 0 and 1) in a weighted average. The result of this weighted average determines the ensemble response. If the problem has only two classes, the output is assigned to class 0 if the result is less than 0.5 and to class 1 otherwise; in problems with multiple classes, the chosen class is the one whose output has the highest value;
- Weighted Majority Voting: as in the previous case, the QIEA-R algorithm generates a voting weight for each classifier. However, the outputs of the neurons of each ensemble network are first rounded (to 0 or 1) and then multiplied by the corresponding classifier weight, forming a weighted average. As in the linear combination, in problems with only two classes the output is defined as class 0 if the result of the weighted average is less than 0.5 and as class 1 otherwise; in problems with multiple classes, the class associated with the highest output value is chosen;
- Simple Majority Voting: the output of each ensemble member is rounded to one of the possible classes, and the ensemble's final output is the class chosen most often among all classifiers.
In this case, there is no need to determine voting weights.

In summary, considering the detection mechanism used, there are four possible variations of the proposed NEVE model, detailed in the following subsections:

- ND-NEVE, without detection;
- RD-NEVE, with reactive detection;
- PDGL-NEVE, with proactive detection and the Group Label approach;
- PDPMS-NEVE, with proactive detection and the Pattern Mean Shift approach.

The following subsections detail each of the four proposed NEVE variations. For each variation, an explanatory text and the pseudocode of the algorithm are presented.

ND-NEVE (without detection)

The first variation of NEVE, "NEVE without Detection" (ND-NEVE), as the name implies, does not use any detection mechanism. It consists of an ensemble of MLP neural networks that, with each new data block received, trains a new MLP that can be added to the ensemble if space is available. The operation of ND-NEVE can be summarized as follows: when a data block t arrives (without the class labels), a new MLP network is trained using the QIEA-BR algorithm and block t-1 with the real class labels. The new network is provisionally added to the ensemble and the ensemble is tested with block t. The voting weights of all networks are determined using the QIEA-R algorithm and block t-1. The final ensemble classification is calculated using the test results on block t, the voting weights and the chosen voting method. Finally, we assume that the actual labels of block t become available, and the permanence of the new network in the ensemble is then evaluated. The pseudocode of ND-NEVE is presented in Appendix 2.

RD-NEVE (with reactive detection)

The second variation of NEVE is RD-NEVE (with reactive detection). This variation uses the reactive detection mechanism detailed in [10]. For each new data block received, the ensemble classifies it and, as soon as the real class labels are obtained, the detection mechanism checks whether a drift has occurred relative to the previous data block. If so, a new MLP is created, which is added to the ensemble if space is available. The operation of RD-NEVE can be summarized as follows: when a data block t arrives, the voting weights of all ensemble members are determined using the QIEA-R algorithm and block t-1. The ensemble is tested with block t and the classification results are combined with the weights calculated by QIEA-R, using the chosen voting method, to determine the final ensemble classification. It is assumed that the real labels of block t become available later, so that reactive detection can be applied [10]. If a drift has occurred in block t, a new MLP network is created using the QIEA-BR algorithm and trained with block t. The new network is added to the ensemble if space is available or if it is better than at least one of the old networks, replacing it in the ensemble. The pseudocode of RD-NEVE is presented in Appendix 2.

PDGL-NEVE (with proactive detection and the Group Label approach)

The third variation of NEVE is PDGL-NEVE (with proactive detection and the Group Label approach). This variation uses the proactive detection mechanism [10], in which each new data block is clustered, using the centroids of the previous data block as the initial centroids of the algorithm. Based on the clustering results, the detection mechanism checks whether a drift has occurred relative to the previous data block; if so, the model trains a new MLP with the new data block and the class labels suggested by the clustering algorithm.
The operation of PDGL-NEVE can be summarized as follows: when block t arrives, its instances are clustered using the real classes of block t-1 as the initial suggestion of centroids, since the real class labels of block t are still unknown. It is then verified whether a drift has occurred in block t relative to block t-1. If so, a new MLP network is created using the QIEA-BR algorithm and trained on block t with the class labels provided by the clustering algorithm. The new network is provisionally added to the ensemble, which is tested with block t. The voting weights of all networks are determined using the QIEA-R algorithm and block t, also with the labels provided by the clustering algorithm. The classification results and weights are combined using the chosen voting method to determine the final ensemble classification. It is assumed that the real labels of block t become available later, and the initial centroids for the next clustering are updated, now considering the real class labels of the data block. The permanence of the new network in the ensemble is then evaluated: it stays if space is available or if it is better than at least one of the old networks, replacing it in the ensemble. The pseudocode of PDGL-NEVE is presented in Appendix 2.

PDPMS-NEVE (with proactive detection and the Pattern Mean Shift approach)

The fourth variation of NEVE is PDPMS-NEVE (with proactive detection and the Pattern Mean Shift approach). This variation also uses proactive detection [10]. As in the previous variation, each new data block is clustered to verify whether a drift has occurred relative to the previous data block. If so, a new MLP is trained with the previous labeled data block, and the new data block is "adjusted" towards the previous data block. In other words, when a drift is detected, instead of creating a new MLP using the new data block (as in the Group Label approach), the old data block is used to train the network and the drift is "removed" from the new data block. While in the Group Label approach the new network is fitted to the new data, in the Pattern Mean Shift approach the new data are adjusted to the old network (trained with the old data). The pseudocode of PDPMS-NEVE is presented in Appendix 2.

Briefly, the main difference between PDGL-NEVE and PDPMS-NEVE is that in PDPMS-NEVE, when a drift is detected, a new MLP is created using the previous labeled data block (and not the new data block with the labels provided by the clustering, as in PDGL-NEVE). The new data block is then "adjusted" in the direction of the previous data block and submitted to the ensemble for classification. In PDGL-NEVE, on the other hand, the new data block is tested by the ensemble without adjusting the data. Additionally, in PDPMS-NEVE the data block used to determine the weights of each MLP is the old data block with the real labels, while in PDGL-NEVE the new data block is used with the labels provided by the clustering.

This section presented the neuroevolutionary model for learning in nonstationary environments proposed in this paper and detailed its four variations. The next section describes the experiments performed with the proposed detection methods.

Experiments

To assess the ability of the proposed model to learn in nonstationary environments, and also to verify the best variations and configurations of the models regarding accuracy and computational performance, six different datasets were used in different simulations and scenarios.
For the experiments, the four variations of the proposed model (described in Section 3) were used: ND-NEVE, RD-NEVE, PDGL-NEVE and PDPMS-NEVE. All experiments were run using standard MATLAB libraries, as well as its Neural Networks package to train the baseline networks.

Datasets description

The datasets used in the experiments are the SEA Concepts (an artificial dataset with a more controlled environment regarding the drifts) and four real datasets (Nebraska, Electricity, Cover Type and Poker Hand), for which the exact moment at which drift occurs is unknown. The SEA Concepts dataset was artificially created by [49]. It is characterized by extensive periods without major changes in the environment, but with occasional abrupt drifts. The Nebraska dataset is a compilation of climate measurements from the Offutt Air Force Base substation in Bellevue, Nebraska. Its objective is to predict whether rainfall will occur, using data from the last 30 days. Both datasets are available in [41]. The Electricity dataset is extracted from the Australian New South Wales Electricity Market, and the class label indicates the price change relative to a moving average of the last 24 hours. The purpose of the problem is to predict whether the price will go up or down. The Cover Type dataset contains information on cells corresponding to 30 × 30 meter patches of forest cover, extracted from the US Forest Service (USFS). Its goal is to predict the type of forest cover among seven possible values (therefore, a multi-class problem). The Poker Hand dataset has ten possible output categories, representing the poker hand formed by 5 cards. The purpose is to identify the type of poker hand among the ten possibilities. These datasets are available in [34]. Table 2 presents the main features of each dataset, as well as the block size and number of blocks used in the experiments.

Execution details

All executions begin at t = 0 and end when T consecutive data blocks have been presented for training and testing, with each block potentially subject to different scenarios of concept drift, with unknown rates and natures. As detailed in Section 3, the QIEA-BR algorithm evolves the topology of each new neural network, which is created following the criteria of each variation of the proposed model. The number of input variables is selected by QIEA-BR among the available variables in each dataset. For all datasets, a single hidden layer was used, whose number of neurons is evolved by QIEA-BR up to a maximum value specified by the user. The number of neurons in the output layer is equal to the number of classes in each dataset. The synaptic weights and the activation functions of the hidden layer and the output layer are also determined by QIEA-BR. The parameters of the quantum evolutionary algorithms are the same as those used by [1,40] and are detailed in Table 3. The three voting methods detailed in Section 3 were evaluated: linear combination, weighted majority voting and simple majority voting. The maximum ensemble size is also a parameter defined by the user. Table 3 presents the configuration of the parameters used in all the experiments. Thus, for each dataset, 72 different configurations of the model (4 × 3 × 3 × 2) were used, representing each possible combination of the parameters to be evaluated, as shown in Table 3. For each configuration, 30 simulations were performed, and the average accuracy and computational time of these runs were calculated.
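For concreteness, the 72-configuration grid can be enumerated as follows (the factor values reflect the parameters varied in the text; the string labels are illustrative):

from itertools import product

variations = ["ND-NEVE", "RD-NEVE", "PDGL-NEVE", "PDPMS-NEVE"]
voting_methods = ["linear combination", "weighted majority", "simple majority"]
max_ensemble_sizes = [5, 10, "unlimited"]
max_hidden_neurons = [5, 10]

grid = list(product(variations, voting_methods, max_ensemble_sizes, max_hidden_neurons))
assert len(grid) == 72  # 4 x 3 x 3 x 2 configurations per dataset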
Results

The experiments presented below aimed to investigate the differences in accuracy (the ratio of the number of correct predictions to the total number of input samples) and computational performance (execution time in seconds) among the four variations of the NEVE model, as well as the impact of the voting method, the ensemble size and the number of neurons in the hidden layer. The objective of the experiment is therefore to analyze how these choices affect the results of the models for each dataset.

Tables 4, 5, 6 and 7 show the results of the experiments, considering accuracy and computational performance measured in seconds. It should be noted that execution time is provided only for the SEA Concepts, Nebraska and Electricity datasets. Due to the considerable size of the Poker Hand and Cover Type datasets, their execution required parallelization across several computers, making the comparison of runtime between simulations impracticable. In all cases, the observed standard deviation was less than 2%. We highlighted the best 20% of results in bold and gray and the worst 20% in italics and underlined. The analysis of Tables 4 to 7 shows that:

- In general, the ND-NEVE, RD-NEVE and PDPMS-NEVE approaches provided the best accuracy, while PDGL-NEVE had the worst accuracy;
- Considering computational performance, the ND-NEVE, RD-NEVE and PDPMS-NEVE approaches presented the best computational times, and the PDGL-NEVE approach the worst. It was observed, however, that the dataset also has a great influence on this criterion: the slowest was Electricity, which is the dataset with the highest number of attributes and also the largest number of blocks among the datasets evaluated;
- The best voting methods in terms of accuracy are, in this order: linear combination, followed by weighted majority and simple majority. This shows that the quantum algorithm contributes positively to the accuracy of the model by determining the voting weights of the networks. Possibly, the early rounding performed in the weighted majority resulted in a lower average accuracy than the linear combination;
- As for computational performance, the best voting method was the simple majority, which was expected, since this method does not determine weights via the quantum algorithm;
- In general, the unlimited-ensemble strategy has lower accuracy than the limited ensembles. There was no significant difference in accuracy between ensemble sizes 5 and 10, which is a positive point, because the unlimited-ensemble strategies also presented the worst computational performance, as expected. The unlimited ensemble tends to provide worse accuracy probably due to the increase in the QIEA-R search space for determining the voting weights when there are too many networks: note that, in all the datasets used, there are at least 400 data blocks, which allows ensembles of 400 networks in the unlimited case;
- No substantial differences were observed in either the average accuracy or the average computational performance between the strategies with a maximum of 5 and 10 neurons in the hidden layer.

Figure 4 presents a comparative graph of the computational time for the three binary datasets: SEA, Nebraska and Electricity.
It can be observed that the computational time of the ND-NEVE approach is higher than that of the others, whereas the approaches with some type of detection present a similar mean computational time. This confirms that the proposed detection mechanism contributes to reducing the average execution time of the models.

The accuracy of the proposed NEVE approaches was also compared with the DWM [26], Learn++.NSE [9], RCD [16], EFPT [55] and AMANDA [56] models. We used three different drift detectors with the RCD algorithm: DDM [14], EDDM [5] and ECDD [42]. These simulations were carried out using MOA [35], an open-source framework for data mining that includes implementations of several learning algorithms for classification, regression, clustering and concept drift detection, among others. For this comparison, we used, for all datasets, the same block size chosen for the NEVE simulations. In order to make a more coherent comparison with NEVE and to discard the influence of the base classifier on the accuracy of the model, MLP neural networks were used as base classifiers in all the other models. All the models were parameterized using the values indicated by the respective authors.

Table 8 presents the results of the best configuration reached (in terms of accuracy) by each NEVE variation, compared with the results of the other models. We highlighted the best results, by dataset, in bold and underlined, the second best in bold, and the worst in italics and underlined. When more than one value is highlighted, it means that there is no statistically significant difference in the performance of the classifiers at the 0.05 significance level, according to the Wilcoxon test. We made 30 runs for each possible configuration and each dataset. In all cases, the observed standard deviation was less than 2%.

We can see from Table 8 that the NEVE approaches obtained the best result in 2 datasets and the second best in the other 3. Apparently, the ND-NEVE and RD-NEVE approaches provide uniformly superior results in terms of accuracy. What is noticeable in this experiment, in general, is that the EFPT model is the main competitor of NEVE in terms of accuracy on the SEA, Nebraska and Electricity datasets (as its authors did not perform tests with the Poker and Covtype datasets, we could not compare the models on these datasets), and the DWM model seems to be the main competitor of NEVE in terms of accuracy on the Poker and Covtype datasets. From the results presented, we can highlight that NEVE provides good results without the need for a detection method; however, by adding one, substantial gains in accuracy and computational performance can be obtained. This fact reinforces that the neuroevolutionary ensemble approach is a robust choice for situations in which datasets are subject to sudden behavioral changes.

Conclusion

This work presented a new quantum-inspired neuroevolutionary model, based on a multi-layer perceptron (MLP) neural network ensemble, for learning in nonstationary environments, called NEVE (Neuroevolutionary Ensemble). This model can be used in conjunction with the DetectA concept drift detection model [10], which has the ability to detect changes both proactively and reactively. The use of quantum-inspired evolutionary algorithms in conjunction with NEVE allows the automatic generation of new classifiers for the ensemble (including the decision on its topology, the most appropriate input variables and its weights) and the determination of the voting weights of each neural network member of the ensemble.
Four different variations of NEVE were implemented: ND-NEVE (without detection), RD-NEVE (with reactive detection), PDGL-NEVE (with proactive detection and the Group Label approach) and PDPMS-NEVE (with proactive detection and the Pattern Mean Shift approach). These variations differ from each other in the way they detect and treat drifts, and they were used in experiments with real and artificial datasets in order to evaluate which model variations and configurations achieved the best results. We varied the voting method, the maximum number of neurons in the hidden layer and the maximum size of the ensemble. It was found that the ND-NEVE, RD-NEVE and PDPMS-NEVE approaches produce the best results in terms of accuracy and computational performance. It was also observed that the linear combination is the best voting method in terms of accuracy, and simple majority voting the best in terms of computational performance. The unlimited-ensemble strategy has worse accuracy and computational performance than limited ensembles, with no significant difference between 5 and 10 networks. Compared with other consolidated models in the literature, the accuracy of NEVE was found to be superior in most cases. The ND-NEVE and RD-NEVE approaches provide uniformly superior results in terms of accuracy, but the addition of the detection method in some cases resulted in substantial gains. This fact reinforces that the neuroevolutionary ensemble approach is a robust choice for situations in which datasets are subject to sudden behavioral changes.

As future work, we intend to integrate, in a single evolutionary model, the creation of the neural network and the determination of the voting weights, in order to perform the evolution in a single integrated process. We also intend to use NEVE in real applications, in order to validate its practical use, although it is very hard to know for sure whether a dataset contains concept drift or not.

Adriano Soares Koshiyama received his BSc degree in Economics from UFRRJ and his MSc degree in Electrical Engineering from PUC-Rio. He is currently a PhD candidate in Computer Science at University College London (UCL), with his main research subject being Financial Computing and Analytics. His main research topics are related to Machine Learning, Statistical Methods, Optimization and Finance.

Appendix 1 - Pseudocode of the QIEA-R algorithm

The pseudocode of the QIEA-R algorithm is shown as follows.
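A minimal Python sketch of the QIEA-R loop described in Section 2 is given here; the specific update rules (learning rate and pulse-width decay) are assumptions of this illustration, not necessarily the authors' exact procedure:

import numpy as np

rng = np.random.default_rng(1)

def qiea_r(fitness, n_genes, pop_size=10, generations=100, lr=0.1, decay=0.99):
    mu = np.zeros((pop_size, n_genes))         # square-pulse centers
    sigma = np.full((pop_size, n_genes), 2.0)  # square-pulse widths
    best_x, best_f = None, -np.inf
    for _ in range(generations):
        # Observation: collapse each quantum individual into a classical one
        x = rng.uniform(mu - sigma / 2.0, mu + sigma / 2.0)
        f = np.array([fitness(xi) for xi in x])
        i = int(np.argmax(f))
        if f[i] > best_f:
            best_x, best_f = x[i].copy(), float(f[i])
        # Update: pull pulses toward the best solution found and shrink widths
        mu += lr * (best_x - mu)
        sigma *= decay
    return best_x, best_f

# Example: maximize a simple concave function of three genes
x_opt, f_opt = qiea_r(lambda v: -np.sum((v - 1.0) ** 2), n_genes=3)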
2020-01-31T15:19:22.131Z
2020-01-30T00:00:00.000
{ "year": 2020, "sha1": "563cf3eaaae2af379aa68b89bc4773320047928d", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s10489-019-01591-5.pdf", "oa_status": "HYBRID", "pdf_src": "Springer", "pdf_hash": "71153848fde857205c6c332eb04dd8c2dfdc55bf", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
28415044
pes2o/s2orc
v3-fos-license
Prochlorperazine in anxiety. Seventy patients suffering from primary anxiety state were selected for this clinical trial. They were randomly assigned to Prochlorperazine (Group I) or other drugs, e.g., Chlordiazepoxide (Group II). Hamilton's anxiety scale was employed to rate the anxiety. With 4 weeks of therapy there was a significant fall in score in both groups of patients, but the fall in score was greater in Group I cases as compared to Group II (t = 11.2, df = 68, P < 0.001). The reduction in score (to less than 10) and clinical improvement (medium to optimal improvement) were significantly greater and faster in Group I (87.5%) cases as compared to Group II (70%) cases (χ² = 5.225, df = 1, P < 0.02). The side effects were least in Group I cases. Prochlorperazine (Stemetil) gave a significant anxiolytic effect without adversely affecting mental function. In the dosage used, prochlorperazine was free from side effects and can be effectively employed in the management of anxiety. Too many drugs are being prescribed for the relief of anxiety and its associated symptoms. In 1973 more than 46 million prescriptions for sedatives and anti-anxiety drugs were dispensed by retail pharmacists in the National Health Service in England and Wales, and the increase since 1970 has been of the order of one million annually (Tyrer 1976). Major tranquillizers, e.g., Chlorpromazine, Promazine, Trifluoperazine, Fluphenazine, Oxypertine etc., in lower dosage are commonly given in anxiety (Hamilton et al 1963, Milne and Fowler 1960). Their disadvantages are the production of extrapyramidal disorders and dependence. With further studies on the phenothiazines (Chlorpromazine) for antiemetic effect, Prochlorperazine emerged as an antiemetic that was less active than Chlorpromazine as a central sedative. Clinical studies demonstrated its (Prochlorperazine's) considerably greater psychocorrective properties (St. Laurent et al 1962, Pennington 1959, Nashipury and Moulick 1977, Ahuja 1974). Drug therapy has an important place in the relief of anxiety, but this is being eroded by indiscriminate and sometimes irresponsible prescriptions. Since most phenothiazines in low dosage have been said to be effective in the treatment of anxiety states, it was decided to study the value of Prochlorperazine (Stemetil) in the treatment of anxiety. Material and Methods The study was undertaken in 70 patients who were suffering from varying degrees of anxiety and in whom the diagnosis of anxiety state was agreed upon by two consultants (Nashipury and Moulick 1977, Ahuja 1974, Ramchandran and Menon 1977). Only those patients in whom anxiety was diffuse and not limited to any particular object or situation were taken up for the study. None of them were receiving any tranquillizer at the time of study. A detailed case history was taken, and physical and psychological examinations were done. The Hamilton Anxiety Scale (Hamilton 1959) was employed to rate the anxiety state at the beginning and end of the trial. Before the commencement of the trial all patients underwent basal investigations consisting of complete hemogram, erythrocyte sedimentation rate, urine and stool examination, liver function tests, X-ray/screening of chest and electrocardiograms. Physiological measures, e.g., P.G.R. and fore-arm blood flow, were employed wherever possible to assess anxiety. Investigations were carried out initially, in the middle and at the end of the trial.
After a thorough physical and psychiatric assessment the patients were randomised into two groups (Ahuja 1974): Group I: 40 patients who were given Prochlorperazine (5 mg), 2 tablets three times a day, or more or less as needed. Group II: 30 patients who were given other drugs, like Chlordiazepoxide and Trifluoperazine, in their recommended doses. The patients were examined at pretreatment level and at weekly intervals throughout the 4-week trial, and improvement or otherwise in the symptoms was noted in anxiety evaluation charts. As and when the score came to less than 30, the dose of prochlorperazine in Group I patients was reduced gradually, and when it came to less than 10 the drug was withdrawn within one week (Ahuja 1974). Results Initially 82 patients were registered in this clinical trial, but 70 of them could complete all the stages of the trial. The majority of them were from the 4th (31.4%) and 3rd (37.2%) decades of life (mean age = 33.28 ± 10.04 years), with a male to female ratio of 4:1 (Table 1). The Hamilton anxiety scale was employed to rate the symptoms at the beginning and at the end of the trial; it is able to express quantitatively whether improvement is observed clinically, as compared to other methods (Ramchandran and Menon 1977, Lader and Marks 1971). On Hamilton's anxiety scale, the initial average score in Group I was 38.6 ± 6.14 and at the end of the trial the average score was 15.84 ± 8.16 (Table 3). The initial average score in Group II patients was 36.11 ± 7.15 and at the end of the trial 18.08 ± 7.86. In both groups the drugs caused a reduction in anxiety, but on comparing the fall in score in these two groups, the fall in score was significantly greater in Group I as compared to Group II patients (t = 11.2, df = 68, P < 0.001). The score during the whole period is depicted in Table 4. The recovery rate was higher and more rapid in Group I (87.5%) as compared to Group II (70%) cases, which was statistically significant (χ² = 5.22, df = 1, P < 0.02). Comparing the two groups in attaining a score of less than 10 in 4 weeks' time: χ² = 5.225, df = 1, P < 0.02, significant. Comparison of the fall of score between the two groups: t = 11.2, df = 68, P < 0.001, significant. The final assessment of the therapeutic response (Table 5) showed 87.5% had medium to optimal improvement in Group I patients while it was 70% in Group II cases. The drug (Prochlorperazine) did not show any significant side effects. Slight transient drowsiness was noted in 3 patients (7.5%) of Group I cases, but it was not of an order to warrant reduction in dosage or withdrawal of the drug. Extrapyramidal side effects were noted in 3 cases (10%) of Group II, and many of them had drowsiness and dryness of mouth etc. (5 cases or 16.6%), which warranted reduction of the dose and even withdrawal of the drug in 2 patients of Group II. Discussion The present study was of 70 cases in whom anxiety manifested both bodily and psychologically, commonly described as somatic and psychic anxiety. Both showed clearly as separate entities when analysed on the anxiety rating (Hamilton et al 1963, Hamilton 1959). The somatic group includes autonomic symptoms that are not under voluntary control, and the word psychic has more than one meaning. Cannon (1979) has described the physiology of the symptoms. The awareness of anxiety leads to feelings of dread and threat and other psychological symptoms.
The 'fight or flight' reaction is aroused by stimulation of the sympathetic nervous system, both through stimulation of sympathetic nerves and humorally by the release of adrenaline and other catecholamines (Tyrer 1979, 1982). This leads to an increase in cardiac output and shunting of blood from skin and gut to cardiac and voluntary muscles. The somatic symptoms of palpitation, difficulty in breathing, dryness of mouth, cold skin, muscular tension and tremor follow rapidly from these physiological changes if neither fight nor flight is appropriate. Many inventories and scales are available to measure the level of anxiety. In the present study the time-tested and accepted Hamilton's anxiety scale was employed at the beginning and end of the trial to measure the levels of anxiety. Hamilton's anxiety scale is able to express quantitatively the clinically observed improvement (Ramchandran and Menon 1977), and it can be correlated significantly with clinical improvement, which is not possible with other anxiety scales (Lader and Marks 1971). On Hamilton's anxiety scale the initial average score in Group I patients was 36.6 ± 6.14 and at the end of the trial was 15.86 ± 8.16, and in Group II patients it was 36.11 ± 7.15 and 18.08 ± 7.86 respectively. This showed a significant reduction in score in both groups with treatment, but the reduction was significantly greater in Group I cases as compared to Group II cases (Table 3; t = 11.2, df = 68, P < 0.001). Comparing the medium and optimal improvement in the two groups: χ² = 5.225, df = 1, P < 0.02, significant. A beneficial response to prochlorperazine in small doses was reported in patients with a variety of somatic complaints leading to tension, uneasiness of mind and ill-defined nervous feelings (Tyrer 1976, Milne and Fowler 1960, St. Laurent et al 1962, Nashipury and Moulick 1977, Ahuja 1974). It is effective in geriatric patients suffering from anxiety and agitation associated with organic diseases. In the present series of cases, 62.5% in Group I and 50% in Group II had an initial score of more than 30, and 37.5% in Group I and 50% in Group II had a score between 10-30, while no one had a score less than 10. The recovery rate and rate of improvement were faster in Group I patients as compared to Group II cases. Ahuja (1974) and Nashipury and Moulick (1977) reported similar results with prochlorperazine in low dosage in anxiety. Paterfy and Pinter (1972) concluded that it had comparable and significant anxiolytic action when administered in low dosage. The fewest side effects were noted with prochlorperazine, while in Group II there were extrapyramidal effects in a few patients. In view of the encouraging results obtained, prochlorperazine (Stemetil) can be effectively employed in the management of anxiety.
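For readers who want to reproduce the style of analysis reported above, here is a minimal sketch of the two tests used: an unpaired t-test on score reductions and a chi-square test on improvement rates. All counts and scores below are hypothetical placeholders, not the trial's raw data.

from scipy import stats
import numpy as np

# Hypothetical per-patient falls in Hamilton anxiety score (not trial data).
rng = np.random.default_rng(1)
fall_group1 = rng.normal(22.8, 5.0, size=40)  # Prochlorperazine group
fall_group2 = rng.normal(18.0, 5.0, size=30)  # Other-drugs group

# Unpaired two-sample t-test, as in "t = ..., df = 68, P < ...".
t_stat, p_val = stats.ttest_ind(fall_group1, fall_group2)
print(f"t = {t_stat:.2f}, p = {p_val:.4f}")

# Chi-square test on a 2x2 table of improved vs. not improved
# (hypothetical counts chosen only to mirror the group sizes).
observed = np.array([[35, 5],    # Group I: improved, not improved
                     [21, 9]])   # Group II: improved, not improved
chi2, p, dof, expected = stats.chi2_contingency(observed, correction=False)
print(f"chi2 = {chi2:.3f}, df = {dof}, p = {p:.4f}")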
2014-10-01T00:00:00.000Z
1985-07-01T00:00:00.000
{ "year": 1985, "sha1": "f6e99f51522ec317885e49d59cab9ce3300159ce", "oa_license": "CCBYNCSA", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "f6e99f51522ec317885e49d59cab9ce3300159ce", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
15231850
pes2o/s2orc
v3-fos-license
Clinically significant responses achieved with romidepsin across disease compartments in patients with cutaneous T-cell lymphoma Cutaneous T-cell lymphoma (CTCL) is a rare heterogeneous group of non-Hodgkin lymphomas that arises in the skin but can progress to systemic disease (lymph nodes, blood, viscera). Historically, in clinical trials of CTCL there has been little consistency in how responses were defined in each disease "compartment"; some studies only assessed responses in the skin. The histone deacetylase inhibitor romidepsin is approved by the US Food and Drug Administration for the treatment of CTCL in patients who have received at least one prior systemic therapy. Phase II studies that led to approval used rigorous composite end points that incorporated disease assessments in all compartments. The objective of this analysis was to thoroughly examine the activity of romidepsin within each disease compartment in patients with CTCL. Romidepsin was shown to have clinical activity across disease compartments and is suitable for use in patients with CTCL having skin involvement only, erythroderma, lymphadenopathy and/or blood involvement. Introduction Cutaneous T-cell lymphoma (CTCL) is a primarily indolent, heterogeneous group of non-Hodgkin lymphomas (NHLs) associated with a poor prognosis in late-stage disease (≥ IIB) [1,2]. The two most common subtypes of CTCL are mycosis fungoides (MF) and Sézary syndrome (SS), which constitute the majority of diagnoses [2][3][4], and CTCL is sometimes used interchangeably with MF/SS [5,6]. As a whole, CTCL is quite rare, constituting ∼4% of NHL diagnoses in the United States [3], with an age-adjusted annual incidence (Surveillance, Epidemiology, and End Results, 2005-2009) of 10.2 per million persons [4]. CTCL arises in the skin but can progress to systemic disease (lymph nodes, blood, viscera), resulting in significantly reduced survival [2,[6][7][8]. Staging of CTCL is based on disease involvement in these compartments [6], and a multivariate analysis showed that lymph node and blood involvement were independent prognostic factors for poor survival [7]. Even in patients with early-stage disease, pruritus is a common symptom of CTCL that can be debilitating and significantly impact patient quality of life [9][10][11][12][13]. Despite the knowledge that CTCL can progress to extracutaneous disease involvement, historically there had been little consistency in clinical trial response definitions in each disease "compartment" for patients with CTCL [14], even for systemic agents approved by the US Food and Drug Administration (FDA) for the treatment of CTCL (bexarotene [15,16], denileukin diftitox [17,18], romidepsin [19,20] and vorinostat [21,22]; exact indications vary). In 2007, the International Society for Cutaneous Lymphomas (ISCL) and European Organisation for Research and Treatment of Cancer (EORTC) published an update for patients with MF/SS that adjusted the tumor/node/metastasis classification used and incorporated a blood classification into staging [6]. Then in 2011, the ISCL, United States Cutaneous Lymphoma Consortium (USCLC) and the Cutaneous Lymphoma Task Force of the EORTC developed consensus guidelines for response definitions in the skin, lymph nodes, blood and viscera - as well as a composite global response score that includes all of these compartments - in patients with MF/SS [14]. Romidepsin is a structurally unique, potent, bicyclic, class I selective histone deacetylase inhibitor [23][24][25] approved by
the FDA in 2009 for the treatment of CTCL in patients who have received at least one prior systemic therapy and in 2011 for the treatment of peripheral T-cell lymphoma in patients who have received at least one prior therapy [19]. Approval in CTCL was based on results from two phase II studies of romidepsin for the treatment of CTCL in patients who had received at least one prior systemic therapy that demonstrated durable responses (composite objective response rate [ORR] of 34% with median duration of response [DOR] of 13.7-15 months) [20,26]. Although these studies were initiated before the development of the updated staging system and consensus guidelines on response definitions, they both incorporated disease assessment in all compartments, and used a composite response rate as the primary end point [20,26]. The pivotal trial also incorporated an assessment of pruritus reduction as an additional measure of clinical benefit [20,27]. The objective of this analysis of the pivotal study of romidepsin for the treatment of CTCL was to examine disease compartment data in greater detail. Baseline characteristics, responses, adverse events and pruritus in patients with disease in skin (erythrodermic vs. non-erythrodermic), lymph nodes and/or blood, as well as compartment-specific responses, were examined. Study design GPI-04-0001 (trial registration: NCT00106431) was a pivotal, single-arm, open-label, phase II, multicenter study of patients with CTCL enrolled at 33 centers in eight countries. The study design and eligibility criteria have been previously described in detail [20]. Briefly, adult patients with stage IB-IVA CTCL (at study entry, by the Mycosis Fungoides Cooperative Group [MFCG]/American Joint Committee on Cancer [AJCC] criteria [28] according to the tumor-node-metastasis-blood [TNMB] categories and staging system described at the National Cancer Institute [NCI] workshop published in 1979 [29]) who had previously experienced ≥ 1 failure of systemic treatment were treated with romidepsin at 14 mg/m² intravenously for 4 h on days 1, 8 and 15 of up to six 28-day cycles. Patients with at least stable disease could continue treatment beyond six cycles. The protocol, informed consent form and other study documentation were approved by an institutional review board or independent ethics committee prior to patient enrollment. All patients provided written informed consent before beginning the study. Efficacy and safety assessments Efficacy assessments and response criteria have also been previously described in detail [20]. Disease assessments were performed for skin, lymph nodes and blood involvement. The extent of disease in the skin was determined using the Severity-Weighted Assessment Tool (SWAT) [30,31] and erythroderma score [18,32]. Nodal involvement was measured with the Response Evaluation Criteria In Solid Tumors (RECIST) methodology [33], and blood involvement was measured by determining the absolute count and percentage of circulating malignant T cells (Sézary cells) primarily via flow cytometry (CD4/CD7 and/or CD4/CD26 immunophenotype). The primary end point was the ORR (complete [CR] and partial [PR] responses), using a rigorous composite end point based on the sum of the percentage changes in measurements in the skin, lymph nodes and blood (Table I). Reduction in pruritus was not part of the ORR, but was assessed and analyzed as an additional indicator of clinical benefit. DOR was also a key secondary end point.
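To make the composite calculation concrete, the sketch below combines compartment-level percentage changes into a single score and applies the pruritus rule described in the next paragraph. The response-classification cutoffs shown are illustrative assumptions only, not the trial's exact Table I criteria, and the example patient values are hypothetical.

def composite_score(delta_skin, delta_nodes, delta_blood):
    """Sum of percentage changes from baseline across compartments.

    Each delta is a percentage change (negative = improvement), e.g.
    delta_skin from the SWAT/erythroderma score, delta_nodes from lymph
    node SLD, delta_blood from the circulating Sezary cell count.
    """
    return delta_skin + delta_nodes + delta_blood

def classify_response(score, improvement_cutoff=-50.0, progression_cutoff=25.0):
    """Illustrative classification only; the trial's Table I rules differ in detail."""
    if score <= improvement_cutoff:
        return "PR"   # partial response (assumed cutoff)
    if score > progression_cutoff:
        return "PD"   # progressive disease (assumed cutoff)
    return "SD"       # stable disease

def cmrp(vas_by_cycle, baseline_vas, drop=30, consecutive=2):
    """Clinically meaningful reduction in pruritus: a VAS drop of >= 30 mm
    sustained for >= 2 consecutive cycles, in patients with baseline VAS >= 30."""
    if baseline_vas < 30:
        return False
    run = 0
    for vas in vas_by_cycle:
        run = run + 1 if baseline_vas - vas >= drop else 0
        if run >= consecutive:
            return True
    return False

print(classify_response(composite_score(-60.0, -10.0, 0.0)))  # hypothetical patient
print(cmrp([55, 40, 30, 25], baseline_vas=72))                # hypothetical VAS series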
Objective responses as well as pruritus scores based on a visual analog scale (VAS) were assessed within 2 weeks of treatment initiation, on day 1 of each treatment cycle, at the end-of-study visit (30 days after the last romidepsin dose) and every 2 months for patients who went off study without disease progression. Clinically meaningful reduction in pruritus (CMRP) on this trial was the focus of a previous article [29]. Pruritus was measured using a 100 mm VAS [18,22,32,34,35] from no itching (VAS = 0) to unbearable itching (VAS = 100). Patients with a baseline VAS score ≥ 30 were considered to have clinically significant pruritus; moderate pruritus was defined as a VAS score of 30-69; and severe pruritus was defined as a VAS score ≥ 70 [18,22,32,35]. CMRP was defined as a decrease in VAS score of ≥ 30 for ≥ 2 consecutive cycles for patients with moderate to severe pruritus at baseline. (Table I abbreviations and footnotes: CT, computed tomography; MRI, magnetic resonance imaging; PD, progressive disease; PR, partial response. * Confirmed responses must be repeated at least 1 month after initial assessment. † Δ skin = percentage change in total score from baseline of weighted body surface area (patients without erythroderma) or erythroderma scale (patients with erythroderma). ‡ Δ lymph node = percentage change in size of abnormal lymph nodes (sum of longest diameter) from baseline based on physical examination and/or CT/MRI. § Δ peripheral blood = percentage change in absolute number of circulating malignant T cells (Sézary cells) from baseline.) This threshold was prospectively selected based on expert input and previous use in clinical trials of other FDA-approved agents for CTCL [18,22,32,35]. Treatment-emergent adverse events (TEAEs) were assessed on days 1, 8 and 15 of each cycle according to the NCI Common Terminology Criteria for Adverse Events grading system (version 3) and tabulated by Medical Dictionary for Regulatory Activities system organ class. Compartment analysis Baseline characteristics, composite response rates, CMRP and AEs were examined for patients with only skin involvement, patients with erythroderma, patients with lymphadenopathy (≥ 1 lymph node ≥ 1.5 cm by conventional measurements or ≥ 1.0 cm by spiral computed tomography scan), patients with blood involvement (Sézary cell count ≥ 5% of lymphocytes) and patients with higher blood tumor burden (Sézary cell counts ≥ 1000 cells/mL and/or Sézary cells ≥ 20% of lymphocytes) at baseline. In addition, the proportion of patients with responses in each disease compartment was calculated. In skin, lymph nodes and blood, a complete compartment response was defined as no evidence of disease. A partial skin response was defined as a ≥ 50% decrease in SWAT or erythroderma score. A partial lymph node response was defined as a ≥ 30% decrease in the sum of the longest diameter (SLD) of lymph nodes in patients with lymph node involvement at baseline. A partial blood response was defined as a ≥ 50% decrease in circulating Sézary cells in patients with blood involvement or higher blood tumor burden at baseline. Statistical methods All patients who received ≥ 1 dose of romidepsin were included in the efficacy and safety analyses. Time to response and DOR data were summarized by Kaplan-Meier methods. p-Values were calculated for differences between the following pairs of patient groups: with or without only skin involvement, with or without erythroderma, with or without lymphadenopathy, with or without blood involvement and with or without higher blood tumor burden.
Differences in baseline characteristics were assessed by Fisher exact tests, Wilcoxon tests or t-tests; differences in response rates were assessed by Fisher exact tests; differences in time to response or DOR were assessed by log-rank tests; differences in AEs were assessed by Fisher exact tests; and differences in pruritus reduction were assessed by Wilcoxon tests. Test results were not adjusted for multiple comparisons. Role of the funding source Study GPI-04-0001 was conceived and the protocol written by Dr. William McCulloch and colleagues at Gloucester Pharmaceuticals, Inc. (now a wholly owned subsidiary of Celgene Corporation), with the assistance of practicing clinician colleagues, including coauthors Dr. Sean Whittaker and Dr. Youn Kim. The study was funded and run by Gloucester Pharmaceuticals, using a clinical research organization (Inveresk Research Group, Inc., which merged with Charles River Laboratories International, Inc., which was then acquired by Kendle International Inc. during the trial). Data interpretation was a collaborative effort by study personnel (Gloucester employees and trial investigators), a number of whom are coauthors. Financial support for medical editorial assistance was provided by Celgene Corporation. Results At baseline, 17 of 96 patients were diagnosed with SS, and the majority of patients on trial had advanced disease (71% ≥ stage IIB) and were heavily pretreated (median of 2 [range, 1-8] prior systemic therapies; Table II). Median compartment-specific DORs were generally numerically shorter than the composite median DOR; only the skin response in patients without erythroderma achieved a duration at or above the median composite DOR of 15.0 months (Table IV). Overall, the most common drug-related TEAEs were nausea, asthenic conditions and vomiting (Table V). Patients with blood involvement or higher blood tumor burden had significantly more frequent diarrhea not otherwise specified (NOS) than patients without blood involvement or higher blood tumor burden (p = 0.004 and p = 0.002, respectively), and patients with blood involvement also had significantly more frequent dysgeusia (p = 0.003). There were no other significant differences in common drug-related TEAEs across the patient subgroups examined. The majority of patients (65 of 96) had at least moderately severe pruritus at baseline, over half of which were characterized as severe (36/65; Table VI). Of these patients, 28 (43%) experienced CMRP, including 19 (53%) with severe pruritus at baseline. Unsurprisingly, patients with erythroderma had significantly higher baseline VAS scores than those without erythroderma (mean 78.6 vs. 59.9, p < 0.001). Patients with at least moderately severe pruritus at baseline without lymphadenopathy achieved significantly greater improvement in VAS score (45.2 vs. 33.9, p = 0.025) and rate of CMRP (65.4% vs. 28.2%, p = 0.005) than those with lymphadenopathy. Presence of erythroderma, blood involvement or higher blood tumor burden did not significantly impact improvement in VAS score or rate of CMRP. Patients with lymphadenopathy were more frequently female with an Eastern Cooperative Oncology Group (ECOG) performance status of 1 than those without lymphadenopathy (p = 0.006 and p = 0.014, respectively).
Patients with higher blood tumor burden were also more frequently female and had received more prior systemic therapies than those without higher blood tumor burden (p = 0.029 and p = 0.044, respectively). The types of prior therapies received were similar across the patient subgroups; the only significant differences reported were higher rates of bexarotene and photopheresis for patients with blood involvement versus without blood involvement (p = 0.047 and p = 0.001, respectively), and a lower rate of photopheresis (0%) for patients with only skin involvement versus involvement beyond the skin (p = 0.003). Discussion In this subanalysis of the pivotal phase II trial of romidepsin for the treatment of patients with CTCL who had received ≥ 1 prior systemic therapy, romidepsin was shown to have clinical activity in the skin, lymph nodes and blood (no patient with visceral involvement was enrolled on trial). Overall, patients in this trial were heavily pretreated with mostly advanced disease. Patients with higher blood tumor burden had received significantly more prior systemic therapies than other patients, and patients with blood involvement more often had received bexarotene or photopheresis (no patient with only skin involvement had received photopheresis). As expected, patients with erythroderma at baseline reported significantly higher pruritus at baseline than patients without erythroderma. Surprisingly, increased pruritus at baseline reported in patients with higher blood tumor burden did not reach significance compared with patients without higher blood tumor burden. Composite response rates, time to response and DOR were not significantly different across the patient subgroups, indicating that romidepsin is an effective therapy for patients with CTCL regardless of the compartments in which disease had manifested. When examining responses within each compartment, 40% of patients had a response in the skin (similar response rates were seen in patients with or without erythroderma), 33% of patients with lymphadenopathy had a response in the lymph nodes and more than half of patients with blood involvement or higher blood tumor burden had a response in the blood. Time to response within each compartment was similar to that seen for the composite response; however, the DOR within each compartment may have been shorter compared with the DOR for the composite response, indicating that disease control within individual compartments may be less durable than the composite response. (Table VI footnotes: VAS, visual analog scale; SD, standard deviation; CMRP, clinically meaningful reduction in pruritus (defined as a decrease in VAS score of ≥ 30 for ≥ 2 consecutive cycles for patients with moderate-to-severe pruritus at baseline). * Patients without definitive lymphadenopathy and blood involvement. † Patients with ≥ 1 lymph node ≥ 1.5 cm by conventional measurements or ≥ 1.0 cm by spiral computed tomography scan. ‡ Sézary cells ≥ 5% of lymphocytes. § Sézary cell counts ≥ 1000 cells/mL and/or Sézary cells ≥ 20% of lymphocytes. ¶ Baseline score of ≥ 30 mm. ** Indicates significantly different (p < 0.05) distribution from the alternative category: patients with or without only skin involvement, erythroderma, lymphadenopathy, blood involvement or higher blood tumor burden. †† Baseline score of ≥ 70 mm.)
In the pivotal trial of denileukin diftitox for the treatment of CTCL, a composite assessment was used to determine response that included the SWAT or summation of the bidimensional measurements in skin lesions and lymph nodes, and measurement of circulating tumor cells in the blood via flow cytometry [18]. Thus, although response rates for each of these approved drugs have been reported, the parameters for determining response varied widely, making it difficult to accurately compare response rates among systemic agents tested prior to development of the 2011 consensus guidelines. The 2011 consensus guidelines also stressed the need to include quality-of-life assessments in trials, including those specific to pruritus [14]. In agreement with the pruritus assessments in the romidepsin study discussed herein, the VAS continues to be used to quantify the severity of pruritus, and the guidelines also recommend elimination or stabilization of confounding pruritus treatments (e.g. antihistamines) when making comparative pruritus measurements [14]. The guidelines also highlight the need to determine what constitutes significant pruritus at baseline and what change in VAS should be considered significant improvement, but they do not make recommendations on how to define these parameters. The pivotal romidepsin study, as well as studies of vorinostat, denileukin diftitox and extracorporeal photochemotherapy for patients with CTCL, have used a 100 mm VAS scale with a threshold of 30 mm as the definition of a clinically significant reduction in pruritus [18,22,32,35]. However, published data on pruritus reduction with systemic agents other than romidepsin are limited. Development of consensus guidelines on measurement of pruritus in patients with CTCL is key to providing therapies that alleviate this debilitating symptom. In the pivotal study of romidepsin for the treatment of CTCL, romidepsin demonstrated clinical activity across disease compartments and is suitable for use in patients with erythroderma, lymphadenopathy and/or blood involvement. Utilization of the 2011 consensus guidelines in future clinical trials will allow for better understanding of the kinetics of disease in each compartment and what initiates and drives patient response or relapse.
2018-04-03T05:03:53.281Z
2015-05-20T00:00:00.000
{ "year": 2015, "sha1": "55e3cdf58112247e1efb9ea50a4396cf2bc738e6", "oa_license": "implied-oa", "oa_url": "https://www.tandfonline.com/doi/pdf/10.3109/10428194.2015.1014360?needAccess=true", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "55e3cdf58112247e1efb9ea50a4396cf2bc738e6", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
9471654
pes2o/s2orc
v3-fos-license
The influences of patient's trust in medical service and attitude towards health policy on patient's overall satisfaction with medical service and sub satisfaction in China Background It is widely accepted that a patient generates overall satisfaction with medical service and sub satisfaction on the basis of responses to the patient's trust in medical service and to the patient's attitude towards health policy in China. This study aimed to investigate the correlations between patient's trust in medical service/patient's attitude towards health policy and patient's overall satisfaction with medical service/sub satisfaction in current medical experience, and to find inspiration for future reform of China's health delivery system on improving patient's overall satisfaction with medical service and sub satisfaction in considering patient's trust in medical service and patient's attitude towards health policy. Methods This study collaborated with the National Bureau of Statistics to collect a sample of 3,424 residents from 17 provinces and municipalities in a 2008 China household survey on patient's trust in medical service, patient's attitude towards health policy, and patient's overall satisfaction and sub satisfaction in current medical experience. Results Patient's overall satisfaction with medical service and most kinds of sub satisfaction in current medical experience were significantly influenced by both patient's trust in medical service and patient's attitude towards health policy; among all kinds of sub satisfaction in current medical experience, patient's trust in medical service/patient's attitude towards health policy had the largest influence on patient's satisfaction with medical costs; the influences of patient's trust in medical service/patient's attitude towards health policy on patient's satisfaction with doctor-patient interaction and satisfaction with treatment process were at a medium level; patient's trust in medical service/patient's attitude towards health policy had the smallest influence on patient's satisfaction with medical facilities and hospital environment; while patient's satisfaction with waiting time in hospital was not influenced by patient's trust in medical service/patient's attitude towards health policy. Conclusion In order to improve patient's overall satisfaction with medical service and sub satisfaction in considering patient's trust in medical service and patient's attitude towards health policy, both improving patient's interpersonal trust in medical service from the individual's own medical experience/public trust in medical service and improving patient's attitude towards health policy were indirect but effective ways. Background Over the past few decades, patient satisfaction has taken a prominent position in the medical service research literature [1][2][3]. This attention has been justified since patient satisfaction (directly or indirectly) has become a key criterion for evaluating the quality of medical service and the encounters between medical professionals and patients [4][5][6]. In fact, patient satisfaction reflects not only patients' judgment and assessment of their medical experience but also their perception of the gap between what they want and what they receive [7]. Patient satisfaction is a summarizing response that results from patients' post-treatment cognitive and affective evaluation of medical service performance given pre-treatment expectation [8,9].
The central role of trust in medical relationships has been recognized for a long time [10][11][12][13]. Trust is seen as a global attribute of treatment relationships, one that encompasses subsidiary features such as satisfaction, communication, competency, and privacy - each of which has considerable importance in its own right [14]. Trust in medical service can be seen as trust in the physician and the medical institution, and it focuses on two questions: "whether the physician and medical institution are competent to make a diagnosis and provide treatment" and "whether the physician and medical institution will act in the best interest of the patient" [15]. Most patients must depend on the physician and medical institution for the information they need to answer these two questions, which results in an unbalanced relationship [15]. The formation of patient's trust in medical service is a process of learning and combining interpersonal trust in medical service from her/his own medical experience and public trust in medical service [16]; once a patient forms stable trust in medical service as a response to this unbalanced relationship between patient and physician/medical institution, it influences the patient's attitude towards, cognition of, and mode of response to the effectiveness of the medical service provided by the physician and medical institution to a large extent [17]. Patient's attitude towards health policy has also taken a prominent position in China's medical service research literature [17,18], due to the fact that China's administrative health delivery system (simply denoted as CHDS) was mainly regulated by relevant government agencies, including the health administrative agencies and the health insurance agencies, and health policy had strong guidance for the development of CHDS [18]. Evidence from interviews in China showed that among patients who received the same medical service, patients who thought highly (lowly) of health policy in recent years usually had higher (lower) satisfaction with medical service in current medical experience [18]. Patient's overall satisfaction with medical service in current medical experience is an aggregation of sub satisfaction involving satisfaction with doctor-patient interaction, satisfaction with treatment process, satisfaction with waiting time in hospital (most Chinese patients do not make appointments, so waiting time in China mainly refers to the time between registration and diagnosis), satisfaction with medical facilities and hospital environment, and satisfaction with medical costs [17].
It is widely accepted that patients generate multi-attribute-based responses on their overall satisfaction and sub satisfaction [19] in current medical experience [20][21][22][23]: the attribute-level responses of overall satisfaction and sub satisfaction are composed of several parts; one major part is responses to severity of disease, stage of disease, treatment effect, medical expense, and reimbursement percentage of medical costs in current medical experience, which have significant direct impacts on patient's overall satisfaction and sub satisfaction in current medical experience [17]; the other major part includes not only the response to patient's trust in medical service, which is a combination of the response to interpersonal trust in medical service from her/his own medical experience and the response to public trust in medical service [16], but also the response to patient's attitude towards health policy, and both of them have indirect impacts on patient's overall satisfaction and sub satisfaction in current medical experience [17,23]; a patient is expected to aggregate the attribute-level responses of satisfaction into an overall reflection of satisfaction. This aggregation process is presumed to be a heuristics-based decision-making process. Basically, a patient processes information to come to a decision on whether and to what extent she/he is satisfied with medical service [18]. So in order to improve patient's overall satisfaction and sub satisfaction, one indirect but effective way is to improve patient's trust in medical service (either improve interpersonal trust in medical service from the individual's own medical experience or improve public trust in medical service) and patient's attitude towards health policy [17]. To overcome the most important persistent problem "medical service is expensive and difficult to access" in CHDS, which was especially serious in higher-level hospitals (mainly referring to level-2 hospitals and level-3 hospitals; according to the Chinese Ministry of Health's most recent "Governing Rules for the Management and Classification of Hospitals" in 1989, public hospitals in China are classified into three levels: level 1 hospitals are "community hospitals or health clinics that provide direct prevention, treatment, health promotion, and rehabilitation services to participants of a defined community"; level 2 hospitals are "area hospitals that provide comprehensive medical and other healthcare services to participants of multiple communities, which may, to a certain degree, also serve as teaching hospitals and research bases"; level 3 hospitals are those that "provide high-quality, specialty medical and other healthcare services to participants in a minimum of several areas, and also serve as high-level teaching hospitals and conduct sophisticated research"), the Chinese government has already focused on how to improve patient satisfaction with medical service: in 2005 the Ministry of Health began a nationwide hospital management review in order to push health delivery organizations to improve patient satisfaction with medical service; some local governments used patient satisfaction with medical service to evaluate hospital performance and determine financial investment for hospitals [24]; many hospitals established patient satisfaction feedback mechanisms to improve their medical service quality [17]. The central and local governments have increased financial investments in CHDS for a long time, especially in higher-level hospitals.
Although the physical capacity of hospitals (especially higher-level hospitals) has improved quickly along with the increasing investment in buildings, equipment, and medical facilities, scarce resources, especially human resources, have been misallocated due to an improper pricing mechanism for medical service which cannot properly value advanced and scarce medical services. Although the number of medical professionals has increased, the percentage of excellent medical professionals has decreased; facing a sharply increasing volume of patients, most medical professionals have kept on treating disease with outdated technology, and as a result the overall quality and reliability of medical service have decreased [23]. Evidence from the literature [17,23] supported that the development of hospitals' competence, especially the competence of human resources (mainly the doctor's limited attention per patient), usually fell far behind the increase in patient demand, which induced not only a serious imbalance between the expansion and the inadequate competence of CHDS, but also a serious imbalance between patients' increasing demand and CHDS's limited medical service supply. As a result of these two major imbalances, the decreasing quality and reliability of medical service has induced a decrease in interpersonal trust in medical service from individuals' own medical experience in recent years [23]; due to the persistent imbalance between the public's high expectation of medical service and the poor actual effect of medical service provided by CHDS, public trust in medical service has kept decreasing in recent years [17,25]; the poor effect of China's health policy, which was far below public expectation, has induced a deterioration of public attitude towards health policy in recent years [18]; patient's sub satisfaction with human-resource-related hospital competence (mainly satisfaction with doctor-patient interaction and satisfaction with treatment quality) was usually lower than patient's sub satisfaction with medical facilities and hospital environment [23]; the persistent problem of long waiting time in hospital was in fact a problem of limited medical service supply and limited competence of CHDS resulting from scarce and inequitably allocated medical resources [26][27][28], and patients generally had low satisfaction with waiting time in hospital; the most important persistent problem, "medical service is expensive and difficult to access", and the problem of "supporting hospitals through drug sales" in CHDS have become more and more serious and have attracted most public attention in recent years [25], while patient's medical costs have kept increasing and patient's satisfaction with medical costs has kept decreasing up till now [17]. On the basis of the current situation of CHDS, this study attempted to test whether patient's trust in medical service and patient's attitude towards health policy had significant influences on patient's overall satisfaction with medical service and sub satisfaction in current medical experience, using a sample of 3,424 residents collected in a 2008 China household survey from 17 provinces and municipalities by the National Bureau of Statistics; inspiration for future reform of CHDS on improving patient's overall satisfaction with medical service and sub satisfaction in considering patient's trust in medical service and patient's attitude towards health policy was also found.
Data To obtain data on patient's trust in medical service, patient's attitude towards health policy, and patient's overall satisfaction and sub satisfaction in current medical experience, this study collaborated with the National Bureau of Statistics to collect a sample of 3,424 residents from 17 provinces and municipalities in a 2008 China household survey. In this survey both residents and investigators were very patient, and the National Bureau of Statistics confirmed that the quality of the data was high. 93.76% of the residents in this survey were urban residents and only 6.24% were rural residents, so the analysis in this paper mainly focused on urban residents. The questionnaire consisted of three parts: the first part inquired about patient's overall satisfaction with medical service and sub satisfaction (including satisfaction with doctor-patient interaction, satisfaction with treatment process, satisfaction with waiting time in hospital, satisfaction with medical facilities and hospital environment, and satisfaction with medical costs) in current medical experience; the second part inquired about patient's trust in medical service and patient's attitude towards health policy; the third part inquired about personal information on the current medical experience which had significant influence on patient satisfaction in current medical experience; here severity of disease, stage of disease, treatment effect, medical expense, and reimbursement percentage of medical costs were included. The use of the dataset was approved by the National Bureau of Statistics. The characteristics of the study population in current medical experience are shown in Table 1. The population distribution on each characteristic in current medical experience followed the natural distribution in China, which was the result of the stratified sampling design by the National Bureau of Statistics. Descriptions of the measures of patient's trust in medical service, patient's attitude towards health policy, and overall satisfaction and sub satisfaction in current medical experience are in Additional file 1. Regression model The ordered probit model is a generalization of the popular probit analysis to the case of more than two outcomes of an ordinal dependent variable. The latent evaluation score y*_i is a linear function of the independent variables, written as a vector x_i (here i is the sample index): y*_i = x_i·b + ε_i, where b is a vector of coefficients and ε_i is assumed to follow a standard normal distribution. For an ordered probit model with k cutoff points p_j (j = 1, 2, ..., k), the observed outcome is determined by whether y*_i ≤ p_1, p_j < y*_i ≤ p_{j+1} (j = 1, 2, ..., k-1), or y*_i > p_k. Following the notation of [29], the ordered probit model is expressed as Pr(y_i = y_0) = Φ(p_1 - x_i·b), Pr(y_i = y_j) = Φ(p_{j+1} - x_i·b) - Φ(p_j - x_i·b) for j = 1, 2, ..., k-1, and Pr(y_i = y_k) = 1 - Φ(p_k - x_i·b), where y_j (j = 0, 1, ..., k) are the discrete values of y_i and Φ is the cumulative standard normal distribution function. The marginal effect of x_i on the probability of each outcome can be calculated as ∂Pr(y_i = y_j)/∂x_i = [φ(p_j - x_i·b) - φ(p_{j+1} - x_i·b)]·b, where φ is the standard normal density function; based on these expressions, the coefficient vector b can be estimated by maximum likelihood.
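As an illustration of this estimation setup, the sketch below fits an ordered probit of a 5-level satisfaction rating on trust and policy-attitude scores. The data and variable names are hypothetical (synthetic), and statsmodels' OrderedModel is one standard implementation of the model described above, not the authors' software.

import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(42)
n = 1000

# Hypothetical covariates: trust in medical service, attitude towards
# health policy, and one control dummy (e.g., good treatment effect).
X = pd.DataFrame({
    "trust": rng.normal(0, 1, n),
    "policy_attitude": rng.normal(0, 1, n),
    "treatment_effect": rng.integers(0, 2, n),
})

# Latent satisfaction and its discretization into 5 ordered categories.
latent = (0.8 * X["trust"] + 0.5 * X["policy_attitude"]
          + 0.6 * X["treatment_effect"] + rng.normal(0, 1, n))
y = pd.cut(latent, bins=[-np.inf, -1, 0, 1, 2, np.inf],
           labels=False)  # 0 = very dissatisfied ... 4 = very satisfied

model = OrderedModel(y, X, distr="probit")
res = model.fit(method="bfgs", disp=False)
print(res.summary())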
Theoretical model specification The following ordered probit models (n = 1, 2, ..., 6) were estimated to test the different correlations between patient's trust in medical service/patient's attitude towards health policy and patient's overall satisfaction with medical service/sub satisfaction in current medical experience: y*_ni = b_n1·x_1i + b_n2·x_2i + Σ_m c_nm·z_mi + ε_i. Here y_ni (n = 1, 2, ..., 6) were patient's overall satisfaction with medical service/sub satisfaction in current medical experience; specifically, y_1i was patient's overall satisfaction with medical service in current medical experience, y_2i was patient's satisfaction with doctor-patient interaction in current medical experience, y_3i was patient's satisfaction with treatment process in current medical experience, y_4i was patient's satisfaction with waiting time in hospital in current medical experience, y_5i was patient's satisfaction with medical facilities and hospital environment in current medical experience, and y_6i was patient's satisfaction with medical costs in current medical experience; x_1i was patient's trust in medical service, and x_2i was patient's attitude towards health policy; z_m were control variables: since an individual's satisfaction in current medical experience was influenced by severity of disease, stage of disease, treatment effect, medical expense, and reimbursement percentage of medical costs in current medical experience, these were all controlled for as dummy variables in the regression models; the error term ε_i was assumed to be normally distributed. Results The descriptive statistics of patient's trust in medical service, patient's attitude towards health policy, and patient's overall satisfaction with medical service/sub satisfaction in current medical experience are shown in Table 2. Regression results of the theoretical models are in Table 3. From the results of regression models 1-6, among patient's overall satisfaction with medical service and all kinds of sub satisfaction, the coefficients for both patient's trust in medical service x_1i (p < 0.05) and patient's attitude towards health policy x_2i (p < 0.01) in regression models 1-3, 5 and 6 were all significantly positive, while only the coefficients for both patient's trust in medical service x_1i (p > 0.1) and patient's attitude towards health policy x_2i (p > 0.1) in regression model 4 were not significant. This revealed that patients who had a higher degree of trust in medical service or who had a more optimistic attitude towards health policy usually not only had higher overall satisfaction with medical service in current medical experience, but also had higher satisfaction with doctor-patient interaction, higher satisfaction with treatment process, higher satisfaction with medical facilities and hospital environment, and higher satisfaction with medical costs in current medical experience, while patient's satisfaction with waiting time in hospital in current medical experience was not influenced by either patient's trust in medical service or patient's attitude towards health policy.
From the sizes of the significant coefficients for both patient's trust in medical service x_1i and patient's attitude towards health policy x_2i in each regression model, generally speaking, among all kinds of sub satisfaction, patient's trust in medical service/patient's attitude towards health policy had the largest influence on patient's satisfaction with medical costs; the influences of patient's trust in medical service/patient's attitude towards health policy on patient's satisfaction with doctor-patient interaction and satisfaction with treatment process were at a medium level; while patient's trust in medical service/patient's attitude towards health policy had the smallest influence on patient's satisfaction with medical facilities and hospital environment. Since patient's overall satisfaction with medical service was an aggregation of all kinds of sub satisfaction in current medical experience, the influence of patient's trust in medical service/patient's attitude towards health policy on patient's overall satisfaction with medical service was larger than the influence of patient's trust in medical service/patient's attitude towards health policy on any kind of sub satisfaction. Discussion Patient's overall satisfaction with medical service and most kinds of sub satisfaction in current medical experience were significantly influenced by patient's trust in medical service/patient's attitude towards health policy; the influence of patient's trust in medical service/patient's attitude towards health policy on patient's overall satisfaction with medical service could be considered as an aggregation of the influences of patient's trust in medical service/patient's attitude towards health policy on all kinds of sub satisfaction in current medical experience; among all kinds of sub satisfaction in current medical experience, patient's trust in medical service/patient's attitude towards health policy had the largest influence on patient's satisfaction with medical costs, due to the fact that the persistent problem "medical service is expensive and difficult to access" in China was the most important problem patients paid most attention to in considering their trust in medical service/attitude towards health policy in their medical experiences [17,25]; the influences of patient's trust in medical service/patient's attitude towards health policy on patient's satisfaction with doctor-patient interaction and satisfaction with treatment process were larger than the influence of patient's trust in medical service/patient's attitude towards health policy on patient's satisfaction with medical facilities and hospital environment; this was because human resources, especially the number and quality of senior medical professionals in CHDS, had not increased concurrently to keep pace with the excessive physical expansion (mainly the expansion of buildings, equipment, and medical facilities in hospitals) of CHDS, so in considering trust in medical service and attitude towards health policy patients usually paid more attention to human-resource-related hospital competence (mainly doctor-patient interaction and treatment quality) than to medical facilities and hospital environment [23,25]; the reason why patient's trust in medical service/patient's attitude towards health policy had no significant influence on patient's satisfaction with waiting time in hospital in current medical experience was that long waiting time was a general problem in CHDS and patient's trust in medical
service/patient's attitude towards health policy was no longer important in patient's consideration of satisfaction with waiting time in hospital [17]. On the basis of these findings, this study found the following inspiration for future reform of CHDS on improving patient's overall satisfaction with medical service and sub satisfaction in considering patient's trust in medical service and patient's attitude towards health policy: the increase of patient's trust in medical service and the improvement of patient's attitude towards health policy could induce increases in patient's overall satisfaction with medical service and most kinds of sub satisfaction, so the Chinese government could focus on improving patient's trust in medical service (involving improving patient's interpersonal trust in medical service from the individual's own medical experience and improving public trust in medical service) and improving patient's attitude towards health policy to indirectly but effectively improve patient's overall satisfaction with medical service and sub satisfaction; in order to increase patient's interpersonal trust in medical service from the individual's own medical experience, the most effective way was to improve the quality and reliability of medical service; the practical way for the Chinese government to increase public trust in medical service was to solve the persistent imbalance between the public's high expectation of medical service and the poor actual effect of medical service provided by CHDS; in order to improve patient's attitude towards health policy, more practical health policy on improving medical service should be formulated and put in place for the public; the Chinese government should not only emphasize financial investment in the physical capacity of CHDS; in fact, human resources, especially the number and quality of senior medical professionals in higher-level hospitals, have not increased concurrently to keep pace with either the excessive physical expansion of CHDS or patients' increasing demand for medical service, so the development of human resources in CHDS (especially in higher-level hospitals) deserves much more attention in order to improve patient's satisfaction with doctor-patient interaction and satisfaction with treatment process in considering patient's trust in medical service and patient's attitude towards health policy; the Chinese government should also relax inappropriate administrative intervention in CHDS and use proper and efficient management to encourage hospitals (especially higher-level hospitals) to supply advanced medical services at proper prices to patients in need, in order to improve patient's satisfaction with medical costs in considering patient's trust in medical service and patient's attitude towards health policy, and further alleviate the persistent problem "medical service is expensive and difficult to access" [23]; the problem of long waiting time in hospital was fundamentally caused by limited medical service supply and limited competence of CHDS, and only when medical resources are more abundant and more equitably allocated [26,27,30-32] can patient's satisfaction with waiting time in hospital be improved in considering patient's trust in medical service and patient's attitude towards health policy.
Conclusion This study found that both patient's trust in medical service and patient's attitude towards health policy had significant influences, of different magnitudes, on patient's overall satisfaction with medical service and most kinds of sub satisfaction (involving satisfaction with doctor-patient interaction, satisfaction with treatment process, satisfaction with medical facilities and hospital environment, and satisfaction with medical costs) in current medical experience. And this study found the following inspiration for future reform of CHDS on improving patient's overall satisfaction with medical service and sub satisfaction in considering patient's trust in medical service and patient's attitude towards health policy: patient's interpersonal trust in medical service from the individual's own medical experience, public trust in medical service, and patient's attitude towards health policy should be improved in order to indirectly but effectively improve patient's overall satisfaction with medical service and sub satisfaction; the quality and reliability of medical service should be improved; the persistent imbalance between the public's high expectation of medical service and the poor actual effect of medical service provided by CHDS should be solved; more practical health policy on improving medical service should be formulated and put in place for the public; inappropriate administrative intervention in CHDS should be relaxed; much more attention should be paid to the development of human resources in CHDS; and medical resources should be made more abundant and allocated more equitably. Additional material Additional file 1: Measures of patient's trust in medical service, measures of patient's attitude towards health policy, measures of overall satisfaction and sub satisfaction in current medical experience.
Study of the Intelligent Behavior of a Maximum Photovoltaic Energy Tracking Fuzzy Controller

The Maximum Power Point Tracking (MPPT) strategy is commonly used to maximize the power produced by photovoltaic generators. In this paper, we propose a control method with a fuzzy logic approach that offers significantly high performance in maximum power output tracking: fast power achievement, good stability, and high robustness. We use a fuzzy controller based on a particular choice of combination of inputs and outputs. The choice of inputs and outputs, as well as of the fuzzy rules, is based on the mathematical analysis of derivatives (slopes) for the purpose of finding the optimum. We also show that the best results and responses from the photovoltaic (PV) system can be achieved with the simplest possible fuzzy model, using only three sets of linguistic variables to decompose the membership functions of the inputs and outputs of the fuzzy controller. We compare this controller with conventional perturb and observe (P&O) controllers. A Matlab-Simulink® model is then used to simulate the behavior of the PV generator and power converter, voltage, and current, using both the P&O and our fuzzy logic-based controller. Relative performances are analyzed and compared under different scenarios for fixed or varying climatic conditions.

Introduction

Solar energy conversion using photovoltaic (PV) generators has lately been in accelerated development, for both small and large installations. This clean, quiet, and low-maintenance energy source has seen the largest growth rate among renewables, with a continuous price reduction. Its further development requires improvement of conversion efficiency and reduction of component cost. The electrical energy extracted by photovoltaic generators depends on a complex equation relating the solar radiation, the temperature, and the total resistance of the circuit, which results in a nonlinear variation of the output power P as a function of the circuit voltage V, in the form P = f(V) [1,2]. There is a unique point, under given irradiation and temperature conditions, where the generator produces maximum power, named the MPP (maximum power point). This MPP is reached when the rate of change of power with respect to voltage is equal to zero. The nonlinear relationship of the power output from the PV generator with respect to environmental conditions renders the conversion efficiency of solar generators relatively low, so power extraction optimization becomes a key issue in solar energy conversion [3,4]. This paper focuses on the development of a coupled fuzzy logic-mathematical analysis method as a maximum power point tracking (MPPT) technique to increase the power extracted by the PV generator.

Motivation

Considering the photovoltaic power output with respect to voltage for a particular solar generator under varying irradiation and temperature levels, we note that there is a unique point where maximum power can be harvested (Figure 1) [2,4].
A similar MPP tracking analysis can be performed by considering an I-V curve, as shown in Figure 2. If we consider an irradiation S, a temperature T, and a varying resistive load R_i, then the solar cell provides a short-circuit current I_SC and an open-circuit voltage V_OC, and an MPP can again be identified from the I-V curve. Whatever the approach, P-V or I-V, tracking the gradient variation of I or V enables us to identify the maximum power point of a PV generator [1,3]. In the literature [2], there are a number of MPPT (Maximum Power Point Tracking) techniques used to optimize the efficiency of photovoltaic systems.

Photovoltaic systems are generally connected to static converters (DC-DC) driven by programmed controllers that continuously analyze the power output from the solar generator. MPPT controllers adjust the parameters to extract maximum energy, whatever the load and atmospheric conditions [5]. The MPPT methods portrayed in the different studies use different techniques and algorithms which differ widely in performance, such as convergence speed, implementation complexity, accuracy, and, most importantly, the cost of implementing the whole setup [6]. In the following paragraphs, we briefly recall the principles of some of the most popular MPPT tracking algorithms.

The "Hill Climbing/P&O Method" [7-10]: The principle of this algorithm is to calculate the power provided by the PV panel at time k, following a perturbation of the PV panel voltage obtained by acting on the duty cycle D. This is compared to the previous measurement at time k − 1. If the power increases, we approach the MPP, and the variation of the duty cycle is maintained in the same direction. On the contrary, if the power decreases, we move away from the MPP, so the direction of the change in the duty cycle must be reversed.

The "Incremental Conductance Method" [11,12]: The principle of this algorithm is based on comparing the conductance G = I/V with the increment of the conductance dG to deduce the position of the operating point relative to the MPP. If dG is greater than the opposite of the conductance, −G, the duty cycle is decreased. On the other hand, if dG is lower than −G, the duty cycle is increased. This process is repeated until the MPP is reached.
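As a rough illustration of the incremental conductance rule just described, the following sketch implements one update step. The function name, the fixed step size, and the guard against a zero voltage step are illustrative assumptions, not code from any of the cited works; the sign convention tying dG > −G to a duty-cycle decrease follows the description above.

```python
def incremental_conductance_step(v, i, v_prev, i_prev, duty, step=0.005):
    """One update of the incremental conductance rule (illustrative sketch).

    At the MPP, dI/dV = -I/V, so comparing dG = dI/dV with -G = -I/V tells
    the controller on which side of the MPP the operating point lies.
    """
    dv, di = v - v_prev, i - i_prev
    if abs(dv) < 1e-9:            # voltage unchanged: fall back on the current change
        if abs(di) > 1e-9:
            duty += step if di > 0 else -step
        return duty
    dg = di / dv                  # incremental conductance dI/dV
    g = i / v                     # instantaneous conductance I/V
    if dg > -g:                   # per the convention above: decrease the duty cycle
        duty -= step
    elif dg < -g:                 # otherwise increase it
        duty += step
    return duty                   # dg == -g: at the MPP, hold the duty cycle
```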
The "Fractional Open-Circuit Voltage Method" [2,4]: This method is based on the relation VMPP=α×VOC, where α is a voltage factor depending on the characteristics of the PV cell.To deduce the optimal voltage, the VOC voltage must be measured.As a result, the operating point of the panel is kept close to the MPP by adjusting the panel voltage to the calculated optimal voltage.This is achieved by cyclically acting on the duty cycle to reach the optimum voltage. The "Fractional Short-Circuit Current Method" [6,13]: This technique is based on the relation IMPP=α×ISC, where α is a current factor depending on the characteristics of the PV cell.The optimum operating point is obtained by bringing the current of the panel to the optimum current, changing the duty cycle until the panel reaches the optimum value. Algorithms based on fuzzy logic [3,[14][15][16]: MPPT control techniques based on fuzzy logic have recently been introduced because they offer the advantage of robust control and do not require exact knowledge of the mathematical model of the system.In addition, they improve performances (convergence speed, accuracy, ease of implementation, and low cost). Other MPPT techniques include the "Artificial Intelligence Algorithms" [10,17].These new technology MPPT algorithms are inspired by nature and biological structures.Among them we can Photovoltaic systems are generally connected to static converters (DC-DC) driven by programmed controllers that continuously analyze the power output from the solar generator.MPPT controllers adjust the parameters to extract maximum energy, whatever the load and atmospheric conditions are [5].The MPPT methods portrayed in the different studies use different techniques and algorithms which widely differ in performance, such as convergence speed, implementation complexity, accuracy, and most importantly, the cost of implementation of the whole setup [6].In the following paragraphs, we briefly recall the principles of some of the most popular MPPT tracking algorithms. The "Hill Climbing/P&O Method" [7][8][9][10]: The principle of this algorithm is to calculate the power provided by the PV panel at time k, following a disturbance effect on the voltage of the PV panel while acting on the duty cycle, D. This is compared to the previous measurement at the moment k − 1.If the power increases, we approach the MPP, and the variation of the duty cycle is maintained in the same direction.On the contrary, if the power decreases, we move away from the MPP.So, we have to reverse the direction of the change in the duty cycle. The "Incremental Conductance Method" [11,12]: The principle of this algorithm is based on the knowledge of the value of the conductance G = I/V on the increment of the conductance dG to deduce the position of the operating point relative to the MPP.If dG is greater than the opposite of the conductance −G, the duty cycle is decreased.On the other hand, if dG is lower than −G, the duty cycle is increased.This process is repeated until reaching the MPP. The "Fractional Open-Circuit Voltage Method" [2,4]: This method is based on the relation V MPP = α × V OC , where α is a voltage factor depending on the characteristics of the PV cell.To deduce the optimal voltage, the V OC voltage must be measured.As a result, the operating point of the panel is kept close to the MPP by adjusting the panel voltage to the calculated optimal voltage.This is achieved by cyclically acting on the duty cycle to reach the optimum voltage. 
The "Fractional Short-Circuit Current Method" [6,13]: This technique is based on the relation I MPP = α × I SC , where α is a current factor depending on the characteristics of the PV cell.The optimum operating point is obtained by bringing the current of the panel to the optimum current, changing the duty cycle until the panel reaches the optimum value. Algorithms based on fuzzy logic [3,[14][15][16]: MPPT control techniques based on fuzzy logic have recently been introduced because they offer the advantage of robust control and do not require exact knowledge of the mathematical model of the system.In addition, they improve performances (convergence speed, accuracy, ease of implementation, and low cost). Other MPPT techniques include the "Artificial Intelligence Algorithms" [10,17].These new technology MPPT algorithms are inspired by nature and biological structures.Among them we can mention the "particle-swarm-optimisation-based MPPT" [5,18], "genetic algorithms" [19], "neural networks" [12,20], and the "hybrid methods" [5,21].According to the literature [4,[22][23][24], we used a comparative study in Table 1 between the most used methods in terms of technical knowledge of PV panel parameters, complexity, speed, and accuracy.This paper is organized as follows: Section 3 is reserved for the study of the photovoltaic system, starting with a presentation of the photovoltaic panel.Next we explain all the parts constituting the architecture and the functioning of a PV-MPPT system.To improve the MPPT techniques' relative performances (convergence speed, accuracy, ease of implementation, and low cost), we have developed a control method using fuzzy logic that has been applied to a step-up boost MPPT for PV generators in Section 4. In Section 5, we talk about the most popular conventional MPPT controller based on the P&O algorithm.These techniques are studied and analyzed both theoretically and by simulation using Matlab-Simulink ® (R2018a, Mathworks, Natick, MA, USA) in Section 6.Then, a comparison is presented of the performance of both methods. According to the literature [4,[22][23][24], we used a comparative study in Table 1 between the most used methods in terms of technical knowledge of PV panel parameters, complexity, speed, and accuracy.This paper is organized as follows: Section 3 is reserved for the study of the photovoltaic system, starting with a presentation of the photovoltaic panel.Next we explain all the parts constituting the architecture and the functioning of a PV-MPPT system.To improve the MPPT techniques' relative performances (convergence speed, accuracy, ease of implementation, and low cost), we have developed a control method using fuzzy logic that has been applied to a step-up boost MPPT for PV generators in Section 4. In Section 5, we talk about the most popular conventional MPPT controller based on the P&O algorithm.These techniques are studied and analyzed both theoretically and by simulation using Matlab-Simulink ® (R2018a, Mathworks, Natick, MA, USA) in Section 6.Then, a comparison is presented of the performance of both methods. 
Challenges in Exploiting the Maximum Energy from the Photovoltaic System

Our analysis is performed on the most sophisticated and widespread real photovoltaic cell model, consisting of two diodes [1,25], as illustrated in Figure 3. Equation (1) expresses the mathematical relationship of the circuit output current in terms of the circuit parameters [25], reconstructed here in the standard two-diode form:

$$ I = I_{ph}(T) - I_{s1}(T)\left[\exp\left(\frac{q(V + I R_s)}{n_1 k T}\right) - 1\right] - I_{s2}(T)\left[\exp\left(\frac{q(V + I R_s)}{n_2 k T}\right) - 1\right] - \frac{V + I R_s}{R_p} \quad (1) $$

where I and V are the output current and output voltage of the photovoltaic cell, S is the irradiance, T is the absolute temperature in kelvin, I_ph(T) is the generated photo-current, I_s1 and I_s2 are the diode saturation currents and the reverse diode saturation currents, n_1 and n_2 are the diode ideality factors, R_s is the series resistance, and R_p is the parallel resistance. E_g is the band-gap energy of the semiconductor, q is the elementary charge constant (1.602 × 10⁻¹⁹ C), k is the Boltzmann constant (1.38 × 10⁻²³ J/K), K_1 = 1.2 A/cm²K³, and K_2 = 2.9 × 10⁵ A/cm²K^(5/2).

Equation (1) leads to a generalized equation of the entire photovoltaic panel with z photovoltaic cells connected in series [1,25]. From Equation (5), we note that the output current of a photovoltaic panel connected to a load R_i is highly dependent on the I-V variation of this load (Figure 2). Furthermore, Equation (5) illustrates that the I-V and P-V characteristics of the PV module vary not only with the connected load but also with temperature and solar irradiance. Therefore, for each temperature and irradiance condition, it is necessary to track the corresponding MPP. Figure 1 illustrates the existence of an MPP on the P-V characteristic of the PV generator, with variable irradiance and temperature, according to Equation (5).

To force the PV system to operate in its MPP region according to the incident irradiation and temperature, it is necessary to include a maximum power point tracking (MPPT) device between the PV module and the load (Figure 4). The MPPT device consists of a DC-DC converter, which can be of buck, boost, or buck-boost type [23,26]; the step-up boost converter has been chosen in this work. The transducer captures the instantaneous values of current I and voltage V from the PV array, which are used by the computing circuit to calculate the inputs of the fuzzy logic controller. The control output is injected into another computation circuit to determine the duty ratio D, which is finally used by the gate driver to directly control the MOSFET of the boost converter (Figure 4).
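Equation (1) is implicit in I (the current appears inside the exponentials through the I·R_s term), so evaluating the model numerically requires a root-finder. The sketch below uses SciPy's brentq on the residual of the reconstructed equation; all parameter values shown are illustrative placeholders for a silicon-like cell, not the panel parameters used in this paper.

```python
import numpy as np
from scipy.optimize import brentq

Q = 1.602e-19  # elementary charge (C)
K = 1.38e-23   # Boltzmann constant (J/K)

def two_diode_current(v, t, i_ph, i_s1, i_s2, n1, n2, r_s, r_p):
    """Solve the implicit two-diode equation (1) for the cell current at voltage v."""
    def residual(i):
        v_d = v + i * r_s  # voltage across the diodes and the shunt resistance
        return (i_ph
                - i_s1 * (np.exp(Q * v_d / (n1 * K * t)) - 1.0)
                - i_s2 * (np.exp(Q * v_d / (n2 * K * t)) - 1.0)
                - v_d / r_p
                - i)
    # residual(i) is monotonically decreasing in i, so this bracket holds one root
    return brentq(residual, -i_ph, 2.0 * i_ph)

# Illustrative parameters at 25 degrees C (not the paper's values):
i_out = two_diode_current(v=0.5, t=298.15, i_ph=3.8, i_s1=1e-9, i_s2=1e-6,
                          n1=1.0, n2=2.0, r_s=5e-3, r_p=10.0)
```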
The DC-DC converter is included between the array of photovoltaic cells and the energy storage unit (load) to match the voltage of the solar array with the battery voltage. If the duty ratio D of the converter is varied by a control circuit so as to constantly adjust the operating voltage of the solar panel to its point of maximum power V_MPP, the converter operates as a maximum power point tracker, MPPT (Figure 5).

The DC-DC switching converter consists of capacitors, inductors, and switches. Ideally, the power consumption of all these devices is very low, which is the reason for the efficiency of DC-DC switching converters [25,27]. A metal oxide semiconductor field effect transistor (MOSFET) is used as the switching semiconductor device since it is easy to control using a pulse-width modulation (PWM) signal generated by the controller. During the operation of the converter, the switch is gated at a constant frequency f with an on-time DT and an off-time (1 − D)T, where T is the switching period and D is the duty ratio of the switch (D ∈ ]0,1[) (Figure 6). Figure 7 illustrates the step-up boost converter circuit used in the MPPT technique. The mathematical relation of the step-up boost converter used in the Matlab-Simulink model is given as Equation (6). It is understood from Equation (6) that an increase in the duty ratio results in an increase of the output voltage of the boost converter, and vice versa. Hence, the MPPT device instantly controls the decrease and increase of the duty ratio D in order to push the operating point to the MPP (Figure 5).
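Equation (6) is not reproduced legibly in this copy; for an ideal boost converter in continuous conduction mode, the volt-second balance across the inductor gives the standard conversion ratio below, which matches the behaviour described in the text (output voltage rising with D) and is presumably the relation intended:

$$ V_o = \frac{V_{pv}}{1 - D}, \qquad D \in \left]0, 1\right[ \qquad (6) $$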
Methodology

The mathematical study of the P-V characteristic illustrated in Figure 5 leads us to the choice of the following MPPT algorithm:

(1) The analysis of the slope m(p_i) at the point p_i on the P-V characteristic (Figure 5) is used to locate the actual operating point p_i. Based on this information, the controller decides whether to increase or decrease the voltage by varying the duty ratio D.

(2) The analysis of the rate of change of the slope at the point p_i, ∆m(p_i) = s(p_i), expresses how fast the operating point is approaching or moving away from the MPP. This parameter is also included in the controller for faster MPP searching. A sketch of how these two quantities can be computed from successive samples is given below.
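A minimal sketch of how the two controller inputs could be derived from successive P-V samples; the function name and the guard against a zero voltage step are illustrative assumptions, not part of the paper's Simulink model.

```python
def controller_inputs(p, v, p_prev, v_prev, m_prev):
    """Return (m, s): the P-V slope at the current point and its rate of change."""
    dv = v - v_prev
    m = (p - p_prev) / dv if abs(dv) > 1e-9 else 0.0  # slope m(p_i) = dP/dV
    s = m - m_prev                                    # s(p_i) = delta m
    return m, s
```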
The Configuration of the Fuzzy Controller

Fuzzy systems are good models for nonlinear systems. Fuzzy models are based on fuzzy rules, which provide information about uncertain nonlinear systems [28]. A fuzzy logic controller consists of three main operations: fuzzification, inferencing, and defuzzification [29,30]. The input data are fed into a fuzzy logic-based system where physical quantities are represented as linguistic variables with appropriate membership functions. These linguistic variables are then used in the antecedents (If-part) of a set of fuzzy If-Then rules within an inference engine to produce a new set of fuzzy linguistic variables, the consequent (Then-part) [31]. Figure 8 illustrates the schematic representation of the fuzzy controller.

Fuzzification

The control circuit instantaneously measures the voltage V(i) and current I(i) of the photovoltaic generator and calculates the power as P(i) = I(i) × V(i). As explained in Section 4.1, the controller analyses input_1(i), which represents the slope of the current operating point on the P-V curve (m(p_i)), and input_2(i), which represents the rate at which the point p_i approaches or moves away from the MPP. The fuzzy controller takes instantaneous measurements of these two input values and then decides and calculates the output ∆D(i), which is the change of the duty ratio of the MOSFET. The input and output variables of the fuzzy controller are expressed in terms of membership functions. The determination of the ranges of the fuzzy linguistic variables that compose the membership functions of the inputs and output is based on the experience and observations of automation specialists who work with the PV system [31,32], as well as on the right choice of the rules of inference.
In other words, our observations suggest that the value of the slope at a point p_i on the curve in Figure 5 (which represents input_1) will be positive, negative, or zero (zero at the MPP). The value of the change of the slope between two points p_i and p_i−1 on the same curve (which represents input_2) will likewise be positive, negative, or zero. The fuzzy controller then decides to increase, decrease, or stabilize the command output, which is ∆D. Therefore, in order to achieve the best possible results from our simulation experiments, and after several calculations and tests on our PV system, we chose the decomposition of the membership functions shown in Figure 9.

We propose to define the membership functions of the inputs and the output in terms of a set of linguistic variables: (1) Input_1: N: Negative, Z: Zero, P: Positive; (2) Input_2: N: Negative, Z: Zero, P: Positive; (3) Output: D: Decrease, S: Stabilize, I: Increase.

The real values of input_1, input_2, and the output are normalized by an input scaling factor [32,33]. In this system, the scaling has been designed as follows:

• Input_1 values are between −0.1 and 0.1;
• Input_2 values are between −100 and 100;
• Output values are between −0.1 and 0.1.

In the literature [31], different forms of membership functions exist: trapezoidal, triangular, rectangular, bell-shaped, concave shapes, etc. Triangular and trapezoidal shapes are used in this work as membership functions. The choice of the functions is also based on the user's experience. Membership functions need to overlap to enable partial inclusion of the same linguistic variable in two different fuzzy sets at the same time [1,19,31].

The Inference Method

The inference method works in such a way that a change in the duty ratio of the boost chopper drives the voltage towards V_MPP, corresponding to the MPP. Following the analysis of an exhaustive number of combinations of input variables and an analysis of the corresponding outputs, we propose the decision inference rules illustrated in Figure 10. In this work, we have used the Mamdani method [31] for fuzzy inference with the max-min fuzzy composition law, as illustrated in Figure 11.
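To make the pipeline concrete, here is a self-contained sketch of triangular/shoulder membership functions over the normalized universes given above, together with Mamdani max-min inference. The 3 × 3 rule table is a plausible reconstruction, since Figure 10 is not reproduced here: it maps a positive P-V slope to a duty-ratio decrease (raising the panel voltage of the boost stage), a negative slope to an increase, and a zero slope to stabilization; the paper's actual rules may well refine this using input_2.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership with feet a, c and peak b; a == b or b == c gives a shoulder."""
    left = 1.0 if a == b else (x - a) / (b - a)
    right = 1.0 if b == c else (c - x) / (c - b)
    return max(0.0, min(left, right, 1.0))

# Three linguistic sets per variable, on the normalized universes from the text.
IN1 = {'N': (-0.1, -0.1, 0.0), 'Z': (-0.1, 0.0, 0.1), 'P': (0.0, 0.1, 0.1)}
IN2 = {'N': (-100, -100, 0), 'Z': (-100, 0, 100), 'P': (0, 100, 100)}
OUT = {'D': (-0.1, -0.1, 0.0), 'S': (-0.1, 0.0, 0.1), 'I': (0.0, 0.1, 0.1)}

# Hypothetical rule table (the original Figure 10 is not legible in this copy).
RULES = {('P', 'N'): 'D', ('P', 'Z'): 'D', ('P', 'P'): 'D',
         ('Z', 'N'): 'S', ('Z', 'Z'): 'S', ('Z', 'P'): 'S',
         ('N', 'N'): 'I', ('N', 'Z'): 'I', ('N', 'P'): 'I'}

def infer(m, s, grid=np.linspace(-0.1, 0.1, 201)):
    """Mamdani max-min inference: aggregated output membership sampled on a grid."""
    agg = np.zeros_like(grid)
    for (l1, l2), lo in RULES.items():
        w = min(tri(m, *IN1[l1]), tri(s, *IN2[l2]))  # min: AND of the antecedents
        clipped = np.minimum(w, [tri(u, *OUT[lo]) for u in grid])  # clip consequent
        agg = np.maximum(agg, clipped)               # max: aggregate across rules
    return grid, agg
```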
Defuzzification

Following the inferencing operation, the controller output is expressed as a fuzzy set over the output linguistic variable. Defuzzification methods are then used to decode this fuzzy set into a numerical value. In this work, we use the centroid method [31], which determines the crisp controller output as the center of gravity of the final combined fuzzy set.
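A minimal sketch of centroid (center-of-gravity) defuzzification applied to the aggregated membership from the inference sketch above; grid-sampled numerical integration is an implementation choice here, not necessarily the paper's.

```python
import numpy as np

def centroid(grid, mu):
    """Crisp output as the center of gravity of the aggregated fuzzy set."""
    area = np.trapz(mu, grid)
    return 0.0 if area == 0.0 else np.trapz(mu * grid, grid) / area

# Continuing the inference sketch: grid, agg = infer(m=0.04, s=10.0)
# delta_d = centroid(grid, agg)   # the duty-ratio change applied to the MOSFET
```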
Extract of the MPP Using the Perturb and Observe (P&O) and Fuzzy Methods

Since the 1970s, the P&O (perturb and observe) method has been the most widely used approach in MPPT [5,12]. There are several variants of the P&O method, including the one described in Figure 12, whose results are compared with our fuzzy logic-based MPPT model. The P&O method uses an algorithm that increases or decreases the duty ratio until the MPP is reached. As illustrated in Figure 12, V(K) and I(K) are continuously monitored, and the array output power P(K) is calculated. This instantaneous value P(K) is compared with the previously measured value P(K − 1). If the two measured values are identical, the maximum power point has been reached, and no change is applied to the duty ratio. If the output power and the voltage V(K) have changed between two successive measurements in the same direction, the duty ratio is increased. If ∆P(K) increases while V(K) decreases, or vice versa, the duty ratio is decreased [1,25].

In this paper, we compare the MPPT performance of the traditional P&O method with our fuzzy logic-based method. Figure 13 illustrates the fuzzy-based MPPT method and Figure 14 the P&O MPPT method, as implemented in Simulink-Matlab®.
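The P&O flow just described reduces to a few lines of logic; this sketch is a generic rendering of it, with an illustrative fixed perturbation step rather than the value used in the paper's Simulink model.

```python
def po_step(p, v, p_prev, v_prev, duty, step=0.005):
    """One perturb-and-observe update following the flow of Figure 12."""
    dp, dv = p - p_prev, v - v_prev
    if dp == 0:
        return duty              # power unchanged: assume the MPP is reached
    if (dp > 0) == (dv > 0):     # power and voltage moved in the same direction
        return duty + step
    return duty - step           # they moved in opposite directions
```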
Simulation Results for Fixed Climatic Conditions To evaluate the fuzzy logic-based MPPT system, we analyzed its power extraction capabilities and stability versus the traditional P&O controller.In this particular simulation, the PV model described previously has been simulated with fuzzy logic and P&O controllers for fixed climatic conditions, i.e., an irradiance of 1000 W/m 2 and temperature of 25 °C .The results are illustrated in Figure 15.For PV power output, the fuzzy logic-based MPPT method achieves maximum power output faster than the P&O controller (2.4 s compared to 12.3 s).Moreover, the fuzzy logic-based MPPT controller shows better performance not only in set point achievement, but also in stability and robustness (mitigation of power fluctuation).The PV generator reaches its maximum stable Simulations & Discussions The fuzzy logic-based MPPT model has been built to increase efficiency for variable climatic conditions.Hence, the ambient temperature and incident irradiation on the PV panel is defined as an array of instantaneous input values.The mathematical representation of the PV system is defined in Equations ( 2)-( 5), implemented together with the following parameters: (1) The number of PV modules connected in series is 14; (2) the number of photovoltaic cells in each PV module, connected in series z = 36; the initial value of duty ratio was 0.1. Simulation Results for Fixed Climatic Conditions To evaluate the fuzzy logic-based MPPT system, we analyzed its power extraction capabilities and stability versus the traditional P&O controller.In this particular simulation, the PV model described previously has been simulated with fuzzy logic and P&O controllers for fixed climatic conditions, i.e., an irradiance of 1000 W/m 2 and temperature of 25 • C. The results are illustrated in Figure 15.For PV power output, the fuzzy logic-based MPPT method achieves maximum power output faster than the P&O controller (2.4 s compared to 12.3 s).Moreover, the fuzzy logic-based MPPT controller shows better performance not only in set point achievement, but also in stability and robustness (mitigation of power fluctuation).The PV generator reaches its maximum stable power output just after a minor at t = 2.1 s, and the output remains stable within a 0.0001 W range.In the meantime, the P&O controller is slower to reach its set point and is subject to significant oscillations prior to stability achievement.Moreover, a steady regime is subject to a 0.0002 W continuous oscillation.Similar behavior is observed with the PV voltage output, while the P&O controller achieves its maximum set point after 15 s, compared to a rapid 2.4 s with a fuzzy logic controller.Furthermore, the P&O controller is subjected to an important overshoot prior to stabilization with a continuous 0.02 V oscillation, compared to virtually no oscillation in the case of our fuzzy controller.The same trend is noticed with the converter output voltage, PV module current, and converter current, while the fuzzy logic-based controller shows amazingly better performance than the P&O controller in speed for maximum power achievement, stability, and robustness. 
Performance improvement is the result of a faster and more appropriate variation of the duty ratio in the case of the fuzzy logic controller.

Simulation Results for Changing Climatic Conditions

Fixed temperature at 25 °C and fast increase of irradiance from 500 W/m² to 1100 W/m²: In this case, the irradiance was quickly increased from S = 500 W/m² to S = 1100 W/m² via a step function at t = 30 s. As illustrated in Figure 16, the fuzzy logic-based MPPT method shows much better performance than the P&O controller. The fuzzy controller responds to the irradiance change virtually instantaneously and regains stability with remarkable robustness in PV module power and voltage output (reduced power oscillation). The P&O controller takes longer to achieve stability, which occurs after signal oscillations in the case of the PV power output and after an overshoot in the case of the PV voltage output. We note that the duty ratio variation by the fuzzy logic controller is much more rapid than that of the P&O controller when the irradiance change is detected. The duty ratio gradient decreases in the case of the fuzzy controller, as compared to a constant gradient in the case of the P&O controller. This probably helps with both the speed of maximum power achievement and the mitigation of oscillation and overshoot.

Fixed temperature at 25 °C and slow increase of irradiance from 500 W/m² to 650 W/m²: In this case, we evaluate the relative performance of the P&O and fuzzy logic-based controllers for a fixed temperature and a slow irradiance increase from 500 W/m² to 650 W/m². As illustrated in Figure 17,
the irradiance is slowly and continuously increased from S = 500 W/m² at t = 20 s up to S = 650 W/m² at t = 80 s. In this case, both controllers show almost similar performance.

Fixed irradiance at 1000 W/m² and fast decrease of temperature from 40 °C to 10 °C: In this case, the temperature is decreased quickly via a step function at t = 30 s while the irradiance is kept fixed at 1000 W/m². We make observations similar to the case of a quick irradiance increase at fixed temperature: the fuzzy-based MPPT controller reacts quickly to the change via a much more aggressive duty ratio change (Figure 18). This leads to faster maximum power output achievement, with stability comparable to that of the P&O controller.

Fixed irradiance at 1000 W/m² and slow increase of temperature from 25 °C to 30 °C: In this case, as seen in Figure 19, both the P&O and fuzzy logic controllers show comparable performance in PV power output achievement and stabilization. However, a notable difference appears in the PV voltage output: the fuzzy controller shows no significant overshoot compared to the P&O controller. Moreover, the fuzzy logic controller shows better performance in the voltage output of the converter.
Conclusions

The cost of solar energy is a major issue when it comes to its potential for greater development. Maximum power extraction is, therefore, an important parameter that influences the total production of PV systems and enables better payback of PV projects. In this paper, we have presented a fuzzy logic method that achieves faster and more stable power output at the MPP of PV modules. In order to illustrate the performance of this controller, a Matlab-Simulink® model was built, and simulations were run for different operation scenarios. The results were compared with those of commonly used P&O controllers. The simulation results demonstrated higher efficiency in maximum power tracking for the fuzzy logic controller and showed that the most significant performance differences were achieved with rapidly varying parameters that influence power output (temperature, irradiance). Moreover, the fuzzy logic-based controller, as compared to the P&O controller, shows better performance in maximum power tracking time delay, stability, and robustness in all cases. The better stability and robustness of the fuzzy logic-based controller offer major advantages in the mitigation of power fluctuation. The fuzzy logic algorithm is robust and efficient: it works at the optimal point without oscillations and is characterized by good transient behavior. In addition, the implementation of this type of algorithm is easier than that of conventional algorithms.
We can conclude that the use of fuzzy logic for MPPT control presents a very interesting advantage, because it consistently accelerates the pursuit of the MPP, improves stability by eliminating steady-state oscillations, and increases robustness. These results are obtained after multiple tests guided by the experience of the engineer who designs the fuzzy regulator; the disadvantage is that, for each model of photovoltaic system, the engineer must study and specify the parameters, membership functions, and rules of a dedicated fuzzy controller in order to reach the MPP. As a perspective, we propose a generalized study aiming, if possible, at a global fuzzy model applicable to any photovoltaic system model. The analysis in this paper should be useful for MPPT users, designers, and commercial PV manufacturers.

Figure 1. Variation of the maximum power point (MPP) with variations of irradiation and temperature.
Figure 2. Load effect on I-V photovoltaic characteristics.
Figure 3. The two-diode circuit model of a photovoltaic cell.
Figure 5. The direction change of the duty ratio D for tracking the MPP.
Figure 6. Output voltage v_o(t) of the ideal DC-DC switching converter.
Figure 7. The ideal boost converter circuit.
Figure 9. Membership functions of the two entries, input_1 and input_2, and of the output, with three sets of linguistic variables.
Figure 11. Max-min composition for the calculation of the ∆D output.
Figure 13. PV total system Simulink representation with a fuzzy logic controller.
Figure 14. Details of the MPPT subsystems of the P&O controller.
Figure 16. Simulation results with a fast increase in irradiance at t = 30 s from S = 500 W/m² to S = 1100 W/m² at constant temperature T = 25 °C.
Figure 18. Simulation results with a fast decrease of temperature at t = 30 s from T = 40 °C to T = 10 °C, with a constant irradiance of S = 1000 W/m².
Figure 19. Simulation results with a slow increase in temperature from T = 25 °C (t = 20 s) to T = 30 °C (t = 80 s), with a fixed irradiance of S = 1000 W/m².
Table 1. A comparative table of MPPT characteristics.
Experimental study of primary school students' independent reading based on the network teaching platform

The development of information technology, especially the rise of online teaching platforms, has changed the way people read and the methods they use. Under this influence, primary school students' independent reading has also changed. However, because of limited self-control and understanding, primary school students face certain difficulties when using online teaching platforms for independent reading. This paper summarizes the advantages and disadvantages of distance learning based on the network teaching platform. Considering the current situation and existing problems of independent reading in our school, a one-year experimental study of independent reading using the WeChat campus platform was carried out with our fifth-grade students, in which an independent reading mode was introduced. The results show that this reading mode can significantly improve students' independent reading ability.

Introduction

Reading is a personalized behaviour for students, and reading teaching is a process of dialogue among students, teachers, textbook editors, and texts. Independent learning plays a very important role in reading, and learners with strong independent learning abilities also have higher reading abilities [1]. Zimmerman [2] holds that independent learning is a process in which learners set their own learning goals and plans, continuously monitor and adjust their own learning processes, choose and use various strategies to adjust their learning behaviours, and effectively use material and social resources in the learning environment to achieve their learning goals. Independent reading is an embodiment of independent learning in the reading process.

Owing to the development of information technology, remote education based on online teaching platforms has entered people's lives. Online teaching platforms have become an important educational resource, for example open universities, Jones Online University, learning spaces, Blackboard platforms, and Moodle platforms, and they represent the direction of future education development. Sadeghi [3] reviewed the research on distance education, stated the advantages and disadvantages of distance learning, and gave suggestions for improving it, pointing out that in distance education teachers should upload sufficient learning and testing resources and provide more support to students [3]. Biwer et al. [4] conducted a questionnaire survey on the resource-management strategies and emergency distance-learning adaptation of 1,800 university students and pointed out that emergency distance learning makes it difficult for university students to regulate their attention, effort, and time; that they often lack motivation; and that they need to invest more time and energy in self-study. Although remote education based on online teaching platforms brings many advantages, it also has many disadvantages. The advantages of distance education include flexible learning time and location [6], low learning costs [7], no commuting [6], flexible choices [8], saved learning time [7], and the ability to earn money while learning [8].
However, the disadvantages of distance education include easy distraction [6-8], high technical requirements [8], less interaction among classmates [9,10], difficulty in maintaining contact with mentors [11], online degrees being unacceptable for employment [6], and inability to regulate attention [4]. To overcome these disadvantages, students need strong learning autonomy [12,13]. For primary school students, independent reading based on the network teaching platform is even more difficult because of their weaker motivation, lower self-control, and limited ability to choose reading resources. Overcoming these disadvantages requires further research.

This article first analyses the current situation and existing problems of primary school students in our school using the WeChat campus platform for independent reading. Based on the existing problems, an independent reading mode is introduced, and a one-year experimental study of this reading mode is conducted with our school's fifth-grade students. After one year, the students' independent reading situation is re-examined. The results show that this reading mode can effectively improve students' independent reading ability.

The current situation and existing problems of independent reading using the WeChat campus platform in our school

To understand the current situation of independent reading among primary school students in our school using the WeChat campus platform, we randomly selected 100 fifth-grade students for a questionnaire survey. Figure 1 shows the content and results of the survey. As shown in Fig. 1, the questionnaire contains 6 questions covering 4 aspects: reading methods, reading awareness, reading motivation, and reading resources. Based on the results of the questionnaire, we identified four problems with independent reading using the WeChat campus platform in our school.

Independent reading methods are relatively simple

In the "Reading methods" question, 40% of students choose to read only without writing, 35% choose to outline the key points, 15% choose to label the key points, and only 10% choose to take reading notes. Students' reading methods are thus relatively simple, with the majority taking no notes or simply outlining the key points.

Students' awareness of independent reading is relatively weak

In the "Do you adjust your reading behaviour while reading?" question, about 75% of students have no reading goal before reading. Only 25% of students can adjust their reading behaviour during reading and summarize and reflect on it. This indicates that, without help, students' awareness of independent reading is relatively weak.

Students' motivation for independent reading is relatively insufficient

In the "Do you like reading?" question, about 20% of students love reading, about 25% like it, about 45% are neutral, and about 10% do not like it. When asked why they read, 30% of students read because they like it, 35% because their teachers require it, and 35% because their parents require it. Students have a positive attitude towards reading, but their motivation for reading needs to be improved.

Students' resources for independent reading are relatively lacking

In the "Do you subscribe to newspapers or educational magazines?" question, only 20% of students have been subscribing to newspapers and study magazines, 25% have subscribed before, and 55% have never subscribed.
While, in the "How many extracurricular books do you have?" question, 68% of students have less than 50 extracurricular books, 22% of students have less than 100 extracurricular books, and 10% of students have more than 100 extracurricular books. Therefore, there are relatively few resources available for students' independent reading. Pre-reading-Plan-Reading-Evaluation reading mode To improve our school's students' independent reading ability based on online teaching platforms, we introduce a Prepare-Plan-Read-Evaluate (PPRE) reading mode (see Fig. 2). This reading mode is divided into four stages: the first stage is the pre-reading resource preparation stage, where reading resources are designed and provided based on the students' learning dynamics, excellent reading materials are selected, and reading guidelines are published; the second stage is the reading planning stage, where students analyse their current reading level, select suitable reading resources, determine reading goals, and plan reading time; the third stage is the student reading adjustment stage, where the platform records students' independent reading data, timely feedback on reading situations, and teachers and students provide reading guidance. Students can perform self-adjustments based on feedback; and the fourth stage is the evaluation and update after reading stage, which mainly includes the results check after reading, teacher, self, and peer evaluations, as well as reading dynamic updates. It should be notice that, this independent mode was selected based on our current situation and existing problems of independent reading using the WeChat campus platform in our school. The pre-reading resource preparation can help to selected excellent reading materials for the primary school students and give their guides for the reading. This could also rich the learning resources. In the reading planning, the network teaching platform and the teachers can help the primary school students select reading resources and identify reading goals as well as plan the reading time. During the reading, the network teaching platform can record the students' reading data and feedback to the student, help them to improve their reading methods. What is more, in the reading and evaluation stage, both class mates and teachers can interactive with the student, give their suggestions, appreciations, as well as communications. This will help the students improve their reading awareness and motivations. Results and discussion During the study, we optimized the online teaching platform and conducted comparative research on students' motivation in independent reading, awareness of independent reading, and methods of independent reading using the questionnaire in Fig. 1. This questionnaire was also used to evaluate the students' independent reading in the middle of the experimental study. The corresponding results are shown in Figure 3, where the data during the study was obtained after six months of research. From the figure, students underwent significant changes after one year of research. Independent reading methods has been improved Through a questionnaire survey of students, it was found that students had more and easier access to independent reading resources on the online teaching platform. Before the study, only 20% of students had access to independent reading resources, which increased to 68% during the middle of the study. After the study, the percentage grew to 91%. 
Before the study, students lacked independent reading resources and could not read without relevant resources. After the study, most students could obtain independent reading resources, making reading much easier. This can be attributed both to the improvement in their reading motivation and to the excellent resources selected by the network teaching platform and the teachers. Students' motivation for independent reading improved The online teaching platform does its best to meet students' needs for achievement in independent reading, helping them build confidence, rely on their own efforts to complete tasks, gain satisfaction from knowledge, and experience the joy of success. Students can maintain a strong interest in learning during independent reading and stimulate their internal learning needs, enabling each student to learn how to learn and achieving the goal of being willing to learn, enjoying learning, being able to learn, and learning well. Fig. 3 Reading resources, reading motivation, reading awareness, and reading methods compared before, in the middle of, and after the experimental study Students' awareness of independent reading has been improved Students' awareness of independent reading improved significantly over the course of the experimental study. After the study, students set reading goals, monitor their reading behaviour, identify reading problems, adjust their reading strategies, and develop the habit of independent reading; these are concrete manifestations of improved reading awareness. Students need to invest more time, thought, and effort in the process of reading, but good reading habits will be lifelong wealth and will have a significant impact on their ability to read independently. The methods of independent reading have become diversified Before the experimental study, the reading methods students used in independent reading were "reading without writing" and "outlining the key points". After the experimental study, the reading methods students used became more diversified, including skimming, careful reading, and the summary method; moreover, students can now choose appropriate reading methods according to their own learning needs. Because of the limited research time and conditions, the subjects of this study were limited to students in one grade of a single school, and the sample size was small. In addition, teachers and parents also play a crucial role in students' independent reading on the online teaching platform; as observers or guides, they could provide further suggestions and countermeasures. It is hoped that further research can be conducted in these areas in the future. Conclusion Considering the disadvantages of distance learning based on network teaching platforms, a PPRE independent reading mode was introduced in our one-year experimental study of independent reading by fifth-grade students using the WeChat campus platform. The study showed that this reading mode can significantly improve students' reading motivation, reading awareness, and reading methods. In addition, the students' reading resources also improved after the experimental study. We believe this PPRE independent reading mode can also be used with students in other grades and schools.
More theoretical and experimental studies of independent reading using network teaching platforms should be carried out in the future.
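The headline questionnaire percentages quoted above can be collected side by side; a minimal sketch (the figures are transcribed from this paper's reported survey results, and the three waves are labelled before/middle/after as in Fig. 3):

```python
# Reading-resource access across the three survey waves, as quoted above.
waves = {"before": 0.20, "middle": 0.68, "after": 0.91}

for wave, share in waves.items():
    # simple text bar: one '#' per 5 percentage points
    print(f"{wave:>6}: {share:5.0%} {'#' * round(share * 20)}")
```

This only restates the quoted aggregate figures; the per-student responses were not published.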
2023-05-20T15:06:34.898Z
2023-04-28T00:00:00.000
{ "year": 2023, "sha1": "f97297647eeab545b20a871bc80f60c0550eaeeb", "oa_license": "CCBY", "oa_url": "https://www.shs-conferences.org/articles/shsconf/pdf/2023/17/shsconf_clec2023_01004.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "e3e0dea88c9fc268a65fafd998dc228b71d6375b", "s2fieldsofstudy": [ "Education" ], "extfieldsofstudy": [] }
119166905
pes2o/s2orc
v3-fos-license
Duality of reduced density matrices and their eigenvalues For states of quantum systems of $N$ particles with harmonic interactions we prove that each reduced density matrix $\rho$ obeys a duality condition. This condition implies duality relations for the eigenvalues $\lambda_k$ of $\rho$ and relates a harmonic model with length scales $l_1, l_2, \ldots, l_N$ with another one with inverse lengths $1/l_1, 1/l_2, \ldots, 1/l_N$. Entanglement entropies and correlation functions inherit duality from $\rho$. Self-duality can only occur for noninteracting particles in an isotropic harmonic trap.

I. INTRODUCTION

Duality is an important concept in different branches of physics. It relates physical quantities and physical behavior of two different systems with each other. Prominent examples are the particle-wave duality in quantum mechanics and the duality of Maxwell's electrostatic and magnetostatic theory obtained by a Lorentz transformation from a rest frame to a moving one. The particle-hole duality (see e.g. [1]) for models of electrons in solid state physics and the Kramers-Wannier duality [2] in statistical physics are further examples. The latter relates the free energy of an Ising model with nearest-neighbor coupling J on a 2-dimensional lattice L to the free energy of an Ising model with coupling J* on the dual lattice L*. In particular, it links the low-temperature behavior of the lattice system L to the high-temperature properties of the dual system. For a self-dual lattice, i.e. L = L*, like the square lattice, the duality allows one to determine the critical point for the phase transition of the corresponding Ising model [2]. An interesting duality has been discovered recently [3,4] for the Moshinsky atom [5], a two-electron model in three dimensions described by a hamiltonian $\hat H$ with harmonic confinement and a harmonic electron-electron coupling of strength $\Lambda$. A duality condition (1) has been found [3] for the Rényi entropy $S_q(\Lambda) = (1-q)^{-1} \ln \mathrm{Tr}[\rho_1(\Lambda)^q]$, $q \neq 1$, of the 1-particle reduced density operator (1-RDO) $\rho_1(\Lambda)$ of the ground state. Condition (1) means that for any given $\Lambda \in (-\tfrac{1}{2}, 0)$ there exists a unique coupling constant $\Lambda'(\Lambda) \in (0, \infty)$ (see Ref. [3]) such that $S_q(\Lambda) = S_q(\Lambda'(\Lambda))$. For the ground state of a generalized [24] Moshinsky atom of three electrons in one dimension a similar duality has been observed for the natural occupation numbers $\lambda_k^{(1)}$ [6,7],

$\lambda_k^{(1)}(\ell_-, \ell_+) = \lambda_k^{(1)}(\ell_-^*, \ell_+^*)$  (2)

with $(\ell_-^*/\ell_+^*)(\ell_-, \ell_+) = \ell_+/\ell_-$. Here $\ell_+$ and $\ell_-$ are two natural length scales related to the two coupling constants of the model. Numerical analysis [8] has provided strong evidence that this kind of duality is not restricted to the ground state of the Moshinsky atoms with two or three electrons, but holds for any number N of electrons, and for all of its eigenstates. In this communication we will answer three questions: 1. What is the origin of that duality? 2. Does it hold for any harmonic model and for any N-particle eigenstate? 3. Is the duality also given for the eigenvalues of the M-particle reduced density operator (M-RDO) with M > 1, and what is the role of the exchange symmetry? These questions are of fundamental relevance since reduced density operators (RDO) and their eigenvalues play an important role in atomic physics and quantum chemistry [9-11]. They have also attracted strong attention in quantum information theory, where the so-called quantum marginal problem is studied (see Refs. [12-18]).
It asks whether given density operators (marginals) for subsystems of a multipartite quantum system are compatible in the sense that they can arise from a common total state. One of the prime examples, originating from quantum chemistry, is the M-particle N-representability problem [9,10], the problem of describing the family of M-RDO which can arise from an antisymmetric N-particle state. Stimulated by the results of Borland and Dennis [19] and Ruskai [20], Klyachko [13,14] recently, in a ground-breaking work, solved the 1-particle N-representability problem. His solution has revealed so-called generalized Pauli constraints: restrictions on the eigenvalues of the 1-RDO of fermionic systems which significantly strengthen Pauli's famous exclusion principle. Accordingly, answers to the questions raised above will add a new facet to the properties of RDO and their eigenvalues, and therefore to this field of research.

II. HARMONIC MODEL

We consider a model of particles with mass m interacting harmonically. The corresponding hamiltonian is given by

$\hat H = \sum_{r,\alpha} \frac{\hat p_{r\alpha}^2}{2m} + \frac{1}{2} \sum_{r\alpha,\, s\beta} D_{r\alpha,s\beta}\, \hat x_{r\alpha} \hat x_{s\beta}$ .  (3)

For the case of identical particles the potential energy, the second term in Eq. (3), is invariant under particle permutations. This restricts $\{D_{r\alpha,s\beta}\}$ to two independent matrices, $D^{(1)}$ (coupling the coordinates of the same particle) and $D^{(2)}$ (coupling those of different particles), which together determine the potential energy. Special cases are the Moshinsky atom [5] and its generalization to an arbitrary number of fermions [21], which were already mentioned in the introduction. Note that hamiltonian (3) can also be used to describe a d-dimensional harmonic lattice with N′ sites, r = 1, ..., N′, and a corresponding dynamical matrix. For the following technical analysis, to avoid annoying issues related to the dimensionality of physical quantities, we set ℏ and m to 1 and treat every variable, e.g. positions $x_i$, momenta $p_i$ and couplings $D_{ij}$, as a dimensionless real number. This could also be achieved by introducing an arbitrary length scale L and dimensionless position operators $\hat x_i/L$, momentum operators $\hat p_i/(\hbar/L)$ and hamiltonian $\hat H/(\hbar\Omega)$ with $\Omega \equiv \hbar/(L^2 m)$. In position space $\hat H$ then takes the form

$H^{(x)} = -\frac{1}{2} \sum_{i=1}^{N} \frac{\partial^2}{\partial x_i^2} + \frac{1}{2} \sum_{i,j=1}^{N} D_{ij}\, x_i x_j$ ,  (4)

and the corresponding time-independent Schrödinger equation reads $H^{(x)}\Psi = E\,\Psi$ (5). To determine the eigenenergies E we just need to diagonalize the symmetric coupling matrix D,

$D = R^{t}\, d\, R$ ,  (6)

where $d \equiv \mathrm{diag}(d_1, \ldots, d_N)$ is a diagonal matrix and R an orthogonal matrix. The coordinate transformation from x to the pseudo-positions $y = R\,x$ is represented as a unitary operator $U(R)$ on $L^2(\mathbb{R}^N)$ (7). The transformed hamiltonian is

$H^{(y)} = \frac{1}{2} \sum_{\mu=1}^{N} \left( -\frac{\partial^2}{\partial y_\mu^2} + \frac{y_\mu^2}{\ell_\mu^2} \right)$ ,  (8)

where we introduced the dimensionless length scales $\ell_\mu = 1/\sqrt{d_\mu}$. The normalized eigensolutions of $H^{(y)}$ are products of 1-particle states,

$\Phi^{(\nu)}_\ell(y) = \prod_{\mu=1}^{N} \varphi^{\ell_\mu}_{\nu_\mu}(y_\mu)$ ,  (9)

with $\nu = (\nu_1, \ldots, \nu_N)$, $\nu_\mu \in \mathbb{N}_0$, $\ell \equiv (\ell_1, \ldots, \ell_N)$ and

$\varphi^{\ell}_{\nu}(z) = (\pi\ell)^{-1/4}\, (2^{\nu}\nu!)^{-1/2}\, H_{\nu}(z/\sqrt{\ell})\, e^{-z^2/(2\ell)}$ ,  (10)

where $H_\nu(z)$ are the Hermite polynomials. Then, the eigenstates of $H^{(x)}$ follow as $\Psi^{(\nu)}_\ell = U(R)^\dagger\, \Phi^{(\nu)}_\ell$ (11). For later purposes we have made explicit the dependence of $\Psi^{(\nu)}_\ell$ on $\ell \equiv (\ell_1, \ldots, \ell_N)$. Moreover, from Eq. (9) and the explicit form of the 1-particle states (10) we can infer a homogeneous structure,

$\Psi^{(\nu)}_{\alpha^2\ell} = V_N(\alpha)\, \Psi^{(\nu)}_\ell$ .  (12)

Here we have introduced the unitary rescaling operator $V_N(\alpha)$, which rescales lengths by a factor $\alpha$ and is local. Due to the harmonic character of the hamiltonian (4) it is worth studying the Schrödinger equation (5) also in Fourier space. For this we introduce the Fourier transformation

$(F_1\varphi)(p) = (2\pi)^{-1/2} \int_{\mathbb{R}} e^{-ipx}\, \varphi(x)\, dx$ ,  (13)

a linear bounded operator on the space of Schwartz functions $\varphi: \mathbb{R} \to \mathbb{C}$. For all $k \in \mathbb{N}$, $F_1$ gives rise to a Fourier operator $F_k$ acting on Schwartz functions $\Phi: \mathbb{R}^k \to \mathbb{C}$, which is extended uniquely to the space $L^2(\mathbb{R}^k)$ according to the Plancherel theorem [22].
Applying $F_N$ to Eq. (5) leads to the Schrödinger equation in momentum space,

$H^{(p)}\, \tilde\Psi = E\, \tilde\Psi$ ,  (14)

with $\tilde\Psi \equiv F_N \Psi$ the Fourier transform of $\Psi$. Due to the unitarity of $F_N$, $H^{(x)}$ and $H^{(p)}$ have the same spectrum. Below we will see that this dual description, using either the position or the momentum space, is the origin of the duality of the M-RDO and of its eigenvalues. The eigenvalue problem for $H^{(p)}$ can be solved similarly to that for $H^{(x)}$. With the pseudo-momenta $\pi = R\,p$ and Eqs. (6), (7), $H^{(p)}$ is transformed to $H^{(\pi)}$ (16), which involves the reciprocal (dimensionless) length scales

$\tilde\ell_\mu = 1/\ell_\mu = \sqrt{d_\mu}$ .  (17)

Notice that the hamiltonians (8) and (16) are identical up to a swapping of $\ell$ and $\tilde\ell$. Since $H^{(x)} = U(R)^\dagger H^{(y)} U(R)$ and $H^{(p)} = U(R)^\dagger H^{(\pi)} U(R)$, the same also holds for $H^{(x)}$ and $H^{(p)}$. Consequently, relating this to Eq. (14) with $\tilde\Psi \to \tilde\Psi^{(\nu)}_\ell$ finally yields the important relation

$F_N\, \Psi^{(\nu)}_\ell = \Psi^{(\nu)}_{\tilde\ell}$  (19)

(up to an irrelevant global phase). It states that applying the Fourier operator $F_N$ to an eigenstate of the hamiltonian (4) leads only to a rescaling of the length scales, i.e. a replacement of $\ell$ by $\tilde\ell$ according to Eq. (17).

III. REDUCED DENSITY OPERATORS

Instead of elaborating on eigenfunctions we focus on reduced density operators. To keep the notation simple we restrict ourselves for most of this section to N-particle systems with fermionic or bosonic exchange symmetry; the generalization to arbitrary/no symmetries will be discussed at the end of this section. First, for a state $\Psi \in L^2(\mathbb{R}^N)$ we define the corresponding density operator

$\rho \equiv |\Psi\rangle\langle\Psi|$ ,  (20)

where $\langle\cdot,\cdot\rangle_N$ is the inner product on $L^2(\mathbb{R}^N)$. By making use of the tensor product structure $L^2(\mathbb{R}^N) \cong L^2(\mathbb{R}^M) \otimes L^2(\mathbb{R}^{N-M})$, we define the M-particle reduced density operator $\rho^{(M)} \equiv \mathrm{Tr}_{N-M}[\rho]$, where we introduced the partial trace $\mathrm{Tr}_{N-M}[\cdot]$ (see e.g. [23]). Due to the exchange symmetry it does not matter which $N-M$ particles are traced out. In quantum chemistry [9] $\rho^{(M)}$ is typically represented w.r.t. a basis of $L^2(\mathbb{R}^M)$, which leads to an infinite-dimensional M-particle reduced density matrix. The 1-RDO, $\rho^{(1)}$, plays a particular role, since its eigenfunctions $\{\chi^{(1)}_k\}$ provide a 1-particle description of the N-particle state $\Psi^{(N)}$, with the corresponding eigenvalues $\{\lambda^{(1)}_k\}$, $k \in \mathbb{N}$, as occupation numbers. In the case of a 2-particle observable $\hat A^{(2)}$ for a system of N identical particles, the $\rho^{(2)}$ arising from $\Psi^{(N)}$ is sufficient in order to calculate the expectation value $\langle \hat A^{(2)} \rangle_{\Psi^{(N)}}$. One important concept based on this is the simplified calculation of fermionic ground state energies: for hamiltonians with 2-particle interactions, the ground state energy can be obtained by minimizing the energy expectation value just over 2-RDO $\rho^{(2)}$ which arise from N-fermion states $\Psi^{(N)}$. The underlying set of states $\rho^{(2)}$ is much smaller than that of N-fermion quantum states, but the problem is then to determine the set of possible $\rho^{(2)}$'s, which is known as the 2-particle N-representability problem [10]. This makes obvious why reduced density operators, particularly for M = 1 and M = 2, are intensively studied in atomic physics and quantum chemistry, as well as in quantum information theory, as explained in the last paragraph of Section I. Before applying the concept of RDO to the eigenstates of our harmonic model (3) we comment on the relation of Fourier operators and partial traces. Since $F_k \equiv F_1^{\otimes k}$ is a unitary operator and partial traces respect the local tensor product structure, it follows that

$\mathrm{Tr}_{N-M}[F_N\, \rho\, F_N^\dagger] = F_M\, \mathrm{Tr}_{N-M}[\rho]\, F_M^\dagger$ .  (25)

This means that if N-particle density operators $\rho$ and $\tilde\rho$ are conjugate by $F_N$, their M-RDO are conjugate by $F_M$. Note that from now on we skip the superscript $\nu$ to keep the notation simpler. Since the Fourier transformation $F_k$ is not only bounded but unitary, and in particular an isometry (Plancherel theorem [22]), we find for any $\Phi \in L^2(\mathbb{R}^N)$, by recalling Eq.
(20), that $\tilde\rho_\ell \equiv |\tilde\Psi_\ell\rangle\langle\tilde\Psi_\ell| = F_N\, \rho_\ell\, F_N^\dagger$. According to the comment in the first two lines below Eq. (25), together with Eq. (19), this implies for the M-RDO of any eigenstate of hamiltonian (4)

$\rho^{(M)}_{\tilde\ell} = F_M\, \rho^{(M)}_\ell\, F_M^\dagger$ .  (27)

This result for N-particle systems with fermionic or bosonic exchange symmetry can be generalized to arbitrary N-particle states; the only subtle difference is that without exchange symmetry the M-RDO depends on which subset of particles is traced out. Equation (27) is the duality relation for the M-RDO corresponding to a general N-particle state of a system described by hamiltonian (4). We close this section by deriving a homogeneous structure for the M-RDO following from the homogeneity relation (12) for the N-particle eigenstates $\Psi^{(\nu)}_\ell$. This yields the homogeneous structure

$\rho^{(M)}_{\alpha^2\ell} = V_M(\alpha)\, \rho^{(M)}_\ell\, V_M(\alpha)^\dagger$ .  (29)

By using the local structure $V_N(\alpha) = V_1(\alpha)^{\otimes N}$, Eq. (21), the unitarity of $V_1(\alpha)$ and the properties of the partial trace, it is an elementary exercise to show that the M-RDO inherit the same homogeneous structure (29), where $V_M(\alpha) = V_1(\alpha)^{\otimes M}$ and $M = |m|$, with m the subset of retained particles.

IV. DUALITY FOR EIGENVALUES AND INTERACTION MATRICES

In this section we work out the consequences of the duality condition (27). Since conjugation by the unitary operator $F_M$ leaves the spectrum unchanged, (27) implies for the eigenvalues

$\lambda^{(M)}_k(\ell) = \lambda^{(M)}_k(\tilde\ell)$ ,  (31)

where $\tilde\ell \equiv \tilde\ell(\ell)$ is given by Eq. (17), and we recall that we have skipped the superscript $\nu$ in (31), which labels the underlying N-particle energy eigenstate. Identity (31) is our main result, and it is the generalization of the duality condition (2) found in Ref. [6]. It does not depend on the use of dimensionless quantities and is valid for all M. Since the d-dimensional harmonic model can also be represented by hamiltonian (3), the duality relation also holds for arbitrary dimensions.

V. DISCUSSION AND CONCLUSIONS

We have proven analytically that the recently discovered duality of the Rényi entropy (Ref. [3]) and of the natural occupation numbers (Ref. [6]) of the ground state of Moshinsky-type atoms with two and three electrons, respectively, is generic for arbitrary N-particle systems with harmonic interactions, in any dimension. It originates from the duality of wave mechanics in position and momentum space in combination with the harmonic interactions. The dualities (27) and (31) are valid for all RDO of any arbitrary pure N-particle eigenstate. A possible fermionic or bosonic exchange symmetry for the case of identical particles is irrelevant. Moreover, duality also holds for arbitrary N-particle pure states of system (3), i.e. states of the form $\Psi_\ell = \sum_\nu c_\nu \Psi^{(\nu)}_\ell$, and even generalizes to mixed states. As a consequence, for spinful particles described by hamiltonian (3), the duality (31) holds for the eigenvalues of any orbital RDO; moreover, it also holds for any full (i.e. spin-including) RDO. What is the physical implication of that duality? The kind of duality found here is quite analogous to the Kramers-Wannier duality, because the duality relation (27) relates harmonic hamiltonians of the form $\alpha \hat T + \beta\, \hat x^{t} D\, \hat x$, with $\hat T$ the kinetic energy operator and $\alpha, \beta \in \mathbb{R}^+$, to dual ones with the inverse length scales. As an example, consider identical particles in one dimension. As explained in Section II, there are only two coupling parameters, $D^{(1)}$ and $D^{(2)}$. As in Refs. [6,7], the duality found in the present contribution connects strongly localized states, with $\ell_\mu$ "small", with weakly localized states of the dual model for which $\ell^*_\mu$ is "large", or vice versa.
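The eigenvalue duality (31) is easy to probe numerically. Below is a minimal sketch (not from the paper) for N = 2 particles in one dimension: the ground state is built directly in the normal-mode coordinates, $\Psi \propto \exp(-y_+^2/2\ell_+ - y_-^2/2\ell_-)$ with $y_\pm = (x_1 \pm x_2)/\sqrt{2}$, the 1-RDO kernel is discretized on a grid, and its leading eigenvalues are compared for $(\ell_+, \ell_-)$ and the dual $(1/\ell_+, 1/\ell_-)$. The grid size and the length-scale values are illustrative choices.

```python
import numpy as np

def occupation_numbers(l_plus, l_minus, n=400, box=12.0):
    """Leading eigenvalues of the 1-RDO of the two-particle ground state
    with normal-mode length scales (l_plus, l_minus)."""
    x = np.linspace(-box, box, n)
    dx = x[1] - x[0]
    x1, x2 = np.meshgrid(x, x, indexing="ij")
    yp = (x1 + x2) / np.sqrt(2.0)
    ym = (x1 - x2) / np.sqrt(2.0)
    psi = np.exp(-yp**2 / (2.0 * l_plus) - ym**2 / (2.0 * l_minus))
    psi /= np.sqrt((psi**2).sum() * dx * dx)      # normalize <psi|psi> = 1
    # rho1(x, x') = integral dx2 psi(x, x2) psi(x', x2); discretized kernel:
    rho1 = psi @ psi.T * dx
    return np.linalg.eigvalsh(rho1)[::-1] * dx    # operator eigenvalues, descending

lp, lm = 0.5, 3.0
lam = occupation_numbers(lp, lm)
lam_dual = occupation_numbers(1.0 / lp, 1.0 / lm)   # reciprocal length scales
print(np.round(lam[:6], 6))
print(np.round(lam_dual[:6], 6))                    # same spectrum, as in Eq. (31)
```

The coincidence of the two printed spectra is exactly Eq. (31) with M = 1: inverting every length scale leaves the natural occupation numbers unchanged.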
2014-08-13T20:07:05.000Z
2014-08-13T00:00:00.000
{ "year": 2014, "sha1": "b0cbf8194719c4554a4008654033540283cf034d", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1408.3128", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "b0cbf8194719c4554a4008654033540283cf034d", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Mathematics", "Physics" ] }
219407204
pes2o/s2orc
v3-fos-license
An ultrasonic metallic Fabry–Pérot metamaterial for use in water Fabry-Pérot ultrasonic metamaterials have been additively manufactured using laser powder bed fusion to contain subwavelength holes with a high aspect ratio of width to depth. Such metamaterials require the acoustic impedance mismatch between the structure and the immersion medium to be large. It is shown for the first time that metallic structures fulfil this criterion for applications in water over the 200–800 kHz frequency range. It is also demonstrated that laser powder bed fusion is a flexible fabrication method for the creation of structures with different thicknesses, hole geometries and tapered openings, allowing the acoustic properties to be modified. It was confirmed via both finite element simulation and practical measurements that these structures supported the Fabry-Pérot resonances, needed for metamaterial operation, at ultrasonic frequencies in water. It was also demonstrated that the additively manufactured structures detected the presence of a sub-wavelength slit aperture in water. Introduction Acoustic metamaterials are those whose properties arise from their inner structure and which are not ordinarily observed. For example, extraordinary effects can occur when acoustic signals are transmitted through multiple holes in a solid plate. This can occur when the diameter of the holes is less than the acoustic wavelength and when the hole exhibits an acoustic resonance [1]. Unlike the optical equivalent, there is no lower cut-off frequency for sound waves, and information can be transmitted through the sub-wavelength holes from one side of a perforated plate to the other [2]. Provided the hole is deep enough, propagation of acoustic waves within subwavelength-sized apertures can lead to the formation of Fabry-Pérot (FP) transmission resonances within each hole. FP resonance is produced by the constructive and destructive interference of waves confined in the holes, which can produce multiple transmission peaks [3]. If an array of such holes is placed side by side, an acoustic metamaterial is created with novel properties such as subwavelength acoustic imaging [3]. This is made possible by coupling the evanescent waves which exist within the fluid close to the sample and the FP resonance within each hole. Zhu et al. [4] have demonstrated that, at audible acoustic frequencies in air, features can be funnelled through a set of subwavelength apertures to produce acoustic images. This happens at frequencies at which the FP resonances within each hole occur, provided the holes are sufficiently close together across the metamaterial [5]. Such structures are of interest for ultrasonic imaging at subwavelength resolutions, holding promise for improved medical imaging and non-destructive evaluation [4,6,7]. However, this requires operation at higher frequencies and in water. To date, most research in this area has been based either on numerical simulations of these structures [6,8] or on experimental measurements in air on samples fabricated using conventional machining. Examples include assembling tubes in an array and bonding them with glue for audible-range applications in air [4], or drilling holes into a metallic plate [3,8,9]. The expected FP resonances were observed [8,10]. A number of publications have studied the effects of the dimension, shape, symmetry and filling fraction of the holes on the acoustic response of such perforated plate structures [2,11-13]. Laureti et al.
[9] suggested that by decreasing the hole dimension, the position of the FP peaks shifts to higher frequencies and the peak amplitude increases. The effect of the hole filling fraction (the fraction of the material containing holes compared to solid substrate) was also studied, and it was confirmed that increasing the filling fraction shifts the resonance peak to higher frequencies and narrows the transmission peaks [14]. Therefore, the resolution of the resultant metamaterial is improved by decreasing the hole diameter and increasing the filling fraction [15]. The signal transmitted through a periodically perforated structure is affected not only by FP resonances within each hole, but also by any acoustic signal that travels either along the surfaces of the solid material or within it. The interaction between these phenomena produces a complex transmission curve with maxima and minima at different frequencies [14]. A schematic diagram of a typical FP acoustic metamaterial is shown in Fig. 1(a). It is important that both the hole width (w) and the distance between hole centres are smaller than the acoustic wavelength (λ) for metamaterial behaviour to be observed [8], as this is needed for evanescent wave coupling between the holes. When the acoustic frequencies are low, conventional machining is sufficient for sample preparation (for example, at 10 kHz the wavelength λ_air in air is ∼33 mm for an acoustic velocity v_air = 330 m/s, so that fabrication of deep holes with w << λ_air can easily be achieved). At higher frequencies in air, Additive Manufacturing (AM) techniques such as microstereolithography can be used [9,16]. If ultrasonic signals in water are to be studied, as is the case here, then two things need to be considered: the features of the metamaterial will be smaller, and the material has to be chosen so that complications due to coupling of acoustic energy into the substrate are avoided. Consider first the required features for an ultrasonic frequency of 300 kHz in water, where v_water = 1480 m/s, and hence λ_water = 4.93 mm. It is known that the lateral size (w) of each hole needs to be < λ/5 for metamaterial behaviour [14], i.e. < 1 mm. In addition, FP transmission peaks will occur at resonances defined by the plate thickness (and hence channel length) d. Note that w is generally much smaller than d in acoustic metamaterials, and the FP structure supports multiple resonances with frequencies between f1 and f_max [9], as follows: the fundamental (lowest frequency) FP resonance occurs at a frequency of f1 = v/(2d), i.e. λ = 2d (1), while the highest permitted resonance frequency f_max occurs when the wavelength equals the smallest hole width w, i.e. f_max = v/w, also known as the ballistic regime (2). Note that there is some evidence that using higher-order resonances at a fixed frequency gives better metamaterial performance in terms of evanescent wave coupling [4]. To calculate the FP resonance frequency, the effective length of the hole should be used, and this requires an end correction for resonances within a pipe with two open ends [17]. The end correction for a pipe (or channel) with perfectly reflecting walls, Eq. (3) [18], gives the distance Δl that should be added to the length of the pipe, which then reduces the value of f_k. Note that the amount of correction decreases at higher frequencies (where λ becomes smaller for a fixed w) [19].
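As a concrete sketch of this resonance counting, the harmonic frequencies of a water-filled channel can be enumerated up to the ballistic limit, per Eqs. (1) and (2). Since the exact end-correction formula, Eq. (3), did not survive extraction here, the per-end correction Δl ≈ 0.3w used below is an assumed placeholder of plausible magnitude, not the paper's expression.

```python
V_WATER = 1480.0  # speed of sound in water, m/s

def fp_resonances(d, w, delta_l=None):
    """FP resonance frequencies (Hz) of a channel of depth d and width w,
    up to the ballistic limit f_max = v/w (Eq. (2)).  delta_l is the
    per-end correction; 0.3*w is an ASSUMED stand-in for Eq. (3)."""
    if delta_l is None:
        delta_l = 0.3 * w
    d_eff = d + 2.0 * delta_l            # channel open at both ends
    f1 = V_WATER / (2.0 * d_eff)         # fundamental, Eq. (1)
    f_max = V_WATER / w                  # ballistic regime, Eq. (2)
    k = 1
    while k * f1 <= f_max:
        yield k * f1
        k += 1

# d = 5 mm, w = 0.8 mm: fundamental near 135 kHz with this assumed correction
print([round(f / 1e3, 1) for f in list(fp_resonances(5e-3, 0.8e-3))[:6]])
```

The end correction lowers every f_k slightly relative to the uncorrected v/(2d), exactly as the text describes.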
The choice of solid plate material within which the holes are to be fabricated is determined in part by the specific acoustic impedance of the material (Z), which is the product of the density and acoustic velocity of a given material. As an example, the typical value for Nylon is Z_Ny = 1.32 MPa·s/m, whereas that for air is Z_air = 420 Pa·s/m. This difference is required to ensure that the acoustic signal stays within the air channels and does not couple into the solid and cause it to vibrate. In air, the large acoustic impedance mismatch makes this very easy to achieve. However, in the present case we wish to investigate FP operation in water, for subsequent use in biomedical imaging. The value for water (Z_water = 1.5 MPa·s/m) is much closer to that of many solid materials, such as the polymers which could be used to form the metamaterial structures, so it is important to fabricate the structures from a material with a relatively high acoustic velocity and density. As an illustration, consider aluminium (Z_Al = 17.06 MPa·s/m) and Nylon (Z_Ny = 1.32 MPa·s/m); the value of Z_Ny is much closer to that of water. As a result, the acoustic transmission coefficient (T) from water into the solid, calculated from Z_water and Z_solid (the acoustic impedance of the solid in question), is 0.08 for aluminium and 0.53 for Nylon. Thus, acoustic energy will be much more easily transferred into a typical polymer than into a metal. Fabrication of acoustic metamaterials is one of the major challenges limiting their widespread application [20]. Here, AM is proposed as a key enabling technology, allowing complex geometries to be realised at different length scales, which is not obtainable using conventional manufacturing methods [21,22]. Note that some contributions have been made regarding the use of AM in the fabrication of different types of acoustic metamaterials using polymer structures (phononic crystals [23], broadband absorbers [24] and FP structures [11,15,16,20]). However, as stated above, the acoustic impedance of polymers is not sufficiently high for operation in water to confine the acoustic signal within the subwavelength holes [16]. A recent paper [25] demonstrated that a perforated steel plate, manufactured using conventional machining, could operate as an effective metamaterial for acoustic cloaking over the 7−12 kHz range, thus showing that metallic structures are a good choice for use in water. In our case, with target frequencies of 200−800 kHz, and with the need to change hole dimensions and shapes, the fabrication method needed to change, as sub-millimetre accuracy would be required. Powder Bed Fusion (PBF) techniques such as direct Selective Laser Melting (SLM) are capable of fabricating reproducible metal structures with this accuracy. Processing of aluminium, titanium and nickel-based alloys by SLM is well established, and this could benefit the widespread employment of acoustic metamaterials for enhanced imaging. In this study, SLM is used for the first time to fabricate perforated metallic plates containing arrays of through-thickness sub-mm-width holes, for use at ultrasonic frequencies in water. The acoustic response of these structures was tested in through-transmission mode and conclusions are drawn on their performance by comparing experimental measurements with finite element simulations. SLM also allows for changes in hole geometry, such as flaring the entrance and exit, thus changing the bandwidth and frequency of operation.
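Returning to the impedance argument above: for orientation, the textbook normal-incidence intensity transmission coefficient, T = 4Z₁Z₂/(Z₁ + Z₂)², makes the qualitative point numerically. Note that the paper's own (unrecovered) formula and its quoted values of 0.08 and 0.53 evidently follow a different convention, so the numbers below are illustrative only.

```python
def transmission(z1, z2):
    """Normal-incidence intensity transmission between two media of
    specific acoustic impedance z1 and z2 (standard textbook form;
    not necessarily the expression used in the paper)."""
    return 4.0 * z1 * z2 / (z1 + z2) ** 2

Z_WATER, Z_AL, Z_NY = 1.50e6, 17.06e6, 1.32e6      # Pa.s/m
print(f"water -> aluminium: T = {transmission(Z_WATER, Z_AL):.2f}")  # ~0.30
print(f"water -> Nylon:     T = {transmission(Z_WATER, Z_NY):.2f}")  # ~1.00
```

Whichever convention is used, the ordering is the same: a metal reflects most of the energy back into the water-filled channel, while a polymer is nearly transparent to it.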
Sub-wavelength imaging is presented, illustrating that SLM is an ideal fabrication route for acoustic metamaterials of this type operating at ultrasonic frequencies. Sample design and fabrication Initial samples of A20X aluminium were prepared using a Renishaw AM250 SLM machine. The main motivation for using the AM250 initially was its larger build platform, which accommodated multiple samples in one build. A range of samples with hole dimensions from 0.4 mm to 1.5 mm was fabricated. Samples with a hole width (w) of 0.8 mm were chosen because this was the smallest hole size that could be fabricated reproducibly with minimum blockage and distortion. It was important to make the holes as small as possible, as the metamaterial function degrades as w increases; hence we considered the chosen hole size the best compromise between performance and ease of fabrication. Each sample contained a 24 × 24 grid of square holes, each having a width (w) of 0.8 mm. Each hole was separated from its nearest neighbour by 0.4 mm (i.e. a lattice constant a = 1.2 mm), across overall sample dimensions of 30 mm square. Various sample thicknesses (d) were fabricated in the 5−9 mm range, chosen so that the FP resonant frequencies fell in a convenient range for ultrasonic measurements in a water tank at frequencies of 200−800 kHz (where λ_water ranges from 7.4 mm down to 1.85 mm). The SLM fabrication process started by adding the support structure to the designed CAD model (Fig. 1(a)). Note that although third-party software could be used to generate the support structure, the supports had to be aligned manually to avoid blocking the holes. Materialise Magics 2.1™ was used to slice the file and prepare the build file. A Renishaw AM250 SLM printer with a laser power of 200 W and a laser spot size of 150 μm was used to fabricate the samples. In order to produce uniform vertical sub-millimetre channels, and to avoid having support structures within the holes, supports were used to create a 3 mm gap between the sample and the build platform, with the structures supported horizontally. This both anchored the structure and helped with heat dissipation, preventing thermal warping and simplifying sample removal. It was also critical to avoid having support structure directly underneath the vertical channels, which might have produced blocked channels and would therefore have introduced unwanted post-fabrication processing steps. After the fabrication process was complete, the structures were cut from the build platform and the remaining support structures were ground away to leave a flat face. Fig. 2 is a photograph of a typical 30 mm square aluminium sample fabricated using SLM, with an expanded view showing details of the holes. Various features can be seen. First, it is evident that material adhered to the sidewalls of the holes, giving them an irregular appearance. Although these features are difficult to quantify exactly, an estimate was made using a Bruker Contour GT optical profilometer. The cross-sectional area of the holes, measured over a number of holes, was found to be in the range 0.46 ± 0.1 mm² (noting that a w = 0.8 mm hole would have a cross-sectional area of 0.64 mm²); the holes were thus smaller than expected, and the likely effect of this will be discussed below. Note that the shortest wavelength λ_water of ultrasound in water encountered in this work (at the maximum frequency of 800 kHz) was ∼2 mm.
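The nominal grid geometry just described can be checked against the metamaterial design rules quoted in the introduction (w < λ/5 and sub-wavelength hole spacing); a small sketch, using the nominal w = 0.8 mm and a = 1.2 mm:

```python
V_WATER = 1480.0  # m/s

def design_check(w, a, f):
    """Check the quoted design rules for hole width w, lattice constant a,
    and operating frequency f."""
    lam = V_WATER / f
    return {
        "lambda_mm": round(lam * 1e3, 2),
        "w < lambda/5": w < lam / 5.0,
        "a < lambda": a < lam,
        "filling_fraction": round(w**2 / a**2, 3),   # open area per unit cell
    }

print(design_check(w=0.8e-3, a=1.2e-3, f=300e3))
# {'lambda_mm': 4.93, 'w < lambda/5': True, 'a < lambda': True,
#  'filling_fraction': 0.444}
```

At 300 kHz both sub-wavelength conditions hold comfortably, and roughly 44% of each unit cell is open channel.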
Given this comparison between feature size and wavelength, it was decided that the features encountered within the SLM-fabricated structures were acceptable, noting that most experimental measurements were in fact taken in the 300−500 kHz frequency range, where λ_water ranges from ∼3−5 mm. Although work on Kelvin-cell acoustic metamaterials has accurately produced strut diameters down to 0.4 mm using SLM [26], here we are dealing with holes, which are more difficult. It was also noted by Strano et al. [27] that surface roughness, caused by the staircase effect, balling and particles bound at the edges (as observed in this study), depends on the powder size distribution; with a vertical orientation, the bound particles are the primary source of roughness. In our case, the size distribution of the aluminium powder particles was 20 to 60 μm, leading to the observed roughness within the holes. A second set of samples was fabricated using a Realizer SLM 50 machine with a laser spot size of 30 μm and a laser beam power of 120 W. The smaller laser spot size, compared to that of the Renishaw AM250 above, translated to a smaller melt pool, which potentially increased the build resolution. This was needed to demonstrate the capability of AM in changing the hole profile, with flared entrance and exit points. Such modifications can improve the performance of FP metamaterial structures and are a topic of interest in this field [15]. The titanium samples, fabricated using an average particle size of 36 μm for the build, contained a set of single holes, separated by large hole-to-hole distances (a) so that isolated FP resonances could be studied without significant acoustic interaction between them. A photograph of the top surface of a sample containing four such isolated holes is shown in Fig. 3, with flare radii as indicated in Table 1. The roughness of each flare profile was measured to vary in R_a value from 60 to 160 μm, depending on position across the flare, as also indicated in Table 1. It can be seen that the SLM process was able to create these structures well, in this case in a 5 mm thick sample. Note that the roughness of the top flat surface was measured to be R_a = 10 μm. In relation to the surface roughness of the flare, Strano et al. [27] also demonstrated the importance of surface angle on the produced roughness, and this may explain the differences between the holes. Ultrasonic testing The samples were tested at ultrasonic frequencies in a custom-built testing tank filled with water at room temperature (20 °C ± 2 °C). The ultrasonic source was usually a 19 mm diameter Panametrics U8517150 ultrasonic immersion transducer with a centre frequency of 500 kHz, although other sources such as piezocomposite devices were used if, for example, a lower frequency of 300 kHz was required. The samples were tested in through-transmission, with the source transducer typically placed within a cylindrical waveguide (see Fig. 4) to help ensure that the incident wave was as close to a plane wave as possible, i.e. had a flat wavefront. A Panametrics 5072PR ultrasonic pulser-receiver was used to excite the transmitter transducer with a high-voltage step waveform. A sample holder was designed and fabricated so that the metamaterial sample could be positioned reproducibly, via a slot in the holder, in a plane perpendicular to the waveguide axis. The receiver was a Precision Acoustics™ 0.5 mm diameter needle hydrophone, positioned at a known distance from the metamaterial surface.
The waveguide/sample holder and the ultrasonic receiver could both be positioned independently within the water tank using a dual 3-axis scanning system under PC control. The through-transmitted signals were acquired using a digital oscilloscope (Tektronix DPO7104) and transferred to a PC for post-processing. In practice, the waveguide/sample holder assembly was kept in a fixed location and the hydrophone moved to a specific location (such as over a hole). The hydrophone could also be scanned horizontally so as to sample the ultrasonic beam characteristics both with and without the metamaterial present in the holder, thus measuring its effect. The transmission characteristics of a given sample were measured in a two-step process. First, a reference waveform was recorded at a specific location without the sample present in the sample holder between the source and receiver. The sample was then placed in the sample holder between the transducers and the signal transmitted through the structure was measured. FFTs were used to obtain the frequency spectra of these two measurements, so that dividing the two spectra allowed the effect of the metamaterial sample to be measured in the frequency domain. Finite element (FE) simulations Acoustic transmission through sub-wavelength-sized square holes in SLM-fabricated samples was modelled using COMSOL Multiphysics®. An 80 μm tetrahedral mesh was used so as to have at least 10 elements across the channel width w. This was needed to confirm that the designs would produce FP resonant peaks, as required of a metamaterial, and that a metal substrate was necessary (rather than a more easily printed polymer substrate). A plane ultrasonic wave of pressure amplitude 1 Pa travelling perpendicular to the sample surface was used to study the acoustic transmission of the solid sample containing holes, and a transmission coefficient was extracted for each frequency. The first task was to study the effect of acoustic impedance mismatch on the production of the FP peaks for a single 0.8 mm square hole. As stated earlier, the greater the difference in acoustic impedance Z between water and the solid substrate, the smaller the transmission of energy into the solid; such energy transfer is unwanted, hence the use of a metallic substrate. Fig. 4. Ultrasonic testing apparatus, where the source transducer was placed within a cylindrical waveguide. The metamaterial sample was held in place using the sample holder shown. The receiver was a 0.5 mm diameter hydrophone, used to measure the response at the exit of a single hole. The first simulation was thus performed assuming a "hard solid", in which no acoustic energy can flow into the solid from the water-filled hole. The result is shown in Fig. 5 for a sample thickness (d) of 5 mm. Here, the vertical dotted lines represent the positions of the FP resonance peaks expected from Eqs. (1) and (2), modified by the end correction of Eq. (3), which is a function of both the sample thickness (d) and the hole width (w). The simulation predicts the FP peaks to be a good match to the analytical expressions shown by the vertical broken lines in this figure. Simulations were also performed for aluminium and Nylon substrates, and these are plotted together with the hard-solid case (where Z is assumed to be infinitely large) in Fig. 6. As noted above, Nylon is much more closely matched acoustically to water than aluminium is, so that the acoustic transmission coefficient (T) from water into the solid is much lower for aluminium.
Thus, acoustic energy will be much more easily transferred into a typical polymer than into a metal. Fig. 6 shows that both aluminium and the hard solid produced clear, regularly spaced FP peaks. Comparing the two plots, the peaks for the aluminium substrate were less sharply resonant than those of the hard solid: even though T = 0.08 for aluminium, this still allows some energy to leak into the metallic substrate, causing the resonances to broaden and decrease slightly in amplitude. In comparison, the "hard solid" with a perfectly reflecting boundary would have T = 0, and all energy would be confined within the water column. The FP resonant peaks for aluminium were also shifted to slightly lower values than in the hard-solid case. In contrast, the expected FP peaks are completely absent in the Nylon structure, being replaced by an irregular distribution of resonant peaks. This is thought to be due to a large amount of acoustic cross-coupling into the solid, so that the solid substrate and the water-filled columns interact in a complicated fashion. It is known that variations in hole size and shape can affect the operation of FP-type acoustic metamaterials [13]. It has been shown above that build geometries can deviate from the CAD design, and hence it was important to study the likely effect of this. Careful study of the fabricated samples made clear that the holes were slightly smaller than the designed size and that they had rounded corners. Accordingly, simulations were performed to study the likely effects of (1) decreasing the symmetry of the holes, (2) reducing the dimensions of the holes to represent the experimental samples, and (3) the difference between holes that were perfectly square and those with rounded corners. This used PZFlex 2018™ (OnScale Inc., USA) FE software, after importing the designed CAD model from Solidworks™; this software was found to be more suitable for these more complex geometries. Again, a plane ultrasonic wave of pressure amplitude 1 Pa travelling perpendicular to the sample surface was used. PZFlex was implemented using a simple cubic element mesh with a mesh size of 37 μm along all three dimensions, which is much smaller than the smallest wavelength of interest (50 elements per wavelength at the highest frequency of interest, 800 kHz). The first additional simulation looked at rounding the corners of the square hole. Fig. 7(a) shows the transmission coefficient of a w = 0.8 mm square hole in a 5 mm thick aluminium sample compared to a similar hole with rounded corners; a rounding radius of 0.1 mm was chosen for this simulation. The transmission coefficient plots are almost identical, indicating that rounded corners of this extent do not affect the position or amplitude of the FP peaks. As noted in Fig. 2, the dimensions of the holes fabricated by SLM varied across the sample, so it was also of interest to examine the effect of hole size (w). The simulated transmission characteristics of rectangular holes with a range of dimensions are compared in Fig. 7(b). Reducing the hole width w from 0.8 mm to 0.7 mm and 0.6 mm shifts the resonance peak from 320 kHz to 324 kHz and 330 kHz respectively, in good agreement with findings in the literature [10]. From the simulations it is clear that for rectangular holes the position of the FP resonance peak depends on the dimension of the smaller side of the hole. The end correction, which depends on the hole width w (see Eq.
(3)), could be one of the main contributing factors to these small shifts. Another area of interest is changing the shape of the holes. It is known that gradually changing the width of a hole, with flared entrances and exits, can help the acoustic impedance within the hole to match that of the surrounding medium (noting that the restriction on flow within the hole increases its acoustic impedance Z, which is in fact the reason why FP resonances occur). Doing this is likely to decrease the Q of the resonance, and hence to widen the bandwidth of operation, but also to increase transmission amplitudes because of the greater surface area of the opening. This is of interest for ultrasonic imaging applications, as it allows signal-to-noise enhancement techniques (such as pulse compression) to be used, and it also potentially improves image resolution. SLM is an ideal technology for the fabrication of such structures, giving flexibility over changes in hole dimensions throughout the build. For this reason, simulations were performed to study this effect. The results shown in Fig. 8 are for four cases with fixed hole dimension and thickness (w = 0.8 mm and d = 5 mm) but with flares of 0, 0.4 mm, 0.8 mm and 1.6 mm radius. Titanium, with a specific acoustic impedance of Z_Ti = 27.33 MPa·s/m, was assigned to the simulated structures. These predictions indicate that the effect of increasing the flare radius is to increase the amplitude of the FP resonance and to shift its centre frequency to a higher value. The increase in FP resonance amplitude means that more energy is being coupled into the hole by a flared aperture. The shift in the centre frequency of the FP peak is due to the change in the effective length of the hole over which the FP resonance is established. Results of measurements in water The 0.5 mm diameter needle hydrophone was used to observe the ultrasonic signal at one of the holes in an aluminium metamaterial sample with d = 8.5 mm and w = 0.8 mm. This was to confirm that FP resonances were present. An ultrasonic waveform was recorded with the hydrophone tip positioned at the centre of one of the holes, at a distance of 1 mm from the sample surface. A typical time trace from the 8.5 mm thick sample is shown in Fig. 9(a). The corresponding frequency spectrum of this signal (corrected for the baseline spectrum established with the hydrophone in the water) is shown in Fig. 9(b). Based on Eq. (1), and considering the effect of the end correction, the fundamental FP resonance peak is expected to occur at a frequency (f1) of 82.9 kHz, which is outside the −6 dB bandwidth of the transmitter. However, the 4th, 5th, 6th and 7th harmonics, at frequencies f4−f7, can be seen in Fig. 9(b). Although theoretically the transmission coefficient of such structures at frequencies close to their FP resonance frequencies is expected to approach unity, experimentally the signal amplitude decreases for the higher harmonics (at frequencies f4, f5, etc.), as has been observed in other studies [9]. Table 1 compares the measured FP resonant frequencies to those calculated; the good agreement indicates that the FP resonances in the metamaterial structure are present at the expected ultrasonic frequencies. Note that additional maxima and minima are present in the spectrum. These could be due to complications arising from interaction between adjacent holes, as the hydrophone has a relatively wide angle of sensitivity at these frequencies and could detect emission from more than one hole, complicating the received signal.
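The baseline correction used for Fig. 9(b), like the two-step procedure described in the testing section, amounts to a frequency-domain division of two recorded waveforms. A minimal sketch follows; the waveform arrays and the sampling rate are placeholders standing in for the oscilloscope data, not values from the paper.

```python
import numpy as np

def transmission_spectrum(reference, sample, fs):
    """Frequency-domain transmission: FFT of the through-sample waveform
    divided by FFT of the no-sample reference (both from the hydrophone)."""
    ref_spec = np.fft.rfft(reference)
    sam_spec = np.fft.rfft(sample)
    freqs = np.fft.rfftfreq(len(reference), d=1.0 / fs)
    eps = 1e-12 * np.max(np.abs(ref_spec))      # guard against division by ~0
    return freqs, np.abs(sam_spec) / (np.abs(ref_spec) + eps)

# usage with toy placeholder waveforms (real data would come from the DPO7104):
fs = 50e6                                       # assumed sampling rate, Hz
t = np.arange(2048) / fs
reference = np.sin(2 * np.pi * 500e3 * t) * np.exp(-((t - 5e-6) / 5e-6) ** 2)
sample = 0.3 * np.roll(reference, 40)           # attenuated, delayed copy
f, T = transmission_spectrum(reference, sample, fs)
```

Dividing the spectra removes the transducer and propagation response common to both measurements, leaving only the effect of the sample.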
Note that the higher harmonics are not as strong in the experimental measurements as in the FE predictions in Fig. 5. This is thought to be due to scattering effects within the hole. Higher harmonics, at higher frequencies, have shorter wavelengths in water; these are quoted for each of the harmonics f4−f7 in Table 1 (assuming a velocity in water of 1480 m s⁻¹). Hence, ultrasonic propagation within each hole is more likely to be disrupted by the surface roughness on the inner wall at higher resonant frequencies, due to scattering effects; energy will be lost, reducing the effectiveness of establishing an FP resonance. Thus, it is important to choose the correct operating frequency for FP effects to occur, and the upper limit will be dictated by the build resolution within the holes. Several extra peaks are observable in Fig. 9(b) which are not primary FP resonances. The main reason is that the output is the result of interaction between the array of holes in the metamaterial. The holes couple together via evanescent waves to produce metamaterial effects, but this only occurs over distances < λ_water; however, cross-coupling between the channels via conventional ultrasonic propagation along the surface of the sample is possible, and this could produce extra peaks in the normalised transmission coefficient [2]. It is interesting to note that there is very good agreement between the theoretical and experimental values for the resonant frequencies in Table 1, showing that the simple formula of Eq. (1) is a good predictor of the resonant frequency in samples containing either multiple or single holes (Table 2). Because of the difficulty of eliminating interactions between closely spaced holes in the metamaterial sample, further studies were performed with samples containing only a single hole. This allowed both single square holes and the effect of flaring of the aperture to be measured experimentally and compared to the FE simulations shown earlier in Fig. 8. Samples were available with various radii of curvature of the flare (photographs of the SLM-fabricated samples were shown earlier in Fig. 3). Experimental results are presented in Fig. 10 for direct comparison to the FE simulations; simulation and experiment show the same trends as a function of flare radius for holes 1−3. This indicates that the effect of increasing the flare radius is to increase the amplitude of the FP resonance (due to greater energy transfer into and out of the hole) and to shift the centre frequency to a higher value (a flared aperture also effectively shortens the length of the hole over which the FP resonance is established). Note, however, that experimentally the main resonant frequency changed less markedly with increasing flare radius than in the simulations; this could be due to the stepped features already noted in the fabricated samples. The main difference between the simulation and experimental results is for hole 4, where the expected strong resonance was not established experimentally. This indicates that FP resonances could be difficult to establish in practice at large radii of curvature: here the impedance mismatch between the water in the hole and the free water outside (which allows FP resonances to be established) is lower, making any small disturbances (such as the exact geometry and surface roughness) in a practical build more prominent.
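Stepping back to the multi-hole sample of Fig. 9, the harmonic bookkeeping behind Table 1 can be reproduced from the quoted fundamental alone. In the short sketch below, the effective channel depth is inferred from the stated f1 = 82.9 kHz for the d = 8.5 mm sample, so the end correction is implied by the paper's own numbers rather than assumed.

```python
V_WATER = 1480.0  # m/s

f1 = 82.9e3                                  # quoted fundamental, Hz
d_eff = V_WATER / (2.0 * f1)                 # implied effective depth
print(f"d_eff = {d_eff * 1e3:.2f} mm")       # ~8.93 mm vs nominal d = 8.5 mm

for k in range(4, 8):                        # harmonics seen in Fig. 9(b)
    fk = k * f1
    lam = V_WATER / fk
    print(f"f{k} = {fk / 1e3:.1f} kHz, lambda = {lam * 1e3:.2f} mm")
```

This gives f4 ≈ 332 kHz through f7 ≈ 580 kHz, with wavelengths of ∼4.5 mm down to ∼2.5 mm; the implied ∼0.4 mm of total end correction for an 8.5 mm plate is consistent with a per-end correction that is a fraction of the 0.8 mm hole width.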
Note that these larger radii would also cause the lattice parameter (a) to increase, due to the resultant wider spacing between holes; this would affect the image resolution in a practical measurement, and hence smaller radii would be preferred in any case. A final experiment was performed to demonstrate the imaging resolution potential of the SLM-fabricated metamaterial. The frequency of operation was chosen to be centred at 300 kHz (a frequency close to the expected FP resonance peak shown earlier in Fig. 6 for an aluminium metamaterial sample with d = 8 mm and w = 0.8 mm in water). Imaging resolution was measured by inserting a 0.7 mm wide (∼λ_water/7 at 300 kHz), 19 mm long slit aperture, machined within an additively manufactured 4.3 mm thick PLA plate, between the ultrasonic transmitter and the metamaterial, as shown in Fig. 11. The slit was positioned close (< 1 mm) to the metamaterial surface. The hydrophone was then scanned horizontally across the far side of the metamaterial, at a distance of 1 mm from it (i.e. at a total distance of 9 mm from the slit aperture). The aim of this experiment was to demonstrate that the ultrasonic signal is collected by the metamaterial and transferred with sub-wavelength resolution to the far side, using water as the medium. This occurs primarily because of coupling of so-called evanescent waves from each of the output holes in the metamaterial. At this frequency, the wavelength of the ultrasonic signal in water was λ_water = 4.9 mm, so that λ_water/w = 6.16. The metamaterial thus has sub-wavelength values for both the hole width (w) and the unit cell size (a = 1.2 mm). This experiment was thus a test of whether the features could be detected by the scanned hydrophone, even though the slit was less than a wavelength (λ_w) wide. Note that at a distance of 9 mm from the slit aperture, the slit would not normally be resolvable without the metamaterial in place. The slit was placed at a distance of ∼0.5 mm (∼λ_w/10) from the metamaterial; it has to be close, as the evanescent waves needed for metamaterial operation act over short distances (< λ_w). Note that there was also likely to be some ultrasonic transmission through the polymer material of the slit aperture component, but sufficient contrast would have existed to test the imaging performance in the presence of the sub-wavelength aperture. Fig. 10. Measured transmission for the flared holes (samples shown in Fig. 3, with the features given in Table 1); the numbers refer to the holes as labelled in Fig. 3. Fig. 11. Schematic diagram of the image resolution experiment at 300 kHz. This used an aperture test plate that was placed between the ultrasonic source and the metamaterial, next to the latter's surface. The miniature hydrophone receiver was then scanned in a horizontal line parallel to the output surface of the metamaterial. The ultrasonic source, slit aperture and metamaterial were held stationary during this measurement. The measurement was thus performed for the two cases, namely with and without the metamaterial in place, and the resultant normalised plots are shown together in Fig. 12(a). Note that in practice the received signal was of higher amplitude with the metamaterial absent, as would be expected due to losses at the input and output surfaces of the metamaterial. The plot with the metamaterial absent also exhibited a maximum value outside the slit region (identified by the vertical lines).
This is a result of the amplitude variations across the transducer beam profile and subsequent diffraction by the slit. It can be observed that insertion of the metamaterial concentrated the energy into a smaller lateral region. Dividing the two data sets allowed the true effect of the metamaterial to be observed, as shown in Fig. 12(b): the slit was detected very well, even though it is sub-wavelength in width. The metamaterial uses evanescent waves to couple together the ultrasonic output energy from multiple holes, each having a Fabry-Pérot resonance, to obtain these effects. The slit, being a single aperture, cannot itself function as a metamaterial, as this coupling does not exist; instead, it acts as a secondary acoustic source to demonstrate the detection of a sub-wavelength aperture. It should be noted that the lateral width of the slit aperture was not measured accurately. The resolution of such a measurement depends on both the hole width (w) and the unit cell size (a) in a complicated way that we are currently investigating; improving it would require smaller values of both w and a. However, the above represents the first time that such a result has been reported at these ultrasonic frequencies in water, and demonstrates how PBF fabrication techniques can be used to create metallic metamaterial structures of interest to the ultrasonic imaging community. It is thus demonstrated that the interaction between the FP resonances of the metamaterial and evanescent waves allows features as small as λ_w/7 to be detected in water, and that this has been made possible by the use of SLM fabrication. Conclusions This paper has demonstrated for the first time that metallic ultrasonic metamaterials can be fabricated to operate in water over the 200−800 kHz frequency range, important for many applications such as non-destructive evaluation and medical imaging. It was demonstrated that PBF processes such as SLM produced metamaterials that supported Fabry-Pérot (FP) resonances. The acoustic properties of the fabricated structures were measured at ultrasonic frequencies, where higher-order FP resonance peaks were observed using a needle hydrophone. The need for a metallic substrate, to ensure a reasonable acoustic impedance mismatch with water, was demonstrated in the numerical simulations. It was also shown that flared apertures at each end of the holes could be fabricated, and that this resulted in a change in the frequency response of the structure. The findings also suggest that the presence of high surface roughness did not unduly affect the generation of FP resonances within each hole of the metamaterial, although it might have been a limitation in certain builds with large flare radii. A measurement in the presence of a slit aperture demonstrated that a slit of sub-wavelength width was detected successfully with the metamaterial structure in place. Note that the hole dimensions used within these metamaterials, in terms of depth and width, both affect the amplitude and frequency response of the resonances within each hole; we aim to study this in future work. Fig. 12. Ultrasonic data taken by scanning the hydrophone across the 0.7 mm wide slit shown in Fig. 11, at a horizontal distance of 9 mm from the slit. The size and location of the slit are shown by the vertical lines.
NPK fertilizer on peanut and maize cultivars under an agroforestry system

Owing to its characteristics, sengon (Albizia chinensis) forest has potential for crop cultivation under an agroforestry (AF) system. The sengon canopy at 3-4 years old has a fraction of radiation transmission (FRT) of about 80%; moreover, soil fertility under the trees is medium, particularly in N (nitrogen) and P (phosphorus), with low K (potassium) content. This study was performed to observe the effect of NPK fertilizer on maize and peanut, representing C4 and C3 crop types, cultivated between sengon stands. The study was conducted under 3-, 4-, and 5-year-old sengon stands. Within each stand-age group, the experiment was designed as a randomized complete block design (RCBD) consisting of seven fertilizer treatments: no fertilizer as control, P and K fertilizer (3 levels), and N, P, and K fertilizer (3 levels); the data were analyzed using orthogonal contrasts. The results showed that peanut was cultivated successfully under all sengon stands (3, 4, and 5 years old), whereas maize only grew well under the 3-year-old stand. The application of P and K fertilizer increased the growth and yield of both crops, and the effect improved when N was applied at the same time.

Introduction

Peanut and maize are important crops that are cultivated after paddy (rice) in the same area. Recently, the rice field area in Indonesia has been threatened by conversion to other uses, so alternative land is needed. On the other hand, some forest areas have been developed as albizia plantations, considering their various advantages: use as a medicinal plant [1,2,3]; being a legume in symbiosis with many bacteria; and timber that can be harvested in a relatively short time (8 years). This situation creates a great opportunity for agricultural extension on suboptimal areas (crops among tree stands), namely agroforestry (AF) systems [4]. The AF system functions as an environmentally friendly form of crop cultivation [5] because it maintains carbon storage [6] and improves the social and economic level of farmers [7].

In crop cultivation under an AF system, the availability of light is a very important factor; both the fraction of radiation transmission (FRT) and the hydrological behavior of tree canopies depend on canopy structure (number and density of branches, and leaf types) [8]. Tree canopies develop in parallel with tree age; for example in coconuts, FRT is lower in the group of 8-year-old trees than in 4-6-year-old trees [9]. In 2-, 4- and 6-year-old albizia trees, FRT decreases from 35,500 to 27,025 and 19,800 Lux, respectively [10]. The decrease of FRT with age in sengon (Albizia sp.) is caused by the increase of canopy density. Light availability can be improved by trimming off one third of the lower canopy, as in teak-based (Tectona grandis) and pine (Pinus merkusii) AF [11]. Despite the limited availability of light, the Albizia-based AF system has the advantage of a legume tree that produces litter. At one year old, a sengon tree is able to produce 30 kg of litter per tree per year [12]. Sengon leaves are small and easy to decompose [13], have a low C/N ratio, and help maintain soil physical properties. Besides N, sengon litter can also be used to increase Mg [14]. Nutrient release from the litter of A. procera follows the trend K > N > P, and is found mostly in the pod, followed by the leaves and petiole [15].
Based on these facts, the sengon-based AF system has strong potential for organic crop cultivation. This study therefore observed the potential of maize and peanut under 3-, 4- and 5-year-old sengon-based AF systems, and how the plants respond to N, P, and K fertilizer.

Material and method

The experiment was conducted from March to December 2017 at KPH (Kesatuan Pengelolaan Hutan) Blitar, Perum Perhutani of East Java, at an elevation of 0-500 m above sea level (asl), with a geographical position of 8°09′0″ S and 112°0′0″ E. The experiment was conducted in two series: maize ('Pioneer' variety) and peanut ('Kelinci' variety). Each series was arranged in a randomized complete block design (RCBD) with two factors. The first factor was the age of the Albizia chinensis stand (3, 4, and 5 years old) and the second factor was fertilizer. In the maize series, the fertilizer treatments were P and K without N (50-50, 100-100, and 150-150 kg ha-1) and N, P, and K (100-50-50, 150-75-50, and 200-100-75 kg ha-1). In the peanut series, the fertilizer treatments were P and K without N (50-50, 100-100, and 150-150 kg ha-1) and N, P, and K (25-50-50, 50-100-100, and 75-150-150 kg ha-1). Recommended doses of urea, SP36 (superphosphate), and a potassium fertilizer were used for the research. The data were analyzed using the F test at α = 0.05 (analysis of variance), followed by orthogonal contrasts. The observed parameters were vegetative components, namely plant height (measured from the base of the stem to the top of the plant using a ruler), chlorophyll content (analyzed by the spectrophotometric method), N content (analyzed by the Kjeldahl method), and biomass (total dry weight of plants after drying in a 110 °C oven for 24 hours), plus seed production weight as the generative component.
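A minimal sketch of an orthogonal-contrast test like the one described above; the yield numbers are hypothetical, the contrast shown (control vs. all fertilized plots) is just one of the orthogonal set, and a simplified completely-randomized error term is used rather than the full RCBD analysis.

```python
import numpy as np
from scipy import stats

# Hypothetical plot-level yields (t/ha) for 7 treatments x 3 replicates.
# Treatment order: control, P1K1, P2K2, P3K3, N1P1K1, N2P2K2, N3P3K3.
yields = np.array([
    [1.9, 2.1, 2.0],   # control (no fertilizer)
    [2.4, 2.6, 2.5],   # P1K1
    [2.8, 2.9, 3.0],   # P2K2
    [3.1, 3.2, 3.0],   # P3K3
    [3.3, 3.5, 3.4],   # N1P1K1
    [3.8, 3.9, 3.7],   # N2P2K2
    [4.1, 4.2, 4.0],   # N3P3K3
])
means = yields.mean(axis=1)

# Orthogonal contrast: control vs. the six fertilized treatments.
# The coefficients sum to zero, as required for a valid contrast.
c = np.array([6, -1, -1, -1, -1, -1, -1])
L = float(c @ means)

# Standard error of the contrast from the pooled within-treatment variance.
n = yields.shape[1]
mse = yields.var(axis=1, ddof=1).mean()
se = np.sqrt(mse * np.sum(c ** 2) / n)
t = L / se
df = yields.size - yields.shape[0]          # error degrees of freedom
p = 2 * stats.t.sf(abs(t), df)
print(f"contrast = {L:.2f}, t = {t:.2f}, p = {p:.4f}")
```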
Maize

Maize cultivation in the 3-, 4-, and 5-year-old albizia-based AF systems, with FRT of 82, 80 and 56% respectively, showed that only the crop under the 3-year-old albizia was able to grow and produce. The failure of maize under the 5-year-old albizia AF is suggested to be due to the low FRT, since maize, a C4 plant, is responsive to high irradiation [16,17]. However, maize also failed under the 4-year-old albizia AF despite its relatively high FRT. This indicates that maize growth is determined not only by light, but also by the humidity and high temperature needed to maintain a high decomposition rate.

Fertilizers play a major role in the growth and production of maize. Plant height, biomass, ear weight and yield (production) were higher in fertilized plants (with P and K, as well as with N, P, and K), and both fertilizer regimes gave higher yields with increasing doses (Table 1). The highest maize production (4.14 Mg ha-1) was obtained in plants fertilized with N, P, and K (200-150-150 kg ha-1), which was still lower than the varietal description of 9.1 t ha-1 [18]. This means that the availability of nutrients from albizia litter is still low. Low nutrient availability can occur because the relatively low albizia canopy density raises soil temperature and the decomposition rate, and nutrients released by mineralization are lost through erosion and leaching. Plant height is closely correlated with (r > 0.9), and strongly determines (R2: 0.8 to 0.99), biomass, ear weight, and production (Figure 1).

Peanut

In contrast to maize, peanut was successfully grown in the 3-, 4- and 5-year-old albizia-based AF systems, with nearly identical height and biomass at the three albizia ages. Plant height, as the response parameter for light, ranged from 40-41 cm under 3-, 4- and 5-year-old albizia, and biomass, as a photosynthetic parameter in the vegetative phase, ranged from 41 to 44 g plant-1 (Figure 2). Vegetative growth was almost the same among the different light levels, indicating that peanut is tolerant of low light (a C3 plant). Differences occurred in the generative growth of pod formation. Pod weight ranged only from 11 to 16 g, indicating a pod number lower than the varietal description ('Kelinci'): pod weight per plant of about 24 g (15 pods, with a 100-seed weight of 45 g) [19]. The number of soybean pods in an earlier study of albizia-based AF [10] ranged only from 4 to 9 per plant. Peanut pods are formed on the branches of the plant, so a smaller number of branches means a smaller number of pods. Although plant heights were similar, the number of pods was smaller, which means the number of nodes was lower or the internodes were longer. The low number of pods resulted in lower production. The effect of shade depends on the species; it can stimulate shoot growth, inhibit it, or have no effect at all [20].

The application of fertilizer affected plant height, biomass, pod weight, and seed production. The addition of N to PK fertilizer increased these parameters, as did increasing the dose of PK fertilizer and of NPK fertilizer (Table 2). The increase in biomass, pod weight, and seed production occurred in groundnut under all of the 3-, 4-, and 5-year-old albizia AF systems. The effect of N concentration on net photosynthesis differs with light intensity: under low irradiation, N fertilizer is positively correlated with net photosynthesis [21]. However, increasing N under low irradiation requires a balanced availability of other elements such as P and K, because photosynthesis needs energy in the form of ATP (the role of P) and stomatal regulation (the role of K). This is similar to shade research studying manure dose as a fertilizer, which suggested that the best growth of Curcuma zedoaria L. occurs at 25% shade (low light) with 300 g of manure per polybag [22].

Note to Table 2: P1K1, P2K2 and P3K3 are P and K doses of 50-50, 100-100, and 150-150 kg ha-1; N1P1K1, N2P2K2 and N3P3K3 are NPK doses of 25-50-50, 50-100-100, and 75-150-150 kg ha-1; TP is no fertilizer. Different letters behind the numbers in one column indicate significance (between a and b; m and n; p, q, and r; and x, y and z).
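The height-yield relationship reported above (r > 0.9, R2 of 0.8 to 0.99) is an ordinary linear regression; a sketch with hypothetical plot means follows.

```python
import numpy as np

# Hypothetical plot means: maize plant height (cm) vs. production (Mg/ha).
height = np.array([150.0, 165.0, 178.0, 190.0, 201.0, 215.0, 228.0])
production = np.array([1.8, 2.2, 2.6, 3.0, 3.3, 3.8, 4.1])

slope, intercept = np.polyfit(height, production, 1)   # least-squares line
r = np.corrcoef(height, production)[0, 1]              # Pearson correlation
r2 = r ** 2                                            # coefficient of determination

print(f"production ~ {slope:.3f} * height + {intercept:.2f}")
print(f"r = {r:.3f}, R^2 = {r2:.3f}")
```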
Unsupervised Discovery of Parts, Structure, and Dynamics

Humans easily recognize object parts and their hierarchical structure by watching how they move; they can then predict how each part moves in the future. In this paper, we propose a novel formulation that simultaneously learns a hierarchical, disentangled object representation and a dynamics model for object parts from unlabeled videos. Our Parts, Structure, and Dynamics (PSD) model learns to, first, recognize the object parts via a layered image representation; second, predict hierarchy via a structural descriptor that composes low-level concepts into a hierarchical structure; and third, model the system dynamics by predicting the future. Experiments on multiple real and synthetic datasets demonstrate that our PSD model works well on all three tasks: segmenting object parts, building their hierarchical structure, and capturing their motion distributions.

INTRODUCTION

What makes an object an object? Researchers in cognitive science have investigated this fundamental problem in depth; results suggest that humans, even young infants, recognize objects as continuous, integrated regions that move together (Carey, 2009; Spelke & Kinzler, 2007). Watching objects move, infants gradually build the internal notion of objects in their mind. The whole process requires little external supervision from experts.

Motion gives us not only the concept of objects and parts, but also their hierarchical structure. The classic study by Johansson (1973) reveals that humans recognize the structure of a human body from a few moving dots representing keypoints on a human skeleton. This connects to classic Gestalt theory in psychology (Koffka, 2013), which argues that human perception is holistic and generative, explaining scenes as a whole instead of in isolation. In addition to being unsupervised and hierarchical, our perception gives us concepts that are fully interpretable and disentangled. With an object-based representation, we are able to reason about object motion, predict what is going to happen in the near future, and imagine counterfactuals like "what happens if?" (Spelke & Kinzler, 2007).

How can we build machines of such competency? Would it be possible to build an artificial system that learns an interpretable, hierarchical representation with system dynamics, purely from raw visual data with no human annotations? Recent research in unsupervised and generative deep representation learning has been making progress along this direction: there have been models that efficiently explain multiple objects in a scene (Huang & Murphy, 2015), some simultaneously learning an interpretable representation (Chen et al., 2016). Most existing models, however, either do not produce a structured, hierarchical object representation, or do not characterize system dynamics.

In this paper, we propose a novel formulation that learns an interpretable, hierarchical object representation and scene dynamics by predicting the future. Our model requires no human annotations, learning purely from unlabeled videos of paired frames. During training, the model sees videos of objects moving; during testing, it learns to recognize and segment each object and its parts, build their hierarchical structure, and model their motion distribution for future frame synthesis, all from a single image.
Figure 1: Observing a human moving, people are able to perceive disentangled object parts, understand their hierarchical structure, and capture their corresponding motion fields, all without annotations.

Our model, named Parts, Structure, and Dynamics (PSD), learns to recognize the object parts via a layered image representation. PSD learns their hierarchy via a structural descriptor that composes low-level concepts into a hierarchical structure. Formulated as a fully differentiable module, the structural descriptor can be trained end-to-end within a neural network. PSD learns to model the system dynamics by predicting the future.

We evaluate our model in multiple ways. On real and synthetic datasets, we first examine its ability to learn the concept of objects and segment them. We then compute the likelihood that it correctly captures the hierarchical structure in the data. We finally validate how well it characterizes object motion distributions and predicts the future. Our system works well on all these tasks, with minimal input requirements (two frames during training, and one during testing). While previous state-of-the-art methods that jointly discover objects and relations and predict future frames only work on binary images of shapes and digits, our PSD model works well on complex real-world RGB images and requires fewer input frames.

RELATED WORK

Our work is closely related to research on learning an interpretable representation with a neural network (Hinton & Van Camp, 1993; Kulkarni et al., 2015b; Chen et al., 2016; Higgins et al., 2017). Recent papers explored using deep networks to efficiently explain an object (Kulkarni et al., 2015a), a scene with multiple objects (Huang & Murphy, 2015), or sequential data (Li & Mandt, 2018; Hsu et al., 2017). In particular, Chen et al. (2016) proposed to learn a disentangled representation without direct supervision. Wu et al. (2017) studied video de-animation, building an object-based, structured representation from a video. Higgins et al. (2018) learned an implicit hierarchy of abstract concepts from a few symbol-image pairs. Compared with these approaches, our model not only learns to explain observations, but also builds a dynamics model that can be used for future prediction.

There has also been extensive research on hierarchical motion decomposition (Ross & Zemel, 2006; Ross et al., 2010; Grundmann et al., 2010; Xu et al., 2012; Flores-Mangas & Jepson, 2013; Jain et al., 2014; Ochs et al., 2014; Pérez-Rúa et al., 2016; Gershman et al., 2016; Esmaeili et al., 2018). These papers focus on segmenting objects or parts from videos and inferring their hierarchical structure. In this paper, we propose a model that learns not only to segment parts and infer their structure, but also to capture each part's dynamics for synthesizing possible future frames.

Physical scene understanding has attracted increasing attention in recent years (Fragkiadaki et al., 2016; Chang et al., 2017; Finn et al., 2016; Ehrhardt et al., 2017; Shao et al., 2014). Researchers have attempted to go beyond the traditional goals of high-level computer vision, inferring "what is where", to capture the physics needed to predict the immediate future of dynamic scenes, and to infer the actions an agent should take to achieve a goal. Most of these efforts do not attempt to learn physical object representations from raw observations.
Some systems emphasize learning from pixels but without an explicitly object-based representation (Fragkiadaki et al., 2016), which makes generalization challenging. Others learn a flexible model of the dynamics of object interactions, but assume a decomposition of the scene into physical objects and their properties rather than learning directly from images (Chang et al., 2017; Kipf et al., 2018). A few very recent papers have proposed to jointly learn a perception module and a dynamics model (Watters et al., 2017; Wu et al., 2017; van Steenkiste et al., 2018). Our model moves further by simultaneously discovering the hierarchical structure of object parts.

Another line of related work is future state prediction in either image pixels (Xue et al., 2016; Mathieu et al., 2016; Lotter et al., 2017; Lee et al., 2018; Balakrishnan et al., 2018b) or object trajectories (Kitani et al., 2017; Walker et al., 2016). Some of these papers, like ours, build on layered motion representations (Wang & Adelson, 1993), but they often fail to model the object hierarchy. There has also been abundant research making use of physical models for human or scene tracking (Salzmann & Urtasun, 2011; Kyriazis & Argyros, 2013; Vondrak et al., 2013; Brubaker et al., 2009). Compared with these papers, our model learns to discover the hierarchical structure of object parts purely from visual observations, without resorting to prior knowledge.

FORMULATION

By observing objects move, we aim to learn the concept of object parts and their relationships. Take the human body as an example (Figure 1). We want our model to parse human parts (e.g., torso, hands, and legs) and to learn their structure (e.g., hands and legs are both parts of the human body).

Formally, given a pair of images {I_1, I_2}, let M be the Lagrangian motion map (i.e., optical flow). Consider a system that learns to segment object parts and to capture their motions, without modeling their structure. Its goal is to find a segment decomposition of I_1 = {O_1, O_2, ..., O_n}, where each segment O_k corresponds to an object part with distinct motion, and let {M^g_1, M^g_2, ..., M^g_n} be their corresponding motions. Beyond that, we assume that these object parts form a hierarchical tree structure: each part k has a parent p_k, unless it is itself the root of a motion tree. Its motion M^g_k can therefore be decomposed into its parent's motion M^g_{p_k} and a local motion component M^l_k within its parent's reference frame. Specifically, M^g_k = M^g_{p_k} + M^l_k if k is not a root. Here we make use of the fact that the Lagrangian motion components M^l_k and M^g_{p_k} are additive. Figure 2 gives an intuitive example: knowing that the legs are part of the human body, the legs' motion can be written as the sum of the body's motion (e.g., moving to the left) and the legs' local motion (e.g., moving to lower or upper left).

Therefore, the objective of our model is, in addition to identifying the object components {O_k}, learning the hierarchical tree structure {p_k} to effectively and efficiently explain the object's motion. Such an assumption makes it possible to decompose complex object motions into simple and disentangled local motion components. Reusing local components along the hierarchical structure helps to reduce the description length of the motion map M. Therefore, such a decomposition should naturally emerge within a design with an information bottleneck that encourages compact, disentangled representations.
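To make the additive decomposition concrete, here is a minimal NumPy sketch that recovers global motions from local motions given parent pointers; the toy flow fields and the part layout are made up for illustration.

```python
import numpy as np

def global_motions(local, parent):
    """Recover global motions M^g_k = M^g_{p_k} + M^l_k by walking up the tree.

    local : array of shape (n, H, W, 2), per-part local flow fields M^l_k
    parent: list of length n; parent[k] is k's parent index, or -1 for a root
    """
    n = local.shape[0]
    glob = np.empty_like(local)

    def resolve(k):
        if parent[k] == -1:
            return local[k]                    # a root's global motion is its local motion
        return resolve(parent[k]) + local[k]   # Lagrangian motions are additive

    for k in range(n):
        glob[k] = resolve(k)
    return glob

# Toy example: part 0 = body (root), parts 1 and 2 = two legs.
H = W = 4
local = np.zeros((3, H, W, 2))
local[0, ..., 0] = -1.0   # body moves left
local[1, ..., 1] = +0.5   # leg 1 moves down, relative to the body
local[2, ..., 1] = -0.5   # leg 2 moves up, relative to the body
g = global_motions(local, parent=[-1, 0, 0])
# g[1] is the sum of the body's motion and leg 1's local motion.
```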
In the next section, we introduce the general philosophy behind our model design and the individual components within.

METHOD

In this section, we discuss our approach to learning the disentangled, hierarchical representation. Our model learns by predicting future motions and synthesizing future frames, without manual annotations. Figure 3 shows an overview of our Parts, Structure, and Dynamics (PSD) model.

4.1 OVERVIEW

Motion can be decomposed in a layer-wise manner, separately modeling each object component's movement (Wang & Adelson, 1993). Motivated by this, our model first decomposes the input frame I_1 into multiple feature maps using an image encoder (Figure 3c). Intuitively, these feature maps correspond to separate object components. Our model then performs convolutions (Figure 3d) on these feature maps using separate kernels obtained from a kernel decoder (Figure 3b), and synthesizes the local motions M^l_k of the separate object components with a motion decoder (Figure 3e). After that, our model employs a structural descriptor (Figure 3f) to recover the global motions M^g_k from the local motions M^l_k, and then computes the overall motion M. Finally, our model uses an image decoder (Figure 3g) to synthesize the next frame I_2 from the input frame I_1 and the overall motion M.

Our PSD model can be seen as a conditional variational autoencoder. During training, it employs an additional motion encoder (Figure 3a) to encode the motion into the latent representation z; during testing, it instead samples the representation z from its prior distribution p_z(z), which is assumed to be a multivariate Gaussian where each dimension is i.i.d., zero-mean, and unit-variance. We highlight the different behaviors of training and testing in Algorithms 1 and 2.

NETWORK STRUCTURE

We now introduce each component.

Dimensionality. The hyperparameter d is set to 32, which determines the maximum number of objects we are able to deal with. During training, the variational loss encourages our model to use as few dimensions in the latent representation z as possible; consequently, only a few dimensions learn useful representations, each corresponding to one particular object, while all the other dimensions stay very close to Gaussian noise.

Motion Encoder. Our motion encoder takes the flow field M between two consecutive frames as input, at a resolution of 128×128. It applies seven convolutional layers, and the output is a 64-channel feature map. We then upsample the feature maps by 4× with nearest-neighbor sampling, so that the final resolution of the feature maps is 128×128.

Cross Convolution. The cross convolution layer (Xue et al., 2016) applies the convolutional kernels learned by the kernel decoder to the feature maps learned by the image encoder. Here, the convolution operations are carried out in a channel-wise manner (also known as depthwise separable convolutions, Chollet (2017)): each of the d convolutional kernels is applied to its corresponding channel in the feature map. The output is a d-channel transformed feature map.
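The channel-wise cross convolution described above maps directly onto grouped convolution in PyTorch; the tensor shapes below follow the text, but the variable names and kernel size are ours.

```python
import torch
import torch.nn.functional as F

d, H, W, k = 32, 128, 128, 5             # d channels, one object slot per channel

feature_maps = torch.randn(1, d, H, W)   # from the image encoder (Figure 3c)
kernels = torch.randn(d, 1, k, k)        # one k x k kernel per channel,
                                         # predicted by the kernel decoder (Figure 3b)

# groups=d makes this a depthwise convolution: kernel i is applied only to
# channel i, exactly the channel-wise scheme described in the text.
transformed = F.conv2d(feature_maps, kernels, padding=k // 2, groups=d)
assert transformed.shape == (1, d, H, W)
```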
Motion Decoder. Our motion decoder takes the transformed feature map as input and estimates the x-axis and y-axis motions separately. For each axis, the network applies two 9×9, two 5×5 and two 1×1 depthwise separable convolutional layers, all with 32 channels. We stack the outputs from the two branches together, so the output motion has a size of d×128×128×2. Note that the local motion M^l_k is determined by z_k only.

Structural Descriptor. Our structural descriptor recovers the global motions {M^g_k} from the local motions {M^l_k} and the hierarchical tree structure {p_k}. Let P_k denote the set of ancestors of part k; unrolling the recursion M^g_k = M^g_{p_k} + M^l_k gives M^g_k = Σ_i S_ik · M^l_i, where the structural matrix S is defined by S_ik = [i ∈ P_k ∪ {k}], i.e., each binary indicator S_ik represents whether O_i is an ancestor of O_k (counting each part as its own ancestor). This matrix is what we aim to learn, and it is shared across different data points. In practice, we relax the binary constraints on S to [0, 1] to make this module differentiable: S_ik = sigmoid(W_ik), where the W_ik are trainable parameters. Finally, the overall motion can be simply computed as M = Σ_k M^g_k.

Image Decoder. Given the input frame I_1 and the predicted overall motion M, we employ the U-Net (Ronneberger et al., 2015) as our image decoder to synthesize the future frame I_2.

TRAINING DETAILS

Our objective function L is a weighted sum of three components: L = L_recon + β·L_reg + γ·L_struct, where β and γ are two weighting factors (Equation 3). The first component is the pixel-wise reconstruction loss, which enforces our model to accurately estimate the motion M and synthesize the future frame I_2: L_recon = ||M − M̂||_2 + α·||I_2 − Î_2||_2, where α is a weighting factor (set to 10^3 in our experiments). The second component is the variational loss, which encourages our model to use as few dimensions in the latent representation z as possible (Xue et al., 2016; Higgins et al., 2017): L_reg = D_KL(N(z_mean, z_var) || p_z(z)), where D_KL(· || ·) is the KL-divergence and p_z(z) is the prior distribution of the latent representation (a normal distribution in our experiments). The last component is the structural loss, which encourages our model to learn the hierarchical tree structure so that the motions M^l can be represented efficiently: L_struct = Σ_{k=1}^{d} ||M^l_k||_2. Note that we apply the structural loss to the local motion fields, not to the structural matrix; in this way, the structural loss serves as a regularization, encouraging the motion fields to have small values.

Figure 4: Results of synthesizing future frames (a-e) and learning hierarchical structure (f) on shapes.

We implement our PSD model in PyTorch (Paszke et al., 2017). Optimization is carried out using ADAM (Kingma & Ba, 2015) with β_1 = 0.9 and β_2 = 0.999. We use a fixed learning rate of 10^-3 and a mini-batch size of 32. We propose a two-stage optimization schema, which first learns the disentangled representation and then the hierarchical representation. In the first stage, we encourage the model to learn a disentangled representation (without structure): we set γ in Equation 3 to 0 and fix the structural matrix S to the identity I. The β in Equation 3 plays the same role as in the β-VAE (Higgins et al., 2017), so larger β's encourage a more disentangled representation. We initialize β to 0.1 and then adaptively double its value whenever the reconstruction loss reaches a preset threshold. In the second stage, we train the model to learn the hierarchical representation: we fix the weights of the motion encoder and kernel decoder, set β to 0, initialize the structural matrix S, and optimize it jointly with the image encoder and motion decoder. We adaptively tune the value of γ in the same way as β in the first stage.
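A minimal sketch of the structural descriptor as a differentiable module, assuming the relaxed matrix and motion shapes described above; this is our reading of the text, not the authors' released code.

```python
import torch
import torch.nn as nn

class StructuralDescriptor(nn.Module):
    """Relaxed structural matrix S = sigmoid(W), shared across data points."""

    def __init__(self, d):
        super().__init__()
        # Initialize W so that sigmoid(W) starts near the identity, mirroring
        # the first training stage where S is fixed to I (no structure).
        self.W = nn.Parameter(torch.eye(d) * 6.0 - 3.0)

    def forward(self, local):
        # local: (B, d, H, W, 2), the local motion fields M^l_k
        S = torch.sigmoid(self.W)              # S_ik: "i is an ancestor of k (or i = k)"
        # M^g_k = sum_i S_ik * M^l_i
        glob = torch.einsum('ik,bihwc->bkhwc', S, local)
        overall = glob.sum(dim=1)              # M = sum_k M^g_k
        return glob, overall

desc = StructuralDescriptor(d=32)
local = torch.randn(2, 32, 64, 64, 2)
glob, overall = desc(local)
# Structural regularizer: sum_k ||M^l_k||_2, applied to the local motions.
struct_loss = local.flatten(2).norm(dim=2).sum(dim=1).mean()
```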
EXPERIMENTS

We evaluate our model in three diverse settings: i) simple yet nontrivial shapes and digits, ii) Atari games of basketball playing, and iii) real-world human motions.

MOVEMENT OF SHAPES AND DIGITS

We first evaluate our method on shapes and digits. For each dataset, we rendered 100,000 pairs in total for training and 10,000 for testing, with random visual appearance (i.e., sizes, positions, and colors). For the shapes dataset, we use three types of shapes: circles, triangles and squares. Circles always move diagonally, while the other two shapes' movements consist of two sub-movements: moving together with circles and moving in their own directions (triangles horizontally, and squares vertically). Figure A3 demonstrates the motion distributions of each shape. The complex global motions (after the structural descriptor) are decomposed into several simple local motions (before the structural descriptor); these local motions are much easier to represent.

Figure: (a) test on the original dataset with 2 squares; (b) generalization to a new dataset with 3 squares.

We also construct an additional dataset with up to nine different shapes. We assign these shapes to four different groups: i) square and two types of parallelograms, ii) circle and two types of triangles, iii) two types of trapezoids, and iv) pentagon. The movements of shapes in the same group have intrinsic relations, while shapes in different groups are independent of each other. Each of the nine shapes has its own motion direction. In the first group, the tree structure is the same as that of our original shapes dataset: replacing circles with squares, triangles with left parallelograms, and squares with right parallelograms. In the second group, the circle and the two types of triangles form a chain-like structure, similar to the one in our digits dataset. In the third group, the structure is a chain containing the two types of trapezoids. In the last group, there is only a pentagon.

As for the digits dataset, we use six types of hand-written digits from MNIST (LeCun et al., 1998). These digits are divided into two groups: 0's, 1's and 2's are in the first group, and 3's, 4's and 5's in the second. The movements of digits in the same group have intrinsic relations, while digits in different groups are independent of each other. In the first group, the tree structure is the same as that of our shapes dataset: replacing circles with 0's, triangles with 1's, and squares with 2's. The second group has a chain-like structure: 3's move diagonally, 4's move together with 3's while also moving horizontally, and 5's move with 4's while also moving vertically. After training, our model should be able to synthesize future frames, segment different objects (i.e., shapes and digits), and discover the relationships between these objects.

Future Prediction. In Figure 4d and Figure 6d, we present qualitative results of synthesizing future frames. Our PSD model captures the different motion patterns for each object and synthesizes multiple possible future frames. Figure A3 summarizes the distribution of the sampled motions of these shapes; our model learns to approximate each shape's dynamics in the training set.

Latent Representation. Analyzing the representation z, we observe that its intrinsic dimensionality is very low. On the shapes dataset, there are three dimensions learning meaningful representations, each corresponding to one particular shape, while all the other dimensions are very close to Gaussian noise.
Similarly, on the digits dataset, there are six dimensions, corresponding to the different digits. In further discussions, we focus only on these meaningful dimensions.

Object Segmentation. For each meaningful dimension, the feature map can be considered the segmentation mask of one particular object (by thresholding). We evaluate our model's ability to learn the concept of objects and segment them by computing the intersection over union (IoU) between the model's prediction and the ground-truth instance mask. We compare our model with Neural Expectation Maximization (NEM), proposed by Greff et al. (2017), and Relational Neural Expectation Maximization (R-NEM), proposed by van Steenkiste et al. (2018). As these two methods both take a sequence of frames as input, we feed the two input frames repetitively (I_1, I_2, I_1, I_2, I_1, I_2, ...) into these models for a fair comparison. In addition, as these methods do not learn the correspondence of objects across data points, we manually iterate over all possible mappings and report the one with the best performance. We present qualitative results in Figure 7 and Figure 5b, and quantitative results in Table 1. Our PSD model significantly outperforms both baselines. In particular, R-NEM and our PSD model focus on complementary topics: R-NEM learns to identify instances through temporal reasoning, using signals across the entire video to group pixels into objects; our PSD model learns the appearance prior of objects: by watching how they move, it learns to recognize how object parts can be grouped based on their appearance, and it can be applied to static images. As the videos in our dataset have only two frames, temporal signals alone are often not enough to tell objects apart. This explains the less compelling results from R-NEM; we include a more systematic study in Section A.3 to verify this.

To evaluate the generalization ability, we train our PSD model on a dataset with two squares, among other shapes, and test it on a dataset with three squares. In each piece of data, all squares move together and have the same motion; other settings are the same as in the original shapes dataset. Figure 8 shows segmentation results on these two datasets. Our model generalizes to recognize the three squares simultaneously, despite having seen at most two in training.

Hierarchical Structure. To discover the tree structure between these dimensions, we binarize the structural matrix S_ik with a threshold of 0.5 and recover the hierarchical structure from it. We compare our PSD model with R-NEM and with Neural Relational Inference (NRI), proposed by Kipf et al. (2018). As the NRI model requires objects' feature vectors (i.e., location and velocity) as input, we directly feed in the coordinates of the different objects and ask it to infer the underlying interaction graph. In Figure 4f and Figure 6f, we visualize the hierarchical tree structures obtained from these models. Our model is capable of discovering the underlying structure, while the two baselines fail to learn any meaningful relationships. This might be because NRI and R-NEM both assume that the system dynamics are fully characterized by the current states and interactions, and therefore they are not able to model the uncertainties in the system dynamics. On the challenging dataset with more shapes, our PSD model is still able to discover the underlying structure among them (see Figure 5c).
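For reference, the IoU score used in the segmentation comparison above reduces to a few lines; the 0.5 threshold used to binarize the feature map here is our placeholder.

```python
import numpy as np

def iou(pred_map, gt_mask, thresh=0.5):
    """Intersection over union between a thresholded feature map and a mask."""
    pred = pred_map > thresh          # binarize the per-dimension feature map
    gt = gt_mask.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union > 0 else 1.0

# Toy example: a 4x4 prediction vs. ground truth.
pred_map = np.array([[0.9, 0.8, 0.1, 0.0],
                     [0.7, 0.9, 0.2, 0.0],
                     [0.1, 0.1, 0.0, 0.0],
                     [0.0, 0.0, 0.0, 0.0]])
gt_mask = np.zeros((4, 4))
gt_mask[:2, :2] = 1
print(iou(pred_map, gt_mask))   # 1.0 here: the thresholded map matches exactly
```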
ATARI GAMES OF PLAYING BASKETBALL

We then evaluate our model on a dataset of Atari games; in particular, we select the Basketball game from the Atari 2600. In this game, two players compete with each other, and each player can move in eight different directions. The offensive player constantly dribbles the ball and throws it at some moment, while the defensive player tries to steal the ball from his opponent. We downloaded a video of this game from YouTube and constructed a dataset with 5,000 pairs for training and 500 for testing.

Our PSD model discovers three meaningful dimensions in the latent representation z. We visualize the feature maps of these three dimensions in Figure 9. We observe that one dimension (Figure 9d) is learning the offensive player with the ball, another (Figure 9e) is learning the ball, and the third (Figure 9f) is learning the defensive player. We construct the hierarchical tree structure among these three dimensions from the structural matrix S. As illustrated in Figure 9g, our PSD model is able to discover the relationship between the ball and the players: the offensive player controls the ball. This is because our model observes that the ball always moves along with the offensive player.

MOVEMENT OF HUMANS

We finally evaluate our method on two datasets of real-world human motions: the human exercise dataset used in Xue et al. (2016) and the yoga dataset used in Balakrishnan et al. (2018a). We estimate the optical flow between frames with an off-the-shelf package (Liu, 2009). Compared with the previous datasets, these two require much more complicated visual perception, and they have challenging hierarchical structures. The human exercise dataset has 50,000 pairs of frames for training and 500 for testing; the yoga dataset has 4,720 pairs of frames for training and 526 for testing.

Future Prediction. In Figure 10 and Figure 11, we present qualitative results of synthesizing future frames. Our model is capable of predicting multiple future frames, each with a different motion. We compare with 3DcVAE, which takes one frame as input and predicts the next 16 frames. As our training dataset only has paired frames, for a fair comparison we use the repetition of two frames as input: (I_1, I_2, I_1, I_2, ..., I_1, I_2). We also use the same optical flow (Liu, 2009) for both methods. In Figure 12, the future frames predicted by 3DcVAE have many more artifacts compared with our PSD model.

Object Segmentation. In Figure 13 and Figure 14, we visualize the feature maps corresponding to the active latent dimensions. It turns out that each of these dimensions corresponds to one particular human part: full torsos (13c, 14c), upper torsos (13d), arms (13e), left arms (14d), right arms (14e), right legs (13f, 14g), and left legs (13g, 14f). Note that it is extremely challenging to distinguish different parts from their motions, because different parts (e.g., arms and legs) might have similar motions (see Figure 13b). R-NEM is not able to segment any meaningful parts, let alone structure, while our PSD model gives imperfect yet reasonable part segmentation results. For quantitative evaluation, we collect ground-truth part segmentations for 30 images and compute the intersection over union (IoU) between the ground truth and the predictions of our model and the two baselines (NEM, R-NEM). The quantitative results are presented in Table 2. Our PSD model significantly outperforms the two baselines.

Figure 13: Results of segmenting parts (c-g) and learning hierarchical structure (h) on human motions.
Hierarchical Structure. We recover the hierarchical tree structure among these dimensions from the structural matrix S. From Figure 13h, our PSD model is able to discover that the upper torso and the legs are part of the full torso, and that the arm is part of the upper torso; from Figure 14h, our PSD model discovers that the arms and legs are parts of the full torso.

CONCLUSION

We have presented a novel formulation that simultaneously discovers object parts, their hierarchical structure, and the system dynamics from unlabeled videos. Our model uses a layered image representation to discover basic concepts and a structural descriptor to compose them. Experiments suggest that it works well on both real and synthetic datasets for part segmentation, hierarchical structure recovery, and motion prediction. We hope our work will inspire future research along the direction of learning structural object representations from raw sensory inputs.

A.2 MOTION DISTRIBUTION OF SHAPES DATASET

In Figure A3, we demonstrate the motion distributions of each shape.

Figure A3: Motion distributions of different shapes before and after the structural descriptor. The first row is the ground truth and the second row is the prediction of our model.

A.3 ADDITIONAL RESULTS OF R-NEM

As mentioned in the main paper, R-NEM and our PSD model focus on complementary topics: R-NEM learns to identify instances through temporal reasoning, using signals across the entire video to group pixels into objects; our PSD model learns the appearance prior of objects: by watching how they move, it learns to recognize how object parts can be grouped based on their appearance, and it can be applied to static images. As the videos in our dataset have only two frames, temporal signals alone are often not enough to tell objects apart. This may explain the less compelling results from R-NEM. Here, we include a more systematic study to verify that. We train R-NEM with three types of input: 1) only one frame; 2) two input frames repeated (the setup we used on our dataset, where videos only have two frames); 3) longer videos with 20 sequential frames. Figure A4 and Table A1 show that the results with 20-frame input are significantly better than with the previous two. R-NEM handles occluded objects given long trajectories, where each object appears without occlusion in at least one of the frames.
Inhibition of muscarinic receptor-linked phospholipase D activation by association with tubulin.

Mammalian phospholipase D (PLD) is considered a key enzyme in the transmission of signals from various receptors, including muscarinic receptors. PLD activation is a rapid and transient process, but no negative regulator had been found that inhibits signal-dependent PLD activation. Here, for the first time, we report that tubulin binding to PLD2 is an inhibition mechanism for muscarinic receptor-linked PLD2 activation. Tubulin was identified in an immunoprecipitated PLD2 complex from COS-7 cells by peptide mass fingerprinting. The direct interaction between PLD2 and tubulin was found to be mediated by a specific region of PLD2 (amino acids 476-612). PLD2 was potently inhibited (IC50 < 10 nM) by tubulin binding in vitro. In cells, the interaction between PLD2 and tubulin was increased by the microtubule-disrupting agent nocodazole and reduced by the microtubule-stabilizing agent Taxol. Moreover, PLD2 activity was found to be inversely correlated with the level of monomeric tubulin. In addition, we found that the interaction with, and inhibition of, PLD2 by monomeric tubulin is important for the muscarinic receptor-linked PLD signaling pathway. The interaction between PLD2 and tubulin was increased only after 1-2 min of carbachol stimulation, when carbachol-stimulated PLD2 activity was decreased. Expression of the tubulin-binding region of PLD2 blocked the later decrease in carbachol-induced PLD activity by masking tubulin binding. Taken together, these results indicate that an increase in the local membrane concentration of monomeric tubulin inhibits PLD2 activity, and they provide a novel mechanism for the inhibition of muscarinic receptor-induced PLD2 activation by interaction with tubulin.

Mammalian phospholipase D (PLD) hydrolyzes membrane phosphatidylcholine to generate phosphatidic acid and choline. PLD activity is dramatically increased in response to a variety of signals, including hormones, neurotransmitters, and growth factors (1). The PLD products, phosphatidic acid itself or the hydrolyzed product diacylglycerol, are known to act as intracellular lipid second messengers and to be involved in multiple physiological events, such as the promotion of mitogenesis, stimulation of respiratory bursts, secretory processes, and actin cytoskeletal reorganization (2-7). Therefore, signal-dependent PLD activity must be tightly controlled in cells.

Many reports have addressed the mechanisms of receptor-mediated PLD activation. Although the mammalian PLD isozymes, PLD1 and PLD2, have somewhat different regulatory properties, in general agonist-induced PLD is activated by various protein kinases, including protein kinase C, protein-tyrosine kinases, and the MAP kinase family, in addition to small G proteins of the ARF, Rho, and Ras families (8-13). The signal-dependent activation of PLD is rapid and transient. Although the activation kinetics depend on the stimulus and cell type, the PLD signal is usually diminished within 10 min (14). However, the mechanisms involved in PLD signal inhibition remain unknown. Signaling proteins must be tightly regulated with respect to duration and strength (15). Some inhibitors of PLD activity have been reported (16-25), but the roles of these inhibitors in signal-dependent PLD activity have not been demonstrated, and inhibition of signal-dependent PLD activity by a negative regulator has not been reported.
Members of the muscarinic acetylcholine receptor family (M1-M5) are considered to play important roles in various neurological processes such as learning, memory, emotion, perception, and cognition, both in the central nervous system and in the body periphery (26, 27). These receptors belong to a family of receptors coupled to heterotrimeric transducer G proteins. The M1, M3, and M5 acetylcholine receptor subtypes are efficiently coupled to the pertussis toxin-insensitive Gα q/11 and G13 subtype G proteins, leading to activation of PLC and PLD, whereas the M2 and M4 receptors are coupled to the pertussis toxin-sensitive Gi and G0 subtype G proteins (27-29). The mechanism of PLD activation by the muscarinic receptor has mainly been studied for the M3 receptor subtype. PLD activation by the M3 receptor is mediated by members of the ARF, protein-tyrosine kinase, protein kinase C, and Rho GTPase families (29-34). Muscarinic receptor-stimulated PLD activity has been reported in several cell types, including submandibular and lacrimal gland acini, neuroblastoma cells, and tracheal smooth muscle cells (35). In most cell types, carbachol-stimulated PLD activation is a very rapid and transient process, i.e., diminished within 2 min (36, 37). However, the inhibition mechanisms of muscarinic receptor-linked PLD activity have not been elucidated. In this report, we show that a major cytoskeletal protein, tubulin, is a PLD2 binder and inhibitor. Furthermore, we show the dynamic inhibition of PLD2 activity by tubulin in muscarinic receptor signaling, suggesting a new mechanism for the G protein-coupled receptor-PLD linkage.

Co-immunoprecipitation of PLD2-binding Proteins. Cultured cells were harvested and lysed with PLD assay buffer (50 mM Hepes/NaOH, pH 7.3, 3 mM EGTA, 3 mM CaCl2, 3 mM MgCl2, 80 mM KCl) containing 0.5% Triton X-100, 1% cholic acid, 1 mM phenylmethylsulfonyl fluoride, 1 µg/ml leupeptin, and 5 µg/ml aprotinin. After a brief sonication, the lysates were centrifuged at 100,000 × g for 1 h, and the cell extracts were incubated with immobilized anti-PLD antibody overnight. The precipitates were washed four times and subjected to SDS-PAGE followed by immunoblot analysis. For silver staining, PLD and binding proteins were eluted from the immune complexes with the antigen peptide of the PLD antibody, as previously reported (10).

Protein Identification by Peptide Mass Fingerprinting. The technique used was as described previously (26). In brief, the fraction containing the 55-kDa protein (p55) after co-immunoprecipitation from COS-7 cells was separated by SDS-PAGE, and the band corresponding to p55 was excised and digested with trypsin (Roche Molecular Biochemicals) for 6 h at 37 °C. The masses of the tryptic peptides were measured using a Bruker Reflex III mass spectrometer, as described previously (17). Delayed ion extraction yielded peptide masses with better than 50 parts/million mass accuracy on average. Using the amino acid sequences and the mass numbers of the tryptic peptides of p55, the Swiss-Prot database was searched for a match.
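The database matching step above amounts to comparing each measured peptide mass against theoretical tryptic masses within a 50 parts-per-million tolerance; a minimal sketch follows, with made-up masses standing in for the spectrum and the in-silico digest.

```python
def ppm_error(measured, theoretical):
    """Relative mass error in parts per million."""
    return abs(measured - theoretical) / theoretical * 1e6

def match_peptides(measured_masses, theoretical_masses, tol_ppm=50.0):
    """Return (measured, theoretical) pairs that agree within tol_ppm."""
    hits = []
    for m in measured_masses:
        for t in theoretical_masses:
            if ppm_error(m, t) <= tol_ppm:
                hits.append((m, t))
    return hits

# Hypothetical monoisotopic masses (Da); real values would come from the
# MALDI-TOF spectrum and a tryptic digest of candidate database proteins.
measured = [1039.52, 1620.81, 1756.93]
theoretical = [1039.56, 1620.80, 2211.10]
print(match_peptides(measured, theoretical))
# 1039.52 vs. 1039.56 differs by ~38 ppm, so it still counts as a match.
```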
Purification of Recombinant PLD2 from Baculovirus-transfected Sf9 Cells. His6-tagged human PLD2 was purified from detergent extracts of baculovirus-infected Sf9 cells by chelating-Sepharose affinity column chromatography, as described previously (40).

Construction and Preparation of GST Fusion Proteins. The full-length cDNA of human PLD2 was digested into fragments containing specific domains. These individual PLD2 fragments were then ligated into the EcoRI or SmaI sites of the pGEX4T3 vector. Subcloning and polymerase chain reaction were used to produce expression vectors encoding the respective GST fusion proteins (19). Escherichia coli BL21 cells were transformed with the individual expression vectors encoding the GST fusion proteins, and after harvesting the cells, the expressed GST fusion proteins were purified by standard methods using glutathione-Sepharose 4B.

In Vitro Binding Analysis. In vitro binding was performed in 300 µl of PLD assay buffer containing 0.1% Triton X-100 and 0.1% cholic acid for 20 min at 37 °C. After a brief centrifugation, the precipitated complexes were washed five times in the same buffer before being loaded onto a polyacrylamide gel.

In Vitro PLD Activity Assay. PIP2-dependent PLD activity was assayed by measuring choline release from phosphatidylcholine, as described previously (41), with minor modifications. In brief, the reaction was carried out at 37 °C for 15 min in a 125-µl assay mixture containing PLD assay buffer, the PLD preparation, and 25 µl of phospholipid vesicles composed of dioleoylphosphatidylethanolamine, PIP2, dipalmitoylphosphatidylcholine, and dipalmitoylphosphatidyl-[methyl-3H]choline (a total of 150,000 cpm/assay) at a molar ratio of 16:1.4:1. Oleate-dependent PLD activity was assayed as described previously (40). In brief, phosphatidylcholine vesicles (25 µl) containing 5 nmol of dipalmitoylphosphatidylcholine and 200,000 dpm of dipalmitoylphosphatidyl-[methyl-3H]choline were added to a reaction mixture (175 µl) containing 50 mM Hepes/NaOH, pH 7.0, 2 mM EGTA, 1.7 mM CaCl2, 20 µM sodium oleate, and 0.1 M KCl. The final concentration of phosphatidylcholine in the reaction mixture was 25 µM. The assay mixture was then incubated at 30 °C for 1 h, and the reaction was terminated by adding 0.3 ml of 1 N HCl, 5 mM EGTA, and 1 ml of chloroform/methanol/HCl (50:50:0.3). After a brief centrifugation, the amount of [methyl-3H]choline in 0.5 ml of the aqueous phase was quantified by liquid scintillation counting.

In Vivo PLD Activity Assay. In vivo PLD activity was determined as described previously (39). In brief, vector- or human PLD2-transfected COS-7 cells were cultured for 48 h. The cells were then loaded with [3H]myristic acid (10 µCi/ml) for 4 h and washed twice with Dulbecco's modified Eagle's medium. The loaded cells were treated with carbachol in the presence of 0.4% butanol, scraped into 0.8 ml of methanol and 1 M NaCl (1:1), and mixed with 0.4 ml of chloroform. The organic phases were dried, and the lipids were separated by thin-layer chromatography on silica gel plates. The PLD activity of PLD2-overexpressing PC12 cells was determined using the same procedures. The amount of [3H]phosphatidylbutanol formed was expressed as a percentage of total 3H-lipid, to account for differences in cell labeling efficiency.

Immunoblot Analysis. Proteins were denatured by boiling for 5 min at 95 °C in Laemmli sample buffer (42) and separated by SDS-PAGE, and immunoblot analysis was performed as described previously (19).

Monomeric Tubulin Extraction. Monomeric tubulin was extracted from vector- and PLD2-transfected COS-7 cells plated on 100-mm dishes, as described previously (43). Following the indicated treatments, cell monolayers were washed twice with phosphate-buffered saline (PBS) and scraped.
The cells were then incubated with 0.2 ml of microtubule-stabilizing buffer (2 M glycerol, 0.1 M Pipes, pH 7.1, 1 mM MgSO4, 1 mM EGTA, protease inhibitor mixture (Sigma)) containing 0.5% Triton X-100 for 30 min at 37 °C. Cell lysates were centrifuged at 15,000 × g for 20 min; the supernatants and the pellets contained monomeric and polymeric tubulin, respectively.

Cell Fractionation. Cells were treated as above and washed with PBS. Cells were resuspended in lysis buffer (20 mM Tris, pH 7.5, 5 mM dithiothreitol, 250 mM sucrose, 2 mM EDTA, 10 mM EGTA, 1 mM phenylmethylsulfonyl fluoride, 10 µg/ml leupeptin) and immediately sonicated. Each sample was centrifuged at 600 × g at 4 °C. The supernatants were then centrifuged at 20,000 × g at 4 °C, and the pellets were collected and used as the membrane fraction.

Immunocytochemistry. Immunocytochemical analysis was performed as described previously (44). In brief, PLD2-transfected PC12 cells were grown on coverslips, rinsed with cold PBS four times, and fixed with 4% (w/v) paraformaldehyde overnight at 4 °C. After rinsing with PBS and blocking with PBS containing 1% goat serum and 0.1% Triton X-100 for 30 min at room temperature, the cells were incubated with 2 µg/ml mouse β-tubulin monoclonal antibody and rabbit PLD polyclonal antibody for 2 h at room temperature. After washing six times with PBS, fluorescein isothiocyanate-conjugated goat anti-mouse antibodies and rhodamine-conjugated goat anti-rabbit antibodies were incubated with the cells for 1 h to allow visualization of tubulin and PLD2. After a further six washes with PBS, the slides were examined under a confocal microscope (Zeiss, Germany).

The 55-kDa Protein Precipitated with PLD2 from COS-7 Cells Was Identified as α,β-Tubulin. To find the binding partners of PLD2, we began by looking for cellular PLD2-binding proteins in COS-7 cells transiently overexpressing PLD2. After precipitation with anti-PLD antibody, PLD2 was eluted with the antigen peptide of the PLD antibody, resolved by SDS-PAGE, and visualized by silver staining. As a result, we found that the co-precipitate contained a PLD2-binding protein with a relative molecular mass of 55,000 (p55), along with some other binding proteins (Fig. 1A). To identify this PLD2-interacting protein, p55 was excised from the polyacrylamide gel and digested with trypsin, and the cleaved peptide mixture was then subjected to peptide mass fingerprinting by MALDI-TOF mass spectrometry. Fig. 1B shows the mass spectrum of the digested peptides of p55. The masses obtained were compared with proteins in the Swiss-Prot database using the MS-Fit peptide mass search program. The peptides were found to have molecular masses almost identical to the calculated masses of the corresponding theoretically predicted tryptic peptides of α- and β-tubulin. This peptide search was performed at an accuracy of 50 parts/million, and the analyzed peptides covered 24% of the α-tubulin and 50% of the β-tubulin sequence. To further substantiate the identity of these proteins, the presence of tubulin in the PLD2 precipitate was confirmed using a monoclonal antibody specific to β-tubulin. As shown in Fig. 1C, p55 was blotted by the β-tubulin monoclonal antibody. Based on these results, we concluded that the p55 interacting with PLD2 from COS-7 cells is an α,β-tubulin heterodimer (monomeric tubulin).

Fig. 1. The 55-kDa protein precipitated with PLD2 from COS-7 cells was identified as α,β-tubulin. A, silver staining of proteins co-immunoprecipitated with PLD2 from vector- or PLD2-transfected COS-7 cells and eluted with the antigen peptide of the PLD antibody; the 55-kDa PLD2-binding protein is indicated by an arrow. B, MALDI-TOF MS analysis of the peptide mixtures obtained after in-gel trypsin digestion of the excised band; peptide masses matching the calculated tryptic peptide masses of α-tubulin and β-tubulin within 50 parts/million are marked. C, immunoblot analysis of equal aliquots of the co-immunoprecipitates with antibodies against PLD2 or β-tubulin.
Tubulin Directly Interacts with PLD2. To determine whether tubulin associates directly with PLD2, 99% pure monomeric tubulin was incubated with PLD2. As shown in Fig. 2A, tubulin co-precipitated with PLD2 in a concentration-dependent manner, demonstrating that tubulin binds directly to PLD2. To identify the PLD2 sequence involved in tubulin binding, we constructed GST fusion proteins as shown in Fig. 2B and tested their ability to bind purified tubulin. GST-PLD2 (amino acids 476-612) was identified as the region that most potently bound tubulin (Fig. 2C). Therefore, it appears that the region of the protein between amino acids 476 and 612 is important for the direct interaction with tubulin.

Tubulin Inhibits PLD2 Activity in Vitro. Because tubulin binds to PLD2, the effect of tubulin on PLD2 activity was examined. As shown in Fig. 3, tubulin inhibits PLD2 activity in vitro in a concentration-dependent manner. The concentration required for half-maximal inhibition was ~2 nM in the PIP2-dependent PLD activity assay. To exclude the possibility that the inhibition of PLD2 activity by tubulin is caused by PIP2 sequestration, we also performed a PLD2 activity assay in the absence of PIP2. We previously reported that PLD2 activity is stimulated by oleate in vitro (40). PLD2 activation by oleate was progressively inhibited by increasing tubulin concentrations, with an IC50 of ~10 nM. These results suggest that PLD2 activity is inhibited by direct interaction with tubulin.
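Half-maximal inhibition values like the IC50 figures quoted above are typically obtained by fitting a logistic inhibition curve; a sketch with simulated points follows (the Hill-equation form and the data are our assumptions, not the paper's fitting procedure).

```python
import numpy as np
from scipy.optimize import curve_fit

def inhibition(conc, ic50, hill):
    """Fractional PLD2 activity remaining at a given tubulin concentration."""
    return 1.0 / (1.0 + (conc / ic50) ** hill)

# Simulated activity data (fraction of control) vs. tubulin concentration (nM).
conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0, 100.0])
activity = np.array([0.98, 0.92, 0.70, 0.35, 0.15, 0.05, 0.02])

(ic50, hill), _ = curve_fit(inhibition, conc, activity, p0=[2.0, 1.0])
print(f"IC50 = {ic50:.2f} nM, Hill slope = {hill:.2f}")
```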
A, vector- or PLD2-transfected COS-7 cells growing in media containing 10% bovine calf serum were lysed (5 mg) and immunoprecipitated with anti-PLD antibody. To find the specific binding proteins, PLD2 and its binding proteins were eluted from PLD antibody-immobilized beads with the antigen peptide of the PLD antibody, as described under "Experimental Procedures." The eluted samples were resolved by SDS-PAGE and visualized by silver staining. The PLD2-binding protein with a molecular mass of 55 kDa is indicated by an arrow. B, peptide mixtures obtained after in-gel digestion of the excised band with trypsin were analyzed by MALDI-TOF MS. Peptide masses labeled with a black arrowhead matched the calculated tryptic peptide masses of α-tubulin within 50 parts/million, and white arrowheads indicate matched masses of β-tubulin. C, equal aliquots of the co-immunoprecipitates used in A were separated by SDS-PAGE and analyzed by immunoblotting using antibodies directed against PLD2 or β-tubulin.

Under this condition, maximal PLD activity was obtained after 1 min, but PLD activity was then rapidly inhibited and reached baseline levels 2 min after carbachol treatment (Fig. 5B). Interestingly, the interaction between PLD2 and tubulin was weak at baseline but increased after 2 min of carbachol stimulation, when PLD activity was inhibited (Fig. 5C). These results suggest that carbachol increases the association of PLD2 with tubulin to inhibit PLD activity in COS-7 cells.

Tubulin Binding Inhibits Muscarinic Receptor-linked PLD Activity-Tubulin directly interacts with PLD2 via the F3 region of PLD2 (Fig. 2C). The F3 region of PLD2 can interfere with the interaction between PLD2 and tubulin in vitro, whereas the F2 region of PLD2, used as a negative control, cannot (Fig. 6A). To demonstrate that muscarinic receptor-linked PLD activity is inhibited by tubulin, we transfected the F3 region to mask the interaction between PLD2 and tubulin in COS-7 cells. We found that expression of the F3 region did not affect the basal PLD activity of COS-7 cells but prolonged carbachol-induced PLD activation (Fig. 6B). In cells transfected with vector or the F2 region of PLD2, maximal PLD activity was obtained after 1 min of carbachol stimulation and rapidly diminished within 2 min. In cells expressing the F3 region of PLD2, maximal PLD activity occurred at the same time, but the subsequent decline in PLD activity was retarded. These results suggest that carbachol-stimulated PLD activity is inhibited by the tubulin-PLD2 interaction and that the F3 region of PLD2 inhibits the interaction between PLD2 and tubulin.

Carbachol Induces the PLD2-Tubulin Interaction in Concert with PLD Activity Inhibition in PC12 Cells-To examine whether the interaction between PLD2 and tubulin is changed by stimulating the endogenous muscarinic receptor, we used the PLD2-inducible PC12 cell line (38). PC12 cells possess endogenous muscarinic receptors, and PLD activation by carbachol stimulation has been reported in PC12 cells (37). In these cells, PLD activity rapidly increased up to 0.5 min and then was reduced to basal levels within 1 min after carbachol stimulation (Fig. 7A). Interestingly, the interaction between PLD2 and tubulin was elevated 1 min after carbachol stimulation in PC12 cells, coinciding with the decrease in PLD activity, as in COS-7 cells (Fig. 7B). These results indicate that this mechanism of regulating PLD2 activity via tubulin interaction exists in cells possessing endogenous muscarinic receptors.
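The time courses in Figs. 5 and 7 (activation peaking within 0.5-1 min, followed by inhibition as the PLD2-tubulin interaction rises) are qualitatively consistent with a fast activation step followed by slower inhibitor binding. The following is an illustrative mass-action sketch only; all rate constants are invented, and the model is not derived from or fitted to the paper's data:

```python
# Toy kinetic sketch: receptor-driven PLD2 activation followed by slower
# inhibition as monomeric tubulin accumulates and binds PLD2.
# All rate constants are arbitrary illustrative values, not fits.
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y):
    active, tubulin = y              # active PLD2 fraction, free tubulin (a.u.)
    k_act = 5.0 * np.exp(-3.0 * t)   # fast, decaying receptor drive
    k_bind = 4.0                     # tubulin-PLD2 association (inhibition)
    k_depoly = 2.0                   # carbachol-induced microtubule depolymerization
    d_active = k_act * (1 - active) - k_bind * tubulin * active
    d_tubulin = k_depoly * (1 - tubulin)
    return [d_active, d_tubulin]

sol = solve_ivp(rhs, (0, 5), [0.0, 0.1], t_eval=np.linspace(0, 5, 11))
for t, a in zip(sol.t, sol.y[0]):
    print(f"t = {t:.1f} min  active PLD2 fraction = {a:.2f}")
```

With these made-up constants, the active fraction rises quickly, peaks well before 1 min, and decays toward baseline as free tubulin accumulates, mirroring the shape (though not the numbers) of the observed transients.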
Carbachol Stimulation Causes the Translocation and Colocalization of Tubulin with PLD2 at the Plasma Membrane-Next, to confirm whether colocalization of PLD2 and tubulin can be induced by activating muscarinic receptors, we analyzed the cellular localization of tubulin by confocal laser microscopy. Because endogenous PLD2 was not detected in PC12 cells, we transfected the PLD2 gene into wild-type PC12 cells and found that in these cells PLD2 was localized at the plasma membrane (Fig. 8A2). Tubulin was not seen at the plasma membrane, and most of the tubulin was localized in the cytosol (Fig. 8A1). However, after carbachol stimulation, some tubulin redistributed to areas along the plasma membrane and colocalized with PLD2 (Fig. 8B). To test whether carbachol-induced tubulin redistribution was caused by microtubule depolymerization, PC12 cells were treated with either nocodazole or Taxol, and tubulin localization was examined. In nocodazole-treated PC12 cells, tubulin colocalized with PLD2 at the plasma membrane region (Fig. 8C), but in Taxol-treated cells, tubulin was absent from the plasma membrane and did not colocalize with PLD2 (Fig. 8D). To clarify whether tubulin was translocated toward the membrane in response to carbachol stimulation, we isolated membrane fractions and quantified membrane-associated tubulin by immunoblotting. Carbachol was found to induce a rapid and time-dependent increase in tubulin recruitment to the membranes of the PC12 cells (Fig. 8E), and this recruitment increased after 1 min of carbachol stimulation. Taken together, these results suggest that carbachol stimulation induces rapid microtubule depolymerization and tubulin translocation to the plasma membrane.

FIG. 5. The PLD2-tubulin interaction was increased after 2 min of carbachol stimulation, when PLD activity was decreased in COS-7 cells. COS-7 cells were co-transfected with the M3 muscarinic receptor and vector (VEC) or the M3 muscarinic receptor and the F3 region of PLD2. A, after serum starvation for 24 h, cells were stimulated with various carbachol concentrations (1-1000 µM) for 5 min, and PLD activity was measured for 5 min, as described under "Experimental Procedures." B, for the time course of PLD activity during carbachol stimulation, cells were stimulated with carbachol (100 µM) for 0, 0.5, 1, 2, or 5 min; 1-butanol was then added, and incubation was continued for an additional 30 s, as described under "Experimental Procedures." The data shown are the mean ± S.E. of three independent assays. C, after serum starvation for 24 h, cells were stimulated with 100 µM carbachol for 0, 0.5, 1, 2, or 5 min. Cells were lysed and sonicated in extraction buffer containing 0.5% Triton X-100 and 1% cholic acid, as described under "Experimental Procedures." After isolating the precipitates, the proteins were resolved by SDS-PAGE and visualized by immunoblotting using anti-PLD or anti-tubulin antibodies. Data are representative of two independent experiments. I.P., immunoprecipitate.

DISCUSSION

Although the precise time frame of PLD activation depends on stimulus and cell type, transient PLD activation has commonly been observed. PLD activation has largely been studied in the context of the activation mechanism of PLD; however, the mechanism by which agonist-induced PLD activity is inhibited has not been elucidated. Although some inhibitors of PLD activity have been reported, the signal-dependent inhibition of PLD activity by a negative regulator after agonist stimulation has not been previously reported.
In the present study, we report that tubulin dynamically interacts with PLD2 to inhibit carbachol-induced PLD2 activation. This is the first example of the inhibition of agonist-induced PLD signaling mediated by dynamic and rapid association with a negative regulator.

Muscarinic acetylcholine receptor signaling is inhibited by the uncoupling of this receptor from its G protein and by receptor internalization to intracellular compartments (46-48). This type of down-regulation is usually mediated by phosphorylation of the activated receptor by members of the G protein-coupled receptor kinases. Phosphorylated receptors then interact with cytoplasmic proteins termed β-arrestins, which interfere with the receptor-G protein interaction and favor receptor endocytosis, thus inhibiting the signal (49, 50). Muscarinic receptor-linked PLD activity is generally inhibited rapidly, within 2 min, whereas the completion of muscarinic receptor phosphorylation and internalization requires a longer time (51, 52). Therefore, muscarinic receptor-linked PLD activity might be inhibited by another mechanism.

In this work, we report for the first time that tubulin acts as a negative regulator of muscarinic receptor-linked PLD2 signaling. Several lines of evidence support this notion. First, tubulin purified from bovine brain directly interacted with PLD2 and inhibited its activity in a concentration-dependent manner in vitro (IC50 < 10 nM) (Fig. 3). Second, the increase in the PLD2-tubulin interaction coincided with the decline in carbachol-induced PLD2 activity in both PLD2-expressing COS-7 and PC12 cells (Figs. 5 and 7). Third, changes in the interaction between PLD2 and tubulin induced by nocodazole or Taxol were inversely correlated with PLD activity (Fig. 4). Finally, expression of the amino acid 476-612 region of PLD2, the region responsible for tubulin binding, prolonged carbachol-induced PLD2 activity by inhibiting tubulin binding to PLD2 (Fig. 6). These results suggest that tubulin plays an inhibitory role in the down-regulation of carbachol-induced PLD2 activity.

Several studies have suggested that PLD activity is regulated by negative regulators. Many negative regulators of PLD activity have been identified, including fodrin (16), α-actinin (17), actin (19), gelsolin (18), amphiphysin (21), aldolase (20), α-/β-synuclein (24), synaptojanin (22), clathrin assembly protein 3 (23), and collapsin response mediator protein-2 (25). However, the roles of these inhibitors in signal-dependent PLD activity have not been demonstrated. Recently, our group reported that Munc-18-1 directly inhibits PLD activity (53). In that report, epidermal growth factor treatment was found to trigger the dissociation of Munc-18-1 from PLD, suggesting a signal-dependent inhibitory role of Munc-18-1 on basal PLD activity. Until now, however, no negative regulator terminating signal-dependent transient PLD activation had been reported. In the present report, we propose that tubulin is the first identified inhibitor to suppress signal-dependent PLD activity through dynamic interaction.

PIP2 has been established as an allosteric activator of PLD in vivo and in vitro (41, 54). Although many proteins inhibit PLD activity via direct interaction, some inhibitory proteins, such as fodrin and synaptojanin, may sequester or hydrolyze PIP2 to suppress PLD activity (16, 22). The inhibitory effect of tubulin on PLD2 is affected by the presence or absence of PIP2 (Fig. 3).
In PIP2-dependent PLD activity assays, tubulin inhibits PLD2 at lower concentrations than in oleate-dependent assays. These results would be consistent with the inhibition by tubulin being caused, at least in part, by PIP2 sequestration. However, this is unlikely to be the case, because nanomolar tubulin is insufficient to sequester the PIP2 in the assay vesicles (2.33 µM). Tubulin interacts directly with the amino acid 476-612 region of PLD2 (between CR II and CR III, including a part of CR III) (Fig. 2, B and C). It has previously been reported that the Arg545 and Arg558 motifs between CR II and CR III are important for PIP2 binding (54); in that report, the PLD2 mutants R545G and R558G could neither interact with PIP2 nor be activated by PIP2. From this, we speculate that the interaction of tubulin with PLD2 may block PIP2 binding to PLD2, which would explain why tubulin inhibits PLD2 activity more potently in the presence of PIP2.

In cells, tubulin exists in a polymerized form (microtubules) and in a monomeric α,β-heterodimer form, and PLD2 can bind both the polymerized and the monomeric form of tubulin in vitro (data not shown). However, microtubules are cytosolic structural polymers, whereas PLD2 is a membrane-bound protein. Although membrane- and phospholipid-associated tubulin has been reported (55-57), this membrane tubulin appears to be similar to the monomeric form (55, 57, 59). This notion is supported by observations of COS-7 cells treated with the microtubule stability-regulating pharmacological agents nocodazole and Taxol. Nocodazole promotes microtubule depolymerization and increases the monomeric tubulin concentration, whereas Taxol induces microtubule assembly and stabilizes the microtubule structure, reducing the monomeric tubulin concentration. In nocodazole-treated COS-7 cells, as the interaction between PLD2 and tubulin increased, basal PLD2 activity was inhibited, and in Taxol-treated cells the opposite results were obtained (Fig. 4). Indeed, in unstimulated PC12 cells, PLD2 shows minimal colocalization with tubulin at the plasma membrane (Fig. 8A), but nocodazole treatment induced tubulin translocation to the plasma membrane and colocalization of tubulin with membrane PLD2, whereas after Taxol treatment no tubulin translocation to the membrane or colocalization with membrane PLD2 was observed (Fig. 8, C and D). These data suggest that in cells, PLD2 interacts with monomeric tubulin and that its activity is regulated by changes in the cellular monomeric tubulin concentration. It has been reported that the acetylcholine muscarinic receptor can regulate microtubule dynamics (45, 49).

FIG. 8 (legend, in part). ... were treated for 20 min. To localize tubulin (A1-D1, green) and PLD2 (A2-D2, red), immunocytochemical staining was performed, as described under "Experimental Procedures." Fluorescein isothiocyanate-conjugated anti-mouse antibody and rhodamine-conjugated anti-rabbit antibody were detected under a confocal microscope. Colocalization is indicated by yellow color and arrowheads. Scale bars, 10 µm. E, wild-type PC12 cells were treated with carbachol (100 µM) for 0, 1, 2, or 5 min. Cells were lysed, and crude membrane fractions were prepared, as described under "Experimental Procedures." Total cell lysates (Total) and membrane fractions (Memb.) were subjected to SDS-PAGE and immunoblotted (I.B.). The data shown are representative of three independent experiments.
These studies showed that microtubule depolymerization and rapid tubulin translocation to the plasma membrane occur within 1 min of carbachol treatment in SK-N-SH cells. In PC12 cells, we also observed increased translocation of tubulin to the membrane after carbachol treatment (Fig. 8, B and E). It has been suggested that Gαs and Gαi, G proteins under the control of muscarinic receptors, bind tubulin and stimulate its GTPase activity to destroy the GTP cap on microtubules (60). Moreover, neurotransmitter-mediated activation of PLC would increase local Ca2+ concentrations, which in turn would cause microtubule depolymerization and increase the local monomeric tubulin concentration (58, 61, 62). These reports explain why the interaction between PLD2 and tubulin increases after carbachol stimulation.

In conclusion, the present study identifies a novel signaling link between PLD2 and the microtubule structure; it is also the first example of an inhibition mechanism for agonist-induced PLD activation. Although the precise cellular significance of this action remains to be elucidated, these findings may provide new insight into the regulation of PLD activity in a variety of cellular processes related to microtubule structure.
Mesenchymal stem cells promote ovarian reconstruction in mice

Background Studies have shown that chemotherapy and radiotherapy can cause premature ovarian failure and loss of fertility in female cancer patients. Ovarian cortex cryopreservation is a good option for preserving female fertility before cancer treatment. Following remission of the disease, the thawed ovarian tissue can be transplanted back to restore the patient's fertility. However, this carries a risk of reintroducing cancer cells into the body and causing recurrence of the cancer. Given the low success rate of current in vitro culture techniques for obtaining mature oocytes from primordial follicles, an artificial ovary containing primordial follicles may be a good way to solve this problem.

Methods In this study, we established an artificial ovary model based on the participation of mesenchymal stem cells (MSCs) to evaluate the effect of MSCs on follicular development and oocyte maturation. P2.5 mouse ovaries were digested into single-cell suspensions and mixed with bone marrow-derived mesenchymal stem cells (BM-MSCs) at a 1:1 ratio. The reconstituted ovarian model was then generated using phytohemagglutinin. The phenotype and underlying mechanism were explored by follicle counting, immunohistochemistry, immunofluorescence, in vitro maturation (IVM), in vitro fertilization (IVF), real-time quantitative polymerase chain reaction (RT-PCR), and terminal deoxynucleotidyl transferase-mediated nick end labeling (TUNEL) assays.

Results Our study found that the addition of BM-MSCs to the reconstituted ovary can enhance the survival of oocytes and promote the growth and development of follicles. After transplanting the reconstituted ovaries under the kidney capsules of recipient mice, we observed normal folliculogenesis and oocyte maturation. Interestingly, we found that BM-MSCs did not contribute to the formation of follicles in the ovarian aggregates, nor did they proliferate during follicle growth. Instead, the cells were found to be located around growing follicles in the reconstituted ovary. When theca cells were labeled with CYP17a1, we found some staining that overlapped with green fluorescent protein (GFP)-labeled BM-MSCs. These results suggest that BM-MSCs may participate in directing the differentiation of the theca layer in the reconstituted ovary.

Conclusions The presence of BM-MSCs in the artificial ovary was found to promote the survival of ovarian cells and to facilitate follicle formation and development. Since the cells did not proliferate in the reconstituted ovary, this discovery suggests a potentially new and safe method for applying MSCs in clinical fertility preservation by enhancing the success rate of cryo-thawed ovarian tissues after transplantation.

Supplementary Information The online version contains supplementary material available at 10.1186/s13287-024-03718-z.
Background
With rapid advancements in cancer therapy, children and reproductive-age women are benefitting from overall improved survival rates [1]. It is well documented that treatment of girls and women for cancer with radiation, chemotherapeutic drugs, or a combination of the two can result in significant, and often irreversible, damage to the reproductive system [2, 3]. Anti-cancer therapy is often a cause of premature ovarian insufficiency (POI) owing to the high sensitivity of the ovarian follicle reserve to chemotherapy and radiotherapy [4, 5]. Thus, an increasing number of patients who have received gonadotoxic treatment later face fertility issues [6]. Overall, compared with the general population, women who undergo cancer treatment are 38% less likely to become pregnant, and this reduction in the likelihood of subsequent pregnancy has been observed in nearly all types of cancer [7]. Consequently, the preservation of the ovarian reserve and the prevention of infertility have become primary quality-of-life concerns for patients and their physicians.

Fertility preservation refers to the use of surgical, pharmacologic, or laboratory techniques to help women or men at risk of infertility to protect and preserve their ability to have genetically derived offspring [8]. For women, commonly used fertility preservation methods include egg, embryo, and ovarian tissue freezing, whereas for unmarried or prepubertal women, freezing of the ovarian cortex is more appropriate [9-11]. After the patient's condition has improved or resolved, the preserved ovarian cortex can be thawed and transplanted back into the patient. However, to minimize the risk of reintroducing cancer cells, an alternative approach involves culturing and developing the primordial follicles within the frozen cortex in vitro. This process can also incorporate biomaterials to construct an artificial ovary that can be transplanted into the patient's body with the capability to produce mature eggs [6, 12]. Scientists have been trying to fully realize the in vitro culture and maturation of human follicles [13]. Telfer et al. applied a two-step culture method, ovarian tissue culture followed by follicle culture, and successfully obtained meiotically competent eggs within a short period of time. However, further confirmation is still needed to determine whether these eggs have the ability to fertilize and support embryonic development [14].
The ovary, as a female reproductive organ, is non-renewable. To delay ovarian aging or to provide fertility preservation for people with POI, the artificial ovary has long been a challenging and active topic of research in the fields of reproduction and regenerative medicine [15, 16]. 3D-printed hydrogel scaffolds are being widely explored for the development of artificial ovaries that can support the growth and development of primordial, primary, and secondary follicles [17]. However, this approach does not address fertility losses caused by issues such as gamete deficiency and impaired follicle formation and development. To completely solve these problems, it is necessary to reconstruct the ovarian tissue at the single-cell level. The development and maturation of the oocyte are highly dependent on the follicular structure, which is therefore crucial in the process of artificial ovary construction [18]. Folliculogenesis is a long and complex process that involves a series of changes in the oocyte and its surrounding somatic cells [19]. Under physiological conditions, the primordial follicle forms and enters a resting phase, providing an egg reserve for the entire reproductive lifespan. Activation of this resting follicle and its entry into the growth and development phase depend on local signals originating from the ovary [20, 21]. It has been shown that only ovarian somatic cells at a specific developmental stage have the ability to form follicles. However, when these cells are reconstituted with germ cells to create a recombinant ovary, a significant proportion of the oocytes undergo apoptosis. Moreover, owing to the disruption of the internal regulatory mechanisms, the resulting follicles undergo accelerated activation and development, which ultimately results in the loss of ovarian function very soon after reorganization [22]. To achieve complete three-dimensional reconstruction of ovarian tissue, forming follicular structures with high efficiency while ensuring normal follicle development is the primary problem to be solved.
Mesenchymal stem cells (MSCs) are a class of multipotent adult stem cells with the ability to differentiate into adipose, bone, cartilage, and other cell types [23]. They have been the most widely studied cell type in the field of regenerative medicine because of their easy accessibility, self-replication, directed differentiation, and low immunogenicity [24, 25]. Results from animal models have been successfully translated into clinical practice; one example is the transplantation of a patient's own bone marrow mesenchymal stem cells to repair damaged endometrium, which can restore normal uterine function for conception [26]. Studies investigating the use of MSCs to enhance ovarian function in mice with premature ovarian failure have shown significant improvement in the internal environment of the ovaries [27-31]. While these studies have highlighted the potential of MSCs for restoring and protecting ovarian function, most researchers believe that this is achieved through the regulation of factors secreted by MSCs. The differentiation of MSCs into intra-ovarian cells has not been extensively explored, and further investigation is required to better understand the mechanisms involved in the reconstruction of ovarian function. In this study, we established an artificial ovary model by mixing newborn ovarian single cells with MSCs at a 1:1 ratio. This artificial ovary model improved the survival rate of oocytes forming primordial follicles and supported subsequent follicular development and oocyte maturation. Mesenchymal stem cells may be a potential cell resource for remodeling follicular structure for fertility preservation in the future.

Experimental animals
Mice were obtained from Vital River Laboratories (Beijing, China) and housed in the animal facility at Nanjing Medical University. Mice were maintained under a 12/12-h dark-light cycle at 22 °C with free access to food and water. All animal protocols were approved by the Committee on the Ethics of Animal Experiments at Nanjing Medical University. Ovaries of P2.5 ICR females were used for reconstituted ovaries and transplanted into the kidney capsules of female mice of the same strain at 8-10 weeks of age. Control oocytes for IVF were collected from P23 ICR female mice by superovulation, and adult male B6D2F1 mice (10-14 weeks old) were chosen as sperm donors. Female ICR mice at 2 to 3 weeks of age were used for the isolation of MSCs. All efforts were made to minimize the number and suffering of the animals used in the study. A total of 250 mice were used. The number of mice that contributed data for analysis in each experiment is indicated in the figure legends. All animal experiments adhered to the ARRIVE guidelines.
Isolation and culture of MSCs from mouse compact bone
MSCs were isolated as previously described [32]. Briefly, 2- to 3-week-old female mice were sacrificed by cervical dislocation. The animal was rinsed liberally in a beaker with 100 ml of 70% (vol/vol) ethanol for 3 min. The mouse was placed in a 100-mm sterile glass dish; the skin was incised, the muscles were dissociated, the femurs were severed below the femoral head, and the hindlimbs were disconnected from the trunk. The humeri were dissected by excising the forelimbs at the axillae. The skin was pulled down, and muscles and tendons were cleaned from the humeri, tibiae, and femurs. The bones were placed on sterile gauze and carefully rubbed to remove attached soft tissue. Before further processing, the bones were stored in a 35-mm sterile glass dish with 5 ml of minimum essential medium α (α-MEM) supplemented with 0.1% (vol/vol) penicillin/streptomycin and 2% (vol/vol) FBS. A 0.45-mm syringe needle was inserted into the bone cavity, and the marrow was flushed out with 3 ml of α-MEM. The bone cavities were washed thoroughly at least three times using a syringe until the bones became pale. For GFP labeling of MSCs, the virus (GPLVX-CMV-ZsGreen1) was stored and used in strict accordance with the manufacturer's instructions. Viral transfection for GFP labeling was performed on ICR mouse MSCs cultured to passage 4. The virus was added when the cells reached 50% confluence; after 48 h of infection, the medium was changed to normal cell culture medium. After a further 48 h of culture, the expression of green fluorescent protein in the cells was observed by fluorescence microscopy.

Flow cytometric analysis and multilineage differentiation
Cells cultured to passage 5 were harvested for flow cytometric characterization of MSC-specific markers. Trypsin-digested cells were washed twice with phosphate-buffered saline (PBS) (4-8 °C), suspended in cold PBS at 1 × 10⁶ cells per 100 µl per EP tube, and incubated with PE-conjugated anti-mouse CD29, CD31, CD34, CD44, and Sca-1 antibodies at 4 °C for 30 min, protected from light. To assess cell viability, PI was added and the cells were incubated for 15 min at 4 °C, protected from light. After the antibody incubation, the cells were washed twice with cold PBS and resuspended in fresh PBS, completing sample preparation. The prepared samples were analyzed by flow cytometry to detect the expression of each marker. Differentiation analysis was performed with passage 5 mouse MSCs in osteogenic (MUXUC-90021, Cyagen, USA), adipogenic (MUXUC-90031, Cyagen, USA), or chondrogenic differentiation medium (MUXUC-9004, Cyagen, USA), and differentiated cells were identified by Alizarin Red, Oil Red O, and Alcian Blue staining.
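The marker analysis above yields percentage-positive populations; a common way to compute these is to gate stained events against a control distribution. The following is a minimal sketch with simulated intensities, in which thresholding at the control's 99th percentile is an assumed gating strategy rather than the authors' stated method:

```python
# Flow cytometry sketch: estimate the percentage of marker-positive cells
# by thresholding stained-sample intensities at the 99th percentile of a
# control sample. Intensities are simulated; real data would come from FCS files.
import numpy as np

rng = np.random.default_rng(0)
control = rng.lognormal(mean=2.0, sigma=0.5, size=10_000)    # unstained control
stained = np.concatenate([
    rng.lognormal(mean=2.0, sigma=0.5, size=1_500),          # negative fraction
    rng.lognormal(mean=4.0, sigma=0.5, size=8_500),          # marker-bright fraction
])

gate = np.percentile(control, 99)                # ~1% false-positive gate
percent_positive = 100.0 * np.mean(stained > gate)
print(f"gate = {gate:.1f} a.u., positive = {percent_positive:.1f}%")
```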
Aggregation of ovaries with mouse bone marrow mesenchymal stem cells
P2.5 newborn ovaries were harvested by carefully removing the oviducts and ovarian bursa in L-15 medium containing 3 mg/mL bovine serum albumin (BSA)/PBS (0.01 M, pH 7.4; Hyclone, USA). The ovaries were then digested in 1 mL PBS supplemented with 0.25% trypsin and 1 mM ethylenediaminetetraacetic acid (EDTA) and incubated at 37 °C for 6 min with gentle agitation every 2 min. The suspension was then centrifuged briefly at 3000 × g, stopping once maximum speed was reached. The trypsin was aspirated from the EP tube, 1 mL of collagenase type IV with 0.01% DNase I was added, and the suspension was incubated at 37 °C for 9 min with gentle agitation every 3 min. To stop the digestion, 10% fetal bovine serum (FBS) was added, and the cell suspension was centrifuged briefly at 3000 × g at 4 °C, again stopping at maximum speed. The digestion solution was removed, and the cells were resuspended in α-MEM with 50 µg/ml phytohemagglutinin (L1668, Sigma-Aldrich), incubated at 37 °C for 10 min, and then centrifuged at 3000 × g for 10 s to aggregate the cells. The tubes were then rotated 180° and centrifuged again for approximately 30 s. This double-centrifugation protocol was modified from a previous study [22].

In vitro culture and grafting of reconstituted ovaries
Reconstituted ovaries (rOvaries) were gently removed from the tubes and cultured on inserts (PICM03050, Millipore, USA) with 1.5 ml of culture medium added to the bottom of each well. The culture medium was α-MEM supplemented with 0.23 mM pyruvic acid, 50 mg/l streptomycin sulfate, 75 mg/l penicillin G, 0.03 U/ml FSH, and 3 mg/ml BSA. Ovarian cells alone were reconstituted as controls, ovarian cells with added MSCs were reconstituted as treatments, and the two groups were cultured in parallel. Reconstructed ovaries were cultured overnight in a 37 °C incubator at 100% humidity and 5% CO2 before transplantation. Samples were collected after 24, 48, and 96 h of culture and fixed in neutral formalin and paraformaldehyde (PFA) to assess apoptosis and other parameters. For long-term culture, the culture medium was changed every 1.5 days. For tissue transplantation, the aggregated artificial ovaries were surgically implanted beneath the renal capsules of bilaterally ovariectomized host females, with, in the same recipient mouse, the left side serving as the treatment group and the right side as the control group. Animals were allocated randomly: some were sacrificed 14 days after transplantation to assess follicular development, some were sacrificed at 18 days to collect GV oocytes for IVM, and some were injected with 5 IU human chorionic gonadotrophin (hCG; Sansheng Bio Tech, China) at 24 days, with mature MII oocytes collected 12 h later for IVF.
IVM and IVF
In IVM studies, germinal vesicle (GV) oocytes were obtained from grafted MSC-rOvaries and cultured in M2 medium (Sigma, USA) at 37 °C and 5% CO2 under mineral oil (Sigma, USA). After 16 h of culture, the ratio of mature MII oocytes to GV oocytes was assessed, and MII oocytes were collected for morphological and immunofluorescence examination. For in vivo maturation studies, recipient mice at 24 days after transplantation were given a single injection of 5 IU hCG to induce ovulation, and the transplanted MSC-rOvaries were collected 14 h later into M2 medium containing 0.1% hyaluronidase (H3506, Sigma, USA). MII oocytes were harvested directly by mechanical puncture with a fine needle. Control GV and MII oocytes were collected from P25 ICR mice after injection of pregnant mare serum gonadotropin (PMSG) and hCG for superovulation. For the IVF study, donor sperm were collected from B6D2F1 male mice and capacitated by incubation in human tubal fluid (HTF) medium (MR-070-D, Millipore, USA) under oil for 1 h at 37 °C and 5% CO2. The MII oocytes were then incubated in 250 µl of medium containing spermatozoa (2-3 × 10⁵/ml) for 6-8 h. Upon fertilization, zygotes with clear pronuclei were transferred into fresh HTF medium overnight until the 2-cell embryonic stage. The 2-cell embryos were then cultured in small drops of KSOM medium (MR-020P-5F, Millipore, USA) until the blastocyst stage. Embryo development was assessed as the ratio of 2-cell embryos to zygotes and the ratio of blastocysts to zygotes.

Real-time PCR
Total RNA from the rOvaries was isolated using Trizol reagent (Invitrogen, USA) according to the manufacturer's instructions. RNA concentrations were measured with a spectrophotometer (NanoDrop 2000c, Thermo Scientific, USA). 500 ng of RNA per reaction from each sample was reverse transcribed using the FastQuant RT Kit (Tianyuan Biotechnology, China) to produce cDNA. The cDNA was then analyzed by real-time PCR on an ABI StepOnePlus platform (Thermo Scientific, USA) using a SYBR-Green mix (Applied Biological Materials, Canada), with the actin amplification signal as an internal control. The specificity of the PCR products was assessed by melting curve analysis, and amplicon size was determined by 2% agarose gel electrophoresis.

Immunohistochemistry
Reconstituted ovaries were collected and fixed in 4% buffered formalin for paraffin embedding and sectioning. To detect the expression of AMH and PCNA, 5 µm sections were deparaffinized and rehydrated, and endogenous peroxidase activity was blocked by incubation in 3% hydrogen peroxide in methanol for 15 min. The sections were then boiled in 0.01 M citrate buffer to retrieve the antigen. After blocking with goat serum (ZSGB-Bio, China) for 1 h, primary antibodies were incubated overnight at 4 °C, and 3,3'-diaminobenzidine (DAB) reagent was used for color development the next day. Non-immune IgGs were applied as negative controls.

Follicle counting
The reconstituted ovaries from operated mice were collected, fixed in 10% buffered formalin overnight, serially sectioned (5 µm), and stained with hematoxylin and eosin. To evaluate follicular development in operated mice, all follicles were counted in every fifth section using the fractionator and nucleator principles [33]. All sections were counted by two independent individuals for comparison.
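The real-time PCR section states that actin served as the internal control; relative expression from such data is commonly reported as a 2^-ΔΔCt fold change, although the exact quantification model is not named in the text. A minimal sketch with made-up Ct values:

```python
# Relative qPCR expression via the 2^-ddCt method, normalizing each target
# gene to actin and then to the control group. Ct values are illustrative.

def fold_change(ct_target_tx, ct_actin_tx, ct_target_ctrl, ct_actin_ctrl):
    d_ct_tx = ct_target_tx - ct_actin_tx        # normalize to reference gene
    d_ct_ctrl = ct_target_ctrl - ct_actin_ctrl
    dd_ct = d_ct_tx - d_ct_ctrl                 # normalize to control group
    return 2.0 ** (-dd_ct)

# Hypothetical mean Ct values for Gdf9 in MSC-rOvaries vs. control rOvaries
print(f"Gdf9 fold change: {fold_change(24.1, 18.0, 25.6, 18.2):.2f}x")
```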
TUNEL assays
TUNEL assays were performed on 5 µm sections with the TUNEL Apoptosis Detection Kit (Alexa Fluor 640) (Yeasen, catalog no. 40308) according to the manufacturer's instructions.

Statistical analysis
GraphPad Prism 5.0 and SPSS 20.0 were used to perform the chi-square test, one-way ANOVA, or Mann-Whitney U-test to evaluate differences between groups. Data are shown as mean ± SEM. P < 0.05 was considered statistically significant.

Reconstruction of ovarian function with mouse BM-MSCs and ovarian cells
To assess the effects of MSCs on the reconstruction of ovarian tissue, mouse BM-MSCs were isolated from mouse compact bone and passaged (Figure S1A). They were positive for the mesenchymal stem cell markers CD29 and CD44 and the progenitor cell marker Sca-1 but negative for the hematopoietic marker CD45 and the endothelial cell marker CD31 (Figure S1B). Moreover, these cells could differentiate into adipocytes, chondrocytes, and osteoblasts after in vitro induction (Figure S1D-F). In mouse ovaries, follicle assembly starts at E17.5 (embryonic day 17.5), and primordial follicle formation is nearly complete by P2.5 (postnatal day 2.5) [34, 35]. We therefore collected P2.5 ovaries and dissociated them into single cells. Ovarian aggregates were generated from these ovarian cells with phytohemagglutinin. After transplantation beneath the renal capsules of bilaterally ovariectomized host female mice, follicular structures formed and continued to develop in these aggregates. On this basis, we introduced different concentrations of BM-MSCs into the aggregation system (Fig. 1A). We divided the MSC-reconstituted ovaries into low-, medium-, and high-concentration groups according to the amount of MSCs added, with MSC-to-ovarian-cell ratios of 1:2, 1:1, and 2:1, respectively. After these reconstituted ovaries were transplanted under the recipient mouse kidney capsule for 14 days, follicular development differed significantly among the MSC concentrations. The results of H&E staining (Fig. 1B) and oocyte counting (Fig. 1C) showed that the addition of mouse MSCs at the medium concentration promoted follicle formation and follicular development. However, in reconstituted ovaries with the high concentration of mouse MSCs, the exogenous cells hindered the formation and development of follicles in the aggregates. The reason may be that excess exogenous cells dispersed the ovarian cells and thereby disrupted the cell-cell communications and interactions essential for ovarian follicle assembly.
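As a concrete illustration of the group comparisons named in the statistics section, the following sketch runs a Mann-Whitney U-test and a one-way ANOVA with SciPy rather than GraphPad/SPSS; the follicle counts below are fabricated for demonstration and are not the study's data:

```python
# Group-comparison sketch matching the tests named in the methods:
# Mann-Whitney U for two groups, one-way ANOVA for three or more.
# The oocyte counts below are fabricated for illustration only.
from scipy import stats

ctrl = [12, 15, 9, 14, 11]        # oocyte counts, control rOvaries
msc_mid = [22, 27, 25, 21, 24]    # 1:1 MSC:ovarian-cell group
msc_high = [8, 6, 10, 7, 9]       # 2:1 group

u, p_u = stats.mannwhitneyu(ctrl, msc_mid, alternative="two-sided")
f, p_f = stats.f_oneway(ctrl, msc_mid, msc_high)
print(f"Mann-Whitney U = {u:.1f}, p = {p_u:.4f}")
print(f"one-way ANOVA F = {f:.2f}, p = {p_f:.4f}")
```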
Safety evaluation after BM-MSCs were incorporated into the reconstituted ovary
The medical application of stem cells requires the exclusion of safety risks. To evaluate the safety of BM-MSC participation in ovarian reconstruction, we transplanted reconstructed ovaries for long-term tracking and collected samples at 4, 6, and 8 weeks after transplantation to check for tumors in the reconstituted ovaries. After histological analysis of ovarian samples from all transplant recipient mice, we found no tumors or other adverse conditions. By 8 weeks after transplantation, the transplanted tissue had been absorbed, forming a small white spot under the renal capsule. Moreover, H&E staining of ovarian tissue at 4 and 6 weeks after transplantation showed numerous corpora lutea in the ovarian sections, suggesting that normal ovulation occurred during follicular development (Fig. 1D). These results support the safety of applying MSCs in ovarian reconstruction.

Mouse BM-MSCs benefit ovarian development in reconstituted ovaries
To further evaluate the effects of MSCs on follicular growth and development in reconstituted ovaries, we reconstructed ovaries using the optimal 1:1 ratio of ovarian cells to BM-MSCs. After 24 h of in vitro culture, reconstructed ovaries with or without BM-MSCs were transplanted in pairs under the renal capsules of the same ovariectomized adult recipient mouse, and samples were collected 14 days after transplantation. The collected ovaries are shown in Fig. 2A; the volume of the MSC-ovaries was greater than that of the control ovaries. Histological analysis further revealed accelerated follicular development, with more large antral follicles observed in the MSC-ovaries (Fig. 2B). Follicle counting on serial sections showed that the proportion of secondary and antral follicles among total follicles was higher in MSC-ovaries than in the control group (Fig. 2C). RT-PCR demonstrated increased expression of the oocyte development genes Bmp15, Gdf9, and Kit and the follicle growth-related genes Amhr2, Fshr, Star, Lhr, Cyp17a1, and Cyp19a1 (Fig. 2D). Proliferating cell nuclear antigen (PCNA) staining showed stronger signals in the BM-MSC-reconstituted ovaries (Fig. 2E). Immunohistochemistry for AMH also revealed more secondary and antral follicles (Fig. 2F). These results show that mesenchymal stem cells can promote the survival and development of follicles.

The aggregated ovaries with mouse BM-MSCs achieved normal oocyte maturation
To evaluate the developmental potential of follicles and oocytes after BM-MSCs were incorporated into the aggregated ovary, germinal vesicle oocytes were collected from reconstituted ovaries for IVM 18 days after transplantation. Because it was difficult to bring aggregated ovaries without BM-MSCs to the same stage of development, GV oocytes from normal female ICR mice were used as negative controls. The morphology of MII oocytes was shown by β-tubulin staining (Fig. 3A), and no difference in the oocyte maturation rate was observed between the negative control and reconstituted ovary groups (Fig. 3B). We also collected mature oocytes from BM-MSC-reconstituted ovaries 24 days after transplantation following a single injection of hCG. Mature oocytes from normal 24-day-old ICR mice after superovulation served as normal controls. As shown in Fig.
3C, MII oocytes obtained from reconstituted ovaries had normal morphology, and immunofluorescence of β-tubulin revealed normal spindle distributions in the mature oocytes. After in vitro fertilization using donor sperm, mature oocytes from reconstituted ovaries developed from 2-cell embryos to blastocysts after 96 h of culture, but the 2-cell rate (63% vs. 78%) and the blastocyst rate (49% vs. 60%) obtained from oocytes in the reconstituted group were lower than those from normal ICR mice (Fig. 3D and E). These results show that the BM-MSC-reconstituted ovary can complete the entire process of follicular development and produce normal, functional oocytes.

Mouse mesenchymal stem cells reduced apoptosis in reconstituted ovaries
Next, we used the TUNEL assay to assess apoptosis in reconstituted ovaries with or without BM-MSCs; the aggregated ovaries were collected after 24, 48, and 96 h of culture. As shown in Fig. 4A and B, reconstitution of the ovary induced substantial apoptosis in both somatic cells and oocytes (TUNEL+, VASA+). The highest cellular apoptosis (~12%) was observed after 24 h of aggregation, and the apoptosis rate then decreased gradually, with fewer than 5% apoptotic cells in aggregated ovaries after 96 h of culture (Fig. 4C). However, almost no apoptotic signals were found in BM-MSC-reconstituted ovaries, even at the 24-hour time point, when substantial apoptosis was observed in the control group. Thus, adding BM-MSCs promoted the survival of ovarian cells and prevented a large loss of oocytes shortly after reconstitution.

Dynamic tracking of mouse MSCs during the growth of reconstituted ovaries
Mesenchymal stem cells have a strong capacity for self-replication during in vitro culture, but such uncontrolled proliferation is risky for in vivo applications [25]. We reconstituted ovaries with green fluorescent protein (GFP)-labeled BM-MSCs (Figure S1B; 83% GFP-positive cells) and followed them to observe the proliferation and localization of these cells in the reconstituted ovaries. There was a clear downward trend in the density of green fluorescent cells from 24 h to 96 h after reconstitution (Fig. 4D). Statistical analysis of green fluorescence intensity further verified this result (Fig. 4E). We then labeled proliferating cells by PCNA staining and found that the PCNA signals did not coincide with the green fluorescence-labeled MSCs. This means that the MSCs did not proliferate in the aggregated ovary, whereas the numbers of oocytes and ovarian somatic cells increased gradually within the reconstituted ovary (Fig. 4F). BM-MSCs thus function to maintain oocyte survival and accelerate the proliferation of ovarian somatic cells during follicle assembly. Western blotting further revealed increased phosphorylation of Akt, mTOR, and RPS6 in BM-MSC-reconstituted ovaries after 96 h of culture, suggesting that BM-MSCs may promote follicle formation and follicle activation through activation of the PI3K/mTOR signaling pathway (Fig. 4G).
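The decline in GFP signal (Fig. 4, D and E) was assessed by scanning-intensity statistics; one generic way to quantify such a signal from a micrograph is to average the green channel over above-background pixels. The sketch below uses a synthetic image and a fixed threshold, neither of which reflects the authors' actual analysis pipeline:

```python
# Sketch: quantify GFP signal in a micrograph as the mean green-channel
# intensity of above-background pixels. The image here is synthetic;
# a real analysis would load micrographs, e.g. with imageio.
import numpy as np

rng = np.random.default_rng(1)
img = rng.integers(0, 30, size=(256, 256, 3), dtype=np.uint8)  # dim background
img[100:140, 80:160, 1] = 200                                  # bright GFP patch

green = img[:, :, 1].astype(float)
mask = green > 50                        # simple fixed background threshold
mean_intensity = green[mask].mean() if mask.any() else 0.0
area_fraction = mask.mean()
print(f"GFP+ area fraction = {area_fraction:.3f}, mean intensity = {mean_intensity:.0f}")
```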
To further explore how MSCs behaved and contributed in reconstituted ovaries, GFP-labeled BM-MSCs were tracked during ovarian reconstitution, and oocytes were labeled by VASA staining. In the first 96 h of ovarian reconstruction, the MSCs were distributed evenly among the ovarian cells; however, they were not involved in the reassembly of follicles. After the aggregated ovaries were transplanted into recipient mice, follicular development was observed, and we found that the MSCs were mainly distributed in the stroma of the reconstituted ovaries around growing follicles. We also noticed a significant decrease in MSC numbers and fluorescence in the reconstituted ovary (Fig. 5). To see whether the MSCs participate in the differentiation of theca cells, we first stained the reconstituted ovary with the MSC marker CD44 two weeks after transplantation. While the majority of CD44-positive cells also exhibited GFP fluorescence, we observed a subset of cells that exclusively expressed GFP (Fig. 6A). Theca cells were then labeled with CYP17a1; although in most cases GFP- and CYP17a1-positive cells did not colocalize, double-positive cells were still observed in the theca layer (Fig. 6B). Together, our results reveal a pivotal role of BM-MSCs in the restoration of ovarian function. These cells aid the survival of aggregated ovarian tissues during the first several days of reconstruction. Although they do not proliferate or contribute to the formation of the follicle structure, their presence around growing follicles indicates their involvement in the differentiation of the theca layer.

Discussion
In this study, we incorporated BM-MSCs into ovarian cells to reconstitute the ovarian structure using phytohemagglutinin. Although previous studies have demonstrated promising results regarding the ability of MSCs to facilitate the regeneration of bone, blood vessels, skin, and peripheral nerves, and to alleviate symptoms of local ischemia in various organs such as the heart, kidneys, and brain, it is important to note that MSCs have sometimes been viewed as a panacea within the field of tissue repair and regenerative medicine [24, 36]. Studies have also shown that co-culture of MSCs with follicles can promote follicle survival and development [30, 31, 37]. In this study, when we tried to reconstitute the ovarian structure with different concentrations of BM-MSCs, we found that an overload of BM-MSCs hindered the interaction between oocytes and ovarian somatic cells and inhibited follicle formation and development. Therefore, the strongest promoting effect in the reconstituted ovary was found only when BM-MSCs and ovarian cells were mixed at a 1:1 ratio.
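The double-positive analysis above (Fig. 6, A and B) is the kind of overlap that is often quantified with standard colocalization metrics such as Pearson's coefficient between channel intensities. The sketch below applies this to synthetic arrays; it is not the analysis performed in the study:

```python
# Colocalization sketch: Pearson correlation between GFP and CYP17a1
# channel intensities across pixels. Channels are synthetic arrays here.
import numpy as np

rng = np.random.default_rng(2)
gfp = rng.random((128, 128))
cyp17a1 = 0.4 * gfp + 0.6 * rng.random((128, 128))   # partial overlap by design

r = np.corrcoef(gfp.ravel(), cyp17a1.ravel())[0, 1]
print(f"Pearson colocalization coefficient r = {r:.2f}")
```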
When ovarian reconstitution was performed with the optimal number of MSCs, we found that the involvement of MSCs promoted the development of the reconstituted ovaries. First, at the initial stage of ovarian reconstitution, the incorporation of MSCs reduced or even prevented the large loss of ovarian cells, including both oocytes and ovarian somatic cells, caused by the preparation of ovarian aggregates. Existing studies have shown that MSCs have a significant role in repairing damage and inhibiting apoptosis: they create a repair microenvironment by releasing transforming growth factor-beta 1 (TGF-β1) after tissue injury and reduce apoptosis, inflammatory responses, and immune responses at the site of injury [24, 38, 39]. Therefore, MSCs have a reparative effect in the reconstituted ovary. Secondly, using kidney capsule transplantation of recombinant ovaries, we observed that the presence of MSCs resulted in enhanced follicle formation and development compared with ovaries without MSCs. This enhancement was evident in various evaluations, including morphological analysis of follicular growth, follicle counting, PCNA staining, and labeling of secondary and antral follicles with AMH. The activation of the PI3K/mTOR signaling pathway, the expression of oocyte development-related genes such as Bmp15, Gdf9, and Kit, and the expression of hormone production-related genes in the recombinant ovaries further verified these conclusions. It has been shown that MSCs can increase the expression of transforming growth factor-beta (TGF-β) family members in follicles during co-culture [37]. The TGF-β family, including Gdf9 and Bmp15, plays an important role in the development of preantral follicles [20, 40, 41]. This is consistent with our experimental finding that follicle development in MSC-reconstituted ovaries was better than in the control group.
To evaluate whether the artificial recombinant ovary constructed with the participation of BM-MSCs supports normal folliculogenesis and oocyte maturation, oocytes at the GV stage were removed from the recombinant ovary and matured in vitro (IVM), and mature MII oocytes were also collected for IVF to evaluate embryonic development from two-cell embryos to blastocysts. These results showed that the recombinant ovary with BM-MSCs could complete normal follicular development and oocyte maturation. In other words, the recombinant ovary model with MSCs that we established is a new method of constructing artificial ovaries to rebuild normal ovarian function. It is well known that the expansion and improvement of fertility preservation methods for female cancer patients have consistently posed challenges and remained prominent areas of research in the field of reproduction. Artificial ovaries and various follicle culture methods have garnered particular attention in this context [6, 16, 42]. Among them, 3D-printed artificial ovaries consisting of primordial, primary, and secondary follicles on 3D-printed hydrogel scaffolds have gained wide attention [17]. However, such artificial ovaries based on follicle units cannot solve the problem of fertility loss due to folliculogenesis disorders or gamete deficiency. Regeneration of reproductive organs by inducing pluripotent stem cells and embryonic stem cells to generate gametes in female mice has been reported [43]. With this cell-based artificial ovary model, our study demonstrates that adult stem cells can also be used to construct artificial ovaries together with stem cell-derived artificial eggs.

Through dynamic tracking of GFP-labeled BM-MSCs in reconstituted ovaries, we found that the MSCs did not proliferate in the reconstituted ovary and did not participate in the formation of follicles after reconstitution. In the recombinant ovary, BM-MSCs accompanied the growth and development of the follicles, and their distribution gradually gathered around the periphery of the growing follicles. Previous studies have demonstrated that the follicular theca layer consists of theca endocrine cells located internally and an outer layer of fibrous connective tissue derived from fibroblast-like cells. The theca exterior also contains vascular tissue, immune cells, and stroma [44].

Therefore, the follicular theca layer serves crucial roles in maintaining the structural integrity of the follicle, as well as providing nutrients to the granulosa cell layer and oocytes. Furthermore, the theca cells are responsible for producing important endocrine regulators such as androgens (testosterone and dihydrotestosterone), as well as growth regulators such as bone morphogenetic proteins (BMPs) and TGF-β. These regulatory molecules are essential for facilitating follicular growth and development [45-47]. In this study, although most GFP-positive cells expressed the MSC marker CD44, we still found some theca cells showing GFP and CYP17a1 double-positive signals. This suggests the potential of BM-MSCs to differentiate into theca cells; however, more experiments are needed to test this conjecture. We were also concerned about the safety of MSCs in ovarian recombination; we did not observe MSC proliferation, and no tumors or other undesirable conditions were found during long-term tracking. Therefore, we conclude that MSCs have good application prospects for constructing artificial ovaries.
Conclusions
In summary, we established a reconstituted ovary model using newborn ovarian cells, including oocytes and somatic cells, together with BM-MSCs. We found that the addition of BM-MSCs increased the survival rate of oocytes in the recombinant ovary and promoted follicular growth and development. The MSC recombinant ovary exhibited normal folliculogenesis and oocyte development and maturation. During the growth and development of the recombinant ovary, BM-MSCs gradually spread around the growing follicles. After immunostaining with the theca cell marker CYP17a1, some theca cells showed double-positive signals with the GFP fluorescence of BM-MSCs. Whether they can differentiate into theca cells remains to be studied further. Our results reveal that MSCs may be a good resource for reconstructing ovarian tissue in female fertility preservation.

Fig. 1 Reconstruction of ovarian function with mouse mesenchymal stem cells and ovarian cells. (A) Scheme of the reconstitution procedure. (B) H&E staining of reaggregated ovaries with different densities of MSCs 14 days after transplantation. (C) Numbers of oocytes in reaggregated ovaries with different densities of MSCs. n = 5 per group. (D) Long-term observation of tissue morphology in reconstituted ovaries with MSCs: H&E staining of reaggregated ovaries with MSCs at 4 weeks and 6 weeks after transplantation. Data are shown as mean ± SEM. *p < 0.05, **p < 0.01, ***p < 0.001. Scale bars, 200 µm.

Fig. 2 Mouse MSCs benefit ovarian development in reconstituted ovaries. Reconstituted ovaries were obtained after 14 days of in vivo development. (A) Morphology of control rOvaries and MSC-rOvaries 14 days after transplantation. Scale bars, 1 mm. (B) H&E staining was performed to observe the internal growth of the reconstituted ovaries; follicular development in the MSC-rOvaries group was better than in the control group. Scale bar, 50 µm. (C) Distribution of follicles in rOvaries without and with MSCs. n = 3 per group. Primo, primordial follicle; Prima, primary follicle; Sec, secondary follicle; Ant, antral follicle. Scale bar, 50 µm. (D) Relative expression of follicular growth- and development-related genes in the rOvaries assessed by real-time PCR. n = 3 per group. Data are presented as means ± SEM. *p < 0.05 and ***p < 0.001. (E) Immunohistochemical analysis of PCNA expression in reaggregated ovaries 14 days after transplantation. Scale bar, 50 µm. (F) Immunohistochemical analysis of AMH expression in aggregated ovaries 14 days after transplantation. Scale bar, 50 µm.

Fig. 3
Evaluation of oocyte quality by IVM, IVF, and early embryonic development. Aggregated ovaries with mouse MSCs were transplanted into the kidney capsules of recipient mice for 18 days. GV oocytes for IVM were collected by directly puncturing fully grown follicles under the microscope, and mature MII oocytes were retrieved after a single injection of hCG into recipient mice 24 days after transplantation (IVO). (A) Morphology of MII oocytes after IVM; oocytes from superovulated ovaries of 3-week-old mice were used as negative controls (Negative ctrl). Immunofluorescence of β-tubulin on the spindles of MII oocytes. Green, β-tubulin; blue, nuclear staining with Hoechst 33342. (B) Percentages of MII oocytes with aberrant spindles after IVM treatment. (C) Morphology of MII oocytes obtained from MSC-rOvaries after a single injection of hCG into recipient mice 24 days after transplantation (IVO), with immunofluorescence of β-tubulin on the spindles of MII oocytes. Green, β-tubulin; blue, nuclear staining with Hoechst 33342. (D) Representative 2-cell embryos and blastocysts after IVO oocytes were fertilized in vitro (IVF). (E) Efficiency of early embryonic development: percentage of IVO mature oocytes in the negative control and MSC-rOvaries groups capable of developing into 2-cell embryos and blastocysts. All experiments were performed with at least three replicates. Data are shown as mean ± SEM. *p < 0.05. Scale bar, 50 µm.

Fig. 4 Mouse mesenchymal stem cells reduce apoptosis in reconstituted ovaries. (A) TUNEL assay of reaggregated ovaries after in vitro culture. (B) Control rOvaries cultured in vitro for 24 h, with TUNEL co-stained by immunofluorescence for the germ cell marker VASA; the apoptotic signal was present in VASA-positive oocytes. Scale bar, 50 µm. (C) Statistical analysis of TUNEL-detected apoptosis within rOvary tissues at various time points in the early post-recombination period. ***p < 0.001. n = 4 per group. GFP-labeled MSCs were used to track the growth of reaggregated ovaries. (D) GFP-MSCs in reaggregated ovaries after recombination. (E) Statistical analysis of the scanning intensity of the green fluorescent signals carried by mesenchymal stem cells in MSC-rOvaries. n = 5 per group. *p < 0.05. (F) Immunofluorescence labeling of proliferation with PCNA antibody; the red PCNA signal did not colocalize with GFP. (G) Western blot showing increased phosphorylation of Akt, mTOR, and RPS6 in BM-MSC-reconstituted ovaries after 96 h of culture. Scale bar, 50 µm.

Fig. 5 Dynamic tracking of green fluorescent protein-labeled MSCs in recombinant ovaries from 24 h to 3 weeks after aggregation. Scale bar, 50 µm.
2024-04-25T05:04:04.812Z
2024-04-23T00:00:00.000
{ "year": 2024, "sha1": "c80cdeb51f2804e09363a82e1e8ae39123924333", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "c80cdeb51f2804e09363a82e1e8ae39123924333", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
53280848
pes2o/s2orc
v3-fos-license
An Acoustic Tracking Approach for Medical Ultrasound Image Simulator

Ultrasound examinations are a standard procedure in the clinical diagnosis of many diseases. However, the efficacy of an ultrasound examination is highly dependent on the skill and experience of the operator, which has prompted proposals for ultrasound simulation systems to facilitate training and education in hospitals and medical schools. The key technology of a medical ultrasound simulation system is the probe tracking method used to determine the position and inclination angle of the sham probe, since this information is used to display the ultrasound images in real time. This study investigated a novel acoustic tracking approach for an ultrasound simulation system that exhibits high sensitivity and is cost-effective. Five air-coupled ultrasound elements are arranged as a 1D array in front of a sham probe for transmitting the acoustic signals, and a 5 × 5 2D array of receiving elements is used to receive the acoustic signals from the moving transmitting elements. Since the patterns of the received signals differ for different positions and angles of the moving probe, the probe can be tracked precisely by the acoustic tracking approach. After the probe position has been determined by the system, the corresponding ultrasound image is immediately displayed on the screen. The system performance was verified by scanning three different subjects as image databases: a simple commercial phantom, a complicated self-made phantom, and a porcine heart. The experimental results indicated that the tracking and angle accuracies of the presented acoustic tracking approach were 0.7 mm and 0.5°, respectively. The performance of the acoustic tracking approach is compared with those of other tracking technologies.

Electronic supplementary material The online version of this article (doi:10.1007/s40846-017-0258-9) contains supplementary material, which is available to authorized users.

Introduction

Medical ultrasound examinations have become essential for diagnostic, therapeutic, and surgical procedures in hospitals and other clinical environments. An ultrasound system can provide high-resolution cross-section images of the abdomen and its internal organs that are useful in many clinical applications, such as cardiology, obstetrics, and gynecology, as well as imaging of the breast, thyroid, and vascular and musculoskeletal systems. However, the efficacy of an ultrasound examination is highly dependent on the skill of the operator. Misjudgment can occur if the operator lacks sufficient experience, particularly when detecting rare disorders. It is therefore very important that junior sonographers receive a training course in ultrasound examinations [1]. According to the recommendations from the American College of Cardiology and the American Heart Association, trainees who intend to perform transthoracic echocardiography in adult patients need 12 months of training to achieve proficiency, which includes a minimum of 300 examinations and the interpretation of 450 Doppler examinations [2, 3].
The requirement for such a large number of examinations represents a large burden on both hospitals and trainees, and moreover it may still be insufficient for ensuring that trainees are able to detect cases of rare disorders. This situation has prompted proposals for the development of an ultrasound simulation system for assisting ultrasound examination training and education. An ultrasound simulation system can provide very effective training without impacting patient safety because it can simulate highly accurate scenarios, uses very realistic tools, and affords opportunities to manage patient complications [4]. The high dependence of the outcome of an ultrasound examination on the probe-handling skill of the operator means that the system for tracking the probe position is a very important component of an ultrasound simulation system [5]. Several tracking approaches have been developed in the last decade, including optical tracking (OT), electromagnetic tracking (EMT), and inertial tracking (IT). Any tracking system can be characterized by its sampling rate, sensitivity, precision, working range, and degrees of freedom (DoF), and each tracking approach has its own advantages and disadvantages.

The OT approach for ultrasound simulation systems provides high accuracy, high update rates, and a relatively large workspace [6-8]. For instance, an educational ultrasound simulation system for the abdominal region that integrates Web cameras and several planar optical markers printed on paper sheets has been reported [9]. However, OT requires the line of sight to be maintained between the tracking device and the probe, and so the presence of any obstacle, ambient light, or infrared radiation between the detector and markers degrades the performance of the OT approach.

In EMT, a magnetic field sensor placed on a probe measures electrical currents induced as the probe moves in a magnetic field generated by an electrical field generator [10-17]. However, EMT has the disadvantage that signals from sources such as power cables and surrounding instruments can interfere with the tracking system signals and thereby impair the tracking accuracy. This makes it challenging to use EMT in an environment where various metallic objects are moving around in the magnetic field generated by the electrical field generator. Nevertheless, EMT is still the most popular approach for tracking the probe position in ultrasound simulation systems [18-22].

The IT approach is a navigation technique that employs accelerometers and gyroscopes to track the position and orientation of an object relative to a known starting point. While IT is cheaper than other approaches, the large measurement error that accumulates over time is a major disadvantage for an ultrasound simulation system [23].

The acoustic method has also been used to track the probe position in a 3D space [24-32]. In this method, sound-emitting devices are mounted on the ultrasound probe, and fixed microphones are mounted above the patient. The microphones receive the sound signals from the emitting devices as the probe is moving, and the position and orientation of the probe are then determined from the speed of sound in air between each emitter and microphone. However, the microphones must be placed over the patient and must be close enough to the emitters to obtain a good signal-to-noise ratio.
In addition, many studies have used several tracking approaches for 3D ultrasound imaging [33-36]. The present study investigated a novel acoustic tracking approach for use in a medical ultrasound simulation system. Air-coupled ultrasound elements are embedded in the front of a sham ultrasound probe for transmitting the acoustic signals, and the position and orientation of the sham probe are tracked by receiving the acoustic signals using 2D air-coupled ultrasound elements. After the position of the sham probe is identified by the acoustic tracking approach, the corresponding ultrasound image is displayed according to the position of a real ultrasound examination in the image database, as obtained previously via mechanical scanning of subjects. The validity of this approach was verified in phantom and in vitro porcine heart experiments, and the system performance was compared with those of several commercial ultrasound simulation systems.

System Description

Figure 1 shows a block diagram of the ultrasound simulation system based on the acoustic tracking approach. Air-coupled ultrasound elements were used for transmitting and receiving the acoustic signals (400ET080 and 400ER080, respectively; Prowave, Taipei, Taiwan). The ultrasound frequency of the air-coupled elements is 40 kHz, and the bandwidth of the transmitting element is 1.5 kHz. Five transmitting elements were embedded as a 1D array in front of a sham ultrasound probe, and a five-channel continuous-wave generator with an output voltage of 15 Vpp was designed for exciting the elements. The sham ultrasound probe was made using a 3D printer according to the shape of a commercial ultrasound probe (12L5, Terason, Massachusetts, USA). Moreover, each element was isolated by separating pieces in order to reduce interference from adjacent elements, as shown in Fig. 2(a). A 5 × 5 2D array of receiving elements was used to receive the acoustic signals from the moving transmitting elements (in the sham ultrasound probe), as shown in Fig. 2(b). Each received signal passed through an independent amplifier, peak detector, and low-pass filter. The peak detector converted the input sine-wave signal into a DC voltage and thereby decreased the frequency of signal changes for the subsequent processing by an analog-to-digital converter. A first-order low-pass filter was used to remove the high-frequency noise. The 25 receiver elements were integrated on a single circuit board. A data acquisition module (USB 6343, National Instruments, Texas, USA) with 32 analog input ports was used to record the 2D DC voltage map in a personal computer (PC) via a USB interface; the sampling rate and resolution of the USB 6343 module are 500 kS/s and 16 bits, respectively. A graphical user interface was designed in LabVIEW software (version 2013, National Instruments) to allow real-time observation of variations in the 25 received acoustic signals. The user interface also displayed 2D ultrasound images corresponding to the position of the sham probe. The delay time between the motion of the sham probe and the image displayed in the software is 0.05 s. The distance between the receiving elements and the transmitting elements is 10 mm in this system. Polyurethane film covers the receiver unit, and the sham probe can be moved freely over it by the operator. Since the probe was in contact with the film during scanning, the distance between the receiving and transmitting elements remained stable in this system.
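The analog front end described above (amplifier, peak detector and first-order low-pass filter) has a simple digital counterpart that may help clarify what each channel delivers to the data acquisition module. The following Python sketch is illustrative only, assuming a 40 kHz received waveform sampled at 500 kS/s; it is not the paper's actual circuit, and the function name and cutoff value are assumptions.

import numpy as np

def channel_dc_level(signal, fs=500_000, cutoff=1_000.0):
    """Approximate one receiver channel's analog chain in software:
    rectify the 40 kHz sine (peak detection) and smooth it with a
    first-order low-pass filter to obtain a slowly varying DC level."""
    rectified = np.abs(signal)  # idealized full-wave rectification
    # First-order IIR low-pass: y[n] = y[n-1] + alpha * (x[n] - y[n-1])
    alpha = 1.0 / (1.0 + fs / (2.0 * np.pi * cutoff))
    y = 0.0
    for x in rectified:
        y += alpha * (x - y)
    return y  # one entry of the 5 x 5 DC voltage map

Each of the 25 channels would contribute one such DC value, so a single snapshot of the receiver array forms a 5 × 5 voltage map.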
Position Tracking Procedure

The sham probe was fixed on a four-axis motor platform and placed above the 2D array of receiving elements. As the sham probe was moved in the four axis directions (X, Y, and Z directions, and the angle of inclination), the received acoustic signals and the position information were recorded simultaneously in the position calibration database. Thus, the spatial relationship between the estimated positions and orientations and their corresponding images was obtained as the image database.

Figure 3 shows the procedure of the acoustic tracking approach. The digital input is the 5 × 5 DC voltage map derived from the received acoustic signals. Since the digital input was obtained at a sampling rate of 500 kS/s, 100 frames of 2D array data were averaged to form the position information. The height of a color bar (voltage map) in Fig. 3 represents the amplitude of the received acoustic signal at the defined position (X). The position information was matched against the position calibration database (Y_i) to orient the position of the sham probe; in the figure, N denotes the total number of positions obtained from the motor platform. Each Y_i was subtracted from X, and the differences were squared and summed across all values to give R_i. The minimum value of R_i, which is close to 0, corresponds to the correct position in the calibration database, meaning that the position of the sham probe can be tracked effectively. After all R_i were calculated, a minimum-value search was performed over R_1 to R_N to find the value closest to 0. Once R_min was found, the system retrieved the image assigned to that position; for example, if the minimum occurred at R_50, the system would capture the image assigned to position 50 and display it on the monitor. (A minimal sketch of this matching step is given below.)

After the position and orientation of the sham probe are identified using the acoustic tracking approach, the real ultrasound image from a specific cross section of an organ is displayed on the system screen. The image database was obtained using a clinical ultrasound scanner (t3000, Terason) with a linear array probe (12L5). The probe was fixed on a four-axis motor platform so that it could be moved freely in 3D space by a motor controller. Open-source software is available for the Terason t3000 scanner that allows applications to be developed for the Windows operating system, which allowed real-time ultrasound images to be acquired frame by frame on the PC as the probe was moved across the area of interest. The position information from the motor and the image information from the scanner were integrated together in the image database. When the minimum value of R_i is determined, the corresponding ultrasound image is displayed on the user interface.
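The matching step lends itself to a compact implementation. Below is a minimal sketch, assuming the raw voltage maps and the calibration templates are available as NumPy arrays; the function and variable names are illustrative and not from the paper.

import numpy as np

def track_position(voltage_frames, calibration_maps):
    """Return the index of the best-matching calibration position.

    voltage_frames:   (100, 5, 5) array of raw DC voltage maps; 100 frames
                      are averaged to suppress hand tremor, as in the paper.
    calibration_maps: (N, 5, 5) array of templates Y_i recorded with the
                      motor platform during calibration.
    """
    X = voltage_frames.mean(axis=0)                     # averaged map X
    R = ((calibration_maps - X) ** 2).sum(axis=(1, 2))  # R_i for every i
    return int(np.argmin(R))                            # R_min -> position index

The returned index would then be used to look up and display the pre-recorded ultrasound frame assigned to that position and orientation.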
Sample Preparation

Three different subjects were used to verify the performance of the acoustic tracking approach for an ultrasound simulation system. The first subject was a commercial ultrasound phantom (040GSE, CIRS, Virginia, USA), which allowed for evaluations over ultrasound frequencies ranging from 2 to 15 MHz. Since the structure of this commercial ultrasound phantom is quite simple, a second phantom was constructed that had a complicated structure. Figure 4(a) shows a photograph of the self-made phantom, which contained several PVC tubes surrounded by 5% gelatin (type A, Sigma-Aldrich, Missouri, USA). The third subject was a porcine heart obtained from a local slaughterhouse, which was used for an in vitro experiment, as shown in Fig. 4(b). The gelatin solution was injected into the heart chambers in order to remove air from inside the atria and ventricles. After the gelatin solution had solidified, sticking and stitching were used to maintain the position of the porcine heart during the ultrasound examination.

Results and Discussion

The size of the receiver unit is 55 × 55 mm, including the 25 elements. The receiving elements were fixed into an acrylic holder and soldered onto a circuit board, and the transmitting elements were connected to the continuous-wave generator. Figure 5 shows photographs of the ultrasound probe and how it is used: polyurethane film covers the receiver unit, and the sham probe can be moved freely over it by the operator. Since the characteristics of each transmitting element differ slightly due to the fabrication procedure, the amplitude of the exciting voltage for each element needs to be adjusted individually to ensure that all five elements have the same output energy. Similarly, the gains of the receiving amplifiers were adjusted individually for each receiving element. The high sensitivity of the receiving elements means that even trembling of the hand causes variations in the received DC voltage map. Therefore, 100-times averaging is applied to the 2D voltage map before the position is identified. While increasing the number of averages makes the digital input more stable, it can also cause a delay due to the time required for the calculations, and so 100-times averaging was the best trade-off for our system.

There are gaps between the elements in the receiver unit. However, these do not influence the acoustic signals received from the sham probe because the elements are not focused, and so the effect of acoustic wave dispersion means that adjacent elements can still receive the signal from the transmitting elements. The size of the receiving 2D array is 5.5 × 5.5 cm in the current system. The appropriate size of the receiving unit depends on the application; for example, the scanning area should be larger for abdominal ultrasound examinations but smaller for carotid imaging. In the present study, a 5 × 5 2D array of receiving elements was used as a feasibility study, but the scanning area can be increased by adding more receiving elements.

Figure 6 shows the user interface of the simulation system. As the sham probe is scanned over the receiver unit, the real-time image of the object is displayed on the screen. Figure 7a-c show video screenshots from the simulation system for the commercial ultrasound phantom, self-made phantom, and porcine heart, respectively. In the videos displayed on the screen of the system, dynamic images are presented on the left side as the sham probe is swept over the receiver unit, while the corresponding real images obtained from the clinical ultrasound system by scanning a real subject are displayed on the right side. In short, the operator uses the sham probe to perform an artificial ultrasound examination of the receiver unit, while real images obtained from the subjects are displayed in real time on the simulation system. The images in the simulator are of high quality since the image database comes from a clinical ultrasound system.
While slight shaking sometimes appears on the left side of the videos, which can be attributed to the acoustic tracking approach being too sensitive in some cases, the images displayed in the simulation system are generally well matched with those of the clinical system, as shown in Fig. 7. The porcine heart was used as the test organ because it has an obvious anatomical structure, including the myocardium and four chambers. In the proposed ultrasound simulation system, the images were displayed as the ultrasound probe position was determined by the acoustic tracking approach. Since the images were recorded previously according to different clinical situations, the frame rate is not determined by the tracking system; in other words, once the position and orientation of the probe were determined by the proposed system, the corresponding pre-recorded images were displayed on the screen. The size of the image database also depends on the application: taking the porcine heart experiment as an example, 78 images were collected per lateral and axial motion, and 720 images per rotation motion. 3D ultrasound images could also be applied in this system, but 2D images were used first to test the system performance in this study.

While the experimental results showed that the simulation system performs well, comparison with commercial ultrasound systems revealed that the acoustic tracking approach still has the following limitations. The tracking accuracy was determined by measuring the minimum horizontal distance at which two adjacent positions of the array remain distinguishable. The sham probe was fixed on the motor to sweep the receiver unit at step settings from 0.001 to 1 mm. The experimental results show that 0.7 mm is the minimum horizontal distance between two adjacent positions that can be tracked using the acoustic approach; in other words, the simulation system may display the same image even when the sham probe is moved by up to 0.7 mm. The accuracy of the probe angle was also measured by inclining the probe from 0.2° to 2°, and the results show that a resolution of 0.5° is achievable. In a trial involving sonographers with 10 years of experience in ultrasound examinations, they considered that the moving speed and sensitivity of the sham probe kept up with the images displayed on the simulation system, and hence provided an accurate simulation of a real examination.

Since many studies have used the OT [24] and EMT [18-20] approaches for ultrasound simulation systems, the performance of the presented acoustic tracking approach was compared with those of the two other approaches (Table 1). Most of the OT and EMT systems used in these studies were developed by the same company (Northern Digital Inc., Waterloo, Canada). The Polaris Spectra, an OT system, provides the best tracking accuracy (0.25 mm), and the distance from the markers to the camera varies between 95 and 240 cm. The Polaris Vicra also provides a tracking accuracy of 0.25 mm, with distances from the markers to the camera of between 55.7 and 133.6 cm. Two types of EMT system based on an electromagnetic field generator are used in medical ultrasound simulation systems. For a planar field generator, the tracking accuracies were 0.7 and 0.48 mm and the angle accuracies were 0.2° and 0.3° for sensors with five and six DoF, respectively; the corresponding values for a tabletop field generator were 1.2 and 0.8 mm and 0.5° and 0.7°.
The accuracy of the presented acoustic tracking approach falls between those of the OT and EMT approaches. However, as mentioned above, the acoustic approach overcomes some important disadvantages of the OT and EMT approaches, such as the line-of-sight problem and electromagnetic interference. Acoustic tracking methods may be influenced by the environment; for example, the sound velocity in air depends on temperature and humidity. However, sound velocity was not used as the parameter to determine the probe position in our system; instead, ultrasonic attenuation was used (the amplitude of the received signals was detected). When the system is operated at room temperature with stable humidity, the effect of the environment on the acoustic tracking accuracy is therefore limited. Combining the acoustic tracking approach with other approaches (OT and/or EMT) for developing ultrasound simulation systems should be considered in future work.

Conclusion

This study investigated a novel acoustic tracking approach for an ultrasound simulation system in which air-coupled ultrasound elements are the key components. Based on the acoustic signals received from a sham ultrasound probe, the position and angle of the moving sham probe can be detected precisely, and the corresponding ultrasound image is displayed simultaneously on the screen. The system performance was verified using three different subjects, with the results showing that the dynamic images from the simulator closely match those from an actual clinical ultrasound system. The tracking and angle accuracies of the presented acoustic tracking approach were 0.7 mm and 0.5°, respectively. Future studies should focus on constructing a database of clinical ultrasound images, particularly for rare disorders.
2018-11-15T22:16:15.533Z
2017-06-21T00:00:00.000
{ "year": 2017, "sha1": "48d7c8b07b55e967cb2e01fcb0056616ceb5bfc6", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s40846-017-0258-9.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "48d7c8b07b55e967cb2e01fcb0056616ceb5bfc6", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Engineering", "Medicine" ] }
11077474
pes2o/s2orc
v3-fos-license
Portal of medical data models: information infrastructure for medical research and healthcare

Introduction: Information systems are a key success factor for medical research and healthcare. Currently, most of these systems apply heterogeneous and proprietary data models, which impede data exchange and integrated data analysis for scientific purposes. Due to the complexity of medical terminology, the overall number of medical data models is very high. At present, the vast majority of these models are not available to the scientific community. The objective of the Portal of Medical Data Models (MDM, https://medical-data-models.org) is to foster sharing of medical data models.

Methods: MDM is a registered European information infrastructure. It provides a multilingual platform for exchange and discussion of data models in medicine, both for medical research and healthcare. The system is developed in collaboration with the University Library of Münster to ensure sustainability. A web front-end enables users to search, view, download and discuss data models. Eleven different export formats are available (ODM, PDF, CDA, CSV, MACRO-XML, REDCap, SQL, SPSS, ADL, R, XLSX). MDM contents were analysed with descriptive statistics.

Results: MDM contains 4387 current versions of data models (in total 10 963 versions). 2475 of these models belong to oncology trials. The most common keyword (n = 3826) is 'Clinical Trial'; the most frequent diseases are breast cancer, leukemia, lung and colorectal neoplasms. The most common languages of data elements are English (n = 328 557) and German (n = 68 738). Semantic annotations (UMLS codes) are available for 108 412 data items, 2453 item groups and 35 361 code list items. Overall, 335 087 UMLS codes are assigned, with 21 847 unique codes. A few UMLS codes are used several thousand times, but there is a long tail of rarely used codes in the frequency distribution.

Discussion: Expected benefits of the MDM portal are improved and accelerated design of medical data models by sharing best practice, more standardised data models with semantic annotation and better information exchange between information systems, in particular Electronic Data Capture (EDC) and Electronic Health Record (EHR) systems. Contents of the MDM portal need to be further expanded to reach broad coverage of all relevant medical domains.

Database URL: https://medical-data-models.org

Introduction

Medical data models describe data structures of information systems in medicine. For example, a medical history form of a clinical trial contains data elements regarding previous diseases like myocardial infarction. This list of data elements, including properties like element name, element description and data type, can be considered a data model. These models are of key importance for building study databases, because they determine what kind of data analysis is possible for any medical topic of interest. Despite many initiatives for transparency in clinical research [such as AllTrials (1)], most medical data models are not available to the scientific community, neither in medical research nor in routine healthcare.

The search space for medical data models has astronomical dimensions: a typical documentation form consists of approximately 40 data elements. The Systematized Nomenclature of Medicine Clinical Terms (SNOMED CT) (2) contains >300 000 non-synonymous concepts, i.e. there are at least 300 000 options for a data element. This corresponds to 300 000^40 ≈ 1.5E171 possible documentation forms, many more than there are atoms in the universe (approximately 1E80).
The subset of medically useful data models is certainly much smaller, but still very large: in the field of medical research, approximately 200 000 clinical studies are registered (3). The average amount of case report forms (CRFs) per patient in a clinical trial increased from 55 to 180 pages in recent years (4). Therefore, more than 10 million different CRFs were used in these clinical studies. Because of this variability and complexity, information systems in medicine constitute a big data challenge. Eligibility criteria are available on the Internet, but they cover only 1-2 pages out of approximately 100 pages per trial; therefore, the vast majority of those forms is not directly available to the scientific community.

In routine healthcare, a disease-specific data model is needed to address all relevant patient attributes. The current international classification of diseases [ICD version 10 (5)] lists >13 000 diagnoses. Approximately 400 data elements (6) are needed per diagnosis in routine healthcare, corresponding to more than 5 million data elements. However, data models in routine healthcare are not yet standardised and multilingual (on a global basis, patients report their symptoms in more than 200 languages); therefore, far more than 5 million data elements are actually being used. Regarding routine healthcare, most data models are not available to the public because they are implemented within commercial software products.

Because medical data models are not accessible to the scientific community, re-use of data models is very limited and 'the wheel is re-invented' worldwide in medical information systems. The objective of the Portal of Medical Data Models (MDM) (7) is to overcome this lack of transparency. MDM is a registered German and European information infrastructure (8, 9), i.e. it provides shared and sustainable access to scientific services. Specifically, it provides a multilingual platform for exchange and discussion of data models in medicine, both for medical research and healthcare. In the following, a short overview of the technical approach is given and a detailed analysis of the contents currently available to the scientific community is provided.

IT architecture and software tools

The technical approach of the MDM portal has been described previously (10). In summary, medical data models are stored in CDISC ODM (11) format on a web server. ODM structures are parsed and transferred to a MySQL database. Converters for several export formats of data models (12, 13) are integrated into the portal (see Table 1). Semantic annotation with Unified Medical Language System (UMLS) codes (14, 15) is provided for the majority of data elements. Software components of the portal are written in Java, Ruby on Rails and R. Registered users can search (Figure 1), view (Figure 2), download and comment on data models. Dedicated users can upload new data models with version control. A web-based editor for data models is integrated into the portal.

Analysis of portal contents

The MDM portal database was analysed using R scripts (16) with the library RMySQL. The time course of available data models was analysed, i.e. the cumulative number of data models from the start of the system until 2015. In CDISC ODM, data items are structured by item groups, which are organised in forms. Each data item is characterized by a name, e.g. 'patient age', a data type, such as 'integer', and optional translations as well as one or more UMLS codes. Each data model can be updated (via upload or the integrated editor), for instance by creation of a new version. Only the latest version of a data model was counted to determine the total number of models. The time course of created and updated data models was analysed. The number of versions per data model was described with a frequency distribution. The most frequent keywords and their combinations were analysed with an UpSet plot (17). Keywords are based upon medical subject headings (MeSH) (18) with custom extensions. Data models were categorised into the following domains: clinical trial, electronic health record (EHR), registry, quality assurance and other (e.g. usable in more than one domain). The most frequent data model types were determined in general and specifically for clinical trial-related forms. UMLS codes are used for semantic annotations in the MDM portal. Descriptive statistics for semantic annotation were generated: (i) number of semantically annotated data items, item groups and code lists; (ii) number of unique UMLS codes; (iii) overall frequency distribution of UMLS codes and number of UMLS codes per data item; (iv) number of UMLS-coded items per data model. MDM is a multilingual system; therefore, the most frequently used languages of data items were also determined. (An illustrative sketch of this kind of frequency analysis is given below.)
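The portal's own analyses were written in R with RMySQL; purely as an illustration of the frequency-distribution step, the following Python sketch computes the UMLS code statistics from a hypothetical flat export of annotations (the file name and column name are assumptions, not from the paper).

from collections import Counter
import csv

# Hypothetical export: one row per semantic annotation, with a
# column "umls_code" holding the assigned UMLS concept code.
with open("annotations.csv", newline="") as f:
    codes = [row["umls_code"] for row in csv.DictReader(f)]

freq = Counter(codes)                           # occurrences per UMLS code
print("total annotations:", len(codes))         # e.g. 335 087 in the portal
print("unique codes:", len(freq))               # e.g. 21 847 in the portal
print("most frequent:", freq.most_common(5))
# The long tail: codes that occur only once
print("singletons:", sum(1 for n in freq.values() if n == 1))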
Results

Figure 3 presents the total number of data models between 2011 and 2015. In the third quarter of 2012, a large set of models was uploaded. These models were available from Internet sources and were processed using custom-built converters. In the first quarter of 2015, a large proportion of these models was updated, e.g. typing errors were corrected and UMLS codes were modified. In total, 4387 data models were available (as of November 2015). In a period of three months (August-October 2015), 78 266 data models were viewed by portal users and 354 models were downloaded. Figure 4 shows the frequency distribution of data model versions: overall, there were 10 963 model versions, median 3 per model (range 1-24). These model versions contained 62 327 item groups, 397 403 items and 111 891 code lists. The most frequent data types of items were text (55.6%), boolean (14.2%), date (10.4%), integer (10.0%) and float (9.4%).

Keywords

Each data model can be tagged with one or several keywords from the MeSH thesaurus. Figure 5 presents the most frequent keywords and their combinations as an UpSet plot. Clearly, most contents of the MDM portal were derived from clinical trials. The most frequent diseases were breast cancer, leukemia, lung and colorectal neoplasms. Because eligibility forms of clinical trials are available on the Internet, 'Eligibility Determination' is a frequent keyword. Table 2 presents the number of data models by major disease area. The majority of data models belonged to oncology. In addition, there were disease-independent models, e.g. regarding discharge letters. In 2015, 75% of data models were updated, and in total 4387 data models were available. 201 data models were derived from EHR systems; the top EHR model types were patient discharge, medical history taking and clinical conference, and the most frequent disease-specific EHR models were related to prostatic neoplasms, breast cancer and leukemia. 114 models were derived from registries, predominantly oncological and neurological registries. Quality assurance was addressed in 71 models, mainly derived from German AQUA forms (19); these forms cover all domains of mandatory quality assurance in Germany [>4 million documented cases (20)].
In addition, there were 176 models which can be used both in a clinical and a research setting.

Semantic annotation

Regarding current model versions, semantic annotations were available for 108 412 items, 2453 item groups and 35 361 code list items. Overall, 335 087 UMLS codes were assigned, with 21 847 unique codes. The most frequent medical concepts were Laboratory Procedures (C0022885) and Physical Examination (C0031809). Figure 6 shows the frequency distribution of UMLS codes. The median number of occurrences per UMLS code was only 1, with a wide range (1-7685). This is an indicator of the semantic richness of medical data items: there is a long list of UMLS codes which are used infrequently. The frequency of UMLS codes per annotated element (items, item groups and code list items) is presented in Figure 7. The median number of codes per element was 1.

Discussion

At present, most medical data models are not available to the scientific community, but there are important advantages of model sharing and Open Metadata (21). Compatible data structures are of key importance for data exchange and integration in medicine. Medical data models should be harmonised as much as possible to enable data integration and analysis for research purposes and to avoid duplicate data entry in healthcare. As outlined in the introduction, there is a huge number of medical data models; therefore, an information infrastructure is needed to support sharing and discussion of data models in medicine.

The portal of medical data models started with approximately 250 models in 2012 (9). As of November 2015, it contains more than 4300 models, in most cases derived from clinical trials. In general, a large proportion of data models is related to oncology. More than 330 000 UMLS codes are assigned to data items, item groups and code lists. UMLS codes were chosen because they provide the largest coverage of medical concepts. Most codes are assigned by human experts. A small set of semantic codes is used very often, but the frequency distribution has a very long tail, i.e. there are many different UMLS codes which are used only once.

4300 models is a considerable number, but there are >13 000 diagnoses in the international classification of diseases [ICD-10 (5)], and each diagnosis will probably have disease-specific data elements: the ICD-10 disease category, e.g. diabetes mellitus type I (E10), is too granular. For each diabetes complication, such as coma (E10.0) or eye complications (E10.3), additional data items are required. This indicates that many more data models are needed to provide broad coverage of all medical domains.

In general, copyright laws regarding data models need to be respected. In our experience, the copyright status of many data models is not clearly specified. This impedes re-use of models in research and routine care. From our perspective, more widespread use of standardised licenses like Creative Commons (22) would be very helpful to foster sharing of data models.

Several electronic data capture (EDC) systems have started to provide CRF libraries to facilitate re-use of data collection instruments. For instance, REDCap (23) provides such a CRF library. It is a popular EDC system from Vanderbilt University with >1500 institutional partners worldwide. The REDCap library started with 128 instruments (24) and has now expanded to 930 data collection forms (as of September 2015). Since REDCap version 6.5.0 (released May 2015), the MDM portal is a directly linked external instrument library of REDCap. The PhenX toolkit (25), funded by the U.S. National Human Genome Research Institute, is another such resource. This list of data model resources regarding EDC and EHR systems is not complete, but almost every system uses its own technical format for data structures (REDCap, OpenClinica, caDSR, CIMI, HL7-CDA and openEHR formats). The MDM portal intends to foster data model sharing between systems with different technical formats: each data model can be exported in several formats (see Table 1). The MDM portal applies CDISC ODM (11), which is an open standard supported by regulatory authorities: CDISC ODM/Define-XML is part of the FDA's Data Standards Catalog, which was announced to become mandatory for new drug applications by the end of 2016 (36). The MDM portal leverages several data model converters from ODM to other data structures.

Another important feature of the portal is semantic annotation. Based on UMLS coding, data elements are semantically enriched to avoid ambiguities due to synonyms and homonyms within the biomedical domain. Semantic codes enable comparative analysis of data models, for instance to ask which data elements are identical or similar between two data models (37); a simple sketch of such a comparison is given after this paragraph. Potential key data elements for specific medical domains can be identified by systematic analysis of the most frequent concept codes (6, 38). More than 335 000 codes are already assigned to items, item groups and code list values in the MDM portal. Certainly, manual curation and validation of these codes are needed. Semi-automatic methods, i.e. expert-based semantic annotation with computer-based suggestions, will stay important in the future (despite fully automated approaches) (39). However, semantic annotation will be even more complicated for weakly structured, non-standardized and probabilistic data sets in personalised medicine (40). At this stage it became evident that a few codes like 'Date in time' are used very often, but there is a long tail of rarely used semantic codes.
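One way to make such a comparison concrete is to compare the sets of UMLS codes assigned to two models. This is a sketch only; the portal's own comparison method is described in reference 37, and the code sets below are illustrative, not real portal data.

def model_similarity(codes_a, codes_b):
    """Jaccard similarity of the UMLS code sets of two data models:
    1.0 means identical semantic content, 0.0 means no shared concepts."""
    a, b = set(codes_a), set(codes_b)
    if not (a or b):
        return 0.0
    return len(a & b) / len(a | b)

# Illustrative code sets (hypothetical models)
model_1 = ["C0022885", "C0031809", "C0011849"]
model_2 = ["C0022885", "C0011849", "C0027051"]
print(model_similarity(model_1, model_2))  # 0.5: two shared codes out of four

Set-based similarity deliberately ignores how often a code occurs within a model; a frequency-weighted measure would be needed if repeated concepts should count more.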
Future work

Given the semantic complexity of medicine, many more data models need to be processed to reach broad coverage. It is planned to deliver another 20 000 data models in the next three years with guidance from an external advisory board of domain experts. A close collaboration with the University Library of Münster was established to make the MDM portal sustainable, both from a technical and a content perspective. Regular user surveys are planned to guide further development accordingly. A single institution is certainly not capable of providing all relevant content; therefore, the MDM portal applies a community-based approach. We encourage medical researchers worldwide to contribute their data models and use the MDM portal as a platform for collaboration.
2018-04-03T00:22:37.001Z
2016-02-11T00:00:00.000
{ "year": 2016, "sha1": "712dad2ada96067a9e06f8b62c349008ebcab1df", "oa_license": "CCBY", "oa_url": "https://academic.oup.com/database/article-pdf/doi/10.1093/database/bav121/8222314/bav121.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "712dad2ada96067a9e06f8b62c349008ebcab1df", "s2fieldsofstudy": [ "Computer Science", "Medicine" ], "extfieldsofstudy": [ "Computer Science", "Medicine" ] }
233326039
pes2o/s2orc
v3-fos-license
Development of cancer support services for patients and their close ones from the Cancer Society of Finland's perspective

ABSTRACT

Purpose: This study examined what support cancer patients and their close ones need and how this support should be organized when developing cancer care pathways. The study focused on the opinions of professionals of the Cancer Society of Finland (CSF), who play a central role in presenting the third sector's perspective on care pathways.

Method: Six semi-structured group interviews were carried out with counselling nurses (n = 12) and managers (n = 9) of the CSF during summer 2017. The results were analysed using content analysis.

Results: Both patients and their close ones need more information, psychosocial support and financial counselling after diagnosis, during rehabilitation and follow-up, at relapse and during the palliative care phase; additionally, close ones require support after the patient's death. Participants emphasized close collaboration between public healthcare and the CSF to meet the needs of patients and their close ones.

Conclusion: Psychosocial support can, and should, be provided as part of the care pathway. This support can be provided by organizations in the third sector, such as the CSF, which have resources in this area.

Introduction

About 35,000 people are diagnosed with cancer every year in Finland, and the Finnish Cancer Registry estimates that there will be 43,000 new cancer diagnoses in the country by 2030 (https://www.cancerregistry.fi/). At the same time, the challenges that municipalities face in covering healthcare costs are increasing, a situation that will not improve due to Finland's ageing population and rapidly falling birth rate. While the quality of medical treatment is high in Finland, psychosocial care for cancer patients and their close ones remains underdeveloped. Thus, there is a need to develop care pathways and support services for both groups. As a patient's close ones and caregivers often act as the patient's most important emotional supporters, developing support for the patient's close ones is also important.

Developing patient care pathways and strengthening the roles of third-sector organizations in patient care are among the key goals of the renewal of Finland's social and healthcare system (Brax, 2018). This includes streamlining patient care pathways in order to increase outpatient care and reduce hospital stays. This foreseeable change increases the need for information and support for both patients and their close ones, and here the third sector could participate.

Treatment of cancer patients in Finland is guided by national and international treatment guidelines. Regional care pathways for cancer patients have been developed for different hospitals and districts, and they define the responsibilities of different actors. However, the care pathways and rehabilitation of cancer patients are still not optimal. This has been stated in other countries as well; according to an international policy report, inefficiencies can exist at the system, institution and individual levels, and support for caregivers is poor. Integrated, patient-centred care should be coordinated across professionals and support systems, and it should be based on the patients' needs and preferences. At present, there is a need to improve integrated care by focusing on care transitions, outpatient care and palliative care (Evans et al., 2015).
There is also a need to identify gaps in patient care and a need for innovation and a systematic set of priorities. When developing care pathways, it is also important to understand the views of different target groups. Further, the international cancer community emphasizes the value of patient engagement in the development of cancer care programmes and policy efforts. In this context, the role of third-sector organizations is significant in encouraging patients and laypeople to improve care systems and pathways and, thus, conditions for patients (Schear et al., 2015).

Previous studies

Previous studies involving cancer patients have shown that timely information (i.e., information about the disease, treatment and financial assistance) and various forms of support are important at different stages of the cancer trajectory (Hartzler et al., 2016; Newby et al., 2015; Zebrack et al., 2007). Close ones and caregivers, who often act as a patient's most important emotional supporters, may experience stress (Gustavsson-Lilius, 2010; Teixeira & Pereira, 2013), emphasizing the importance of recognizing the link between anxiety and depression among those who are close to the patient (Jacobs et al., 2017). There are also studies showing that close ones need more psychosocial support and information about the disease than the patients themselves (Gustavsson-Lilius, 2010; Niemelä et al., 2010).

In Finland, relatively little research has investigated the experiences and support needs of cancer patients' close ones. Studies have focused on the experiences of mental health professionals who use structured family-centred interventions to support the children of cancer patients (Niemelä et al., 2010), the quality of life of patients with breast cancer (Salonen, 2011), perceptions of electronic social support (Yli-Uotila, 2017) and marital relations and health-related quality of life in patients with prostate cancer and their spouses (Harju et al., 2018). Lehto-Järnstedt et al. (2004) and Lehto et al. (2015, 2018) have performed research, for example, on cancer patients' social support. Research in Finland from the perspective of cancer organizations remains limited.

International research on the experiences and support needs of close ones has increased, focusing, for example, on caregivers (Teixeira & Pereira, 2013), peer support groups (Landry-Dattée et al., 2016), spouses (Pauwels et al., 2012), the support needs of children (Bultmann et al., 2014; Wong et al., 2010), patients with hereditary cancer (Rudkjøbing et al., 2015; Vodermaier & Stanton, 2012) and palliative care patients (Brazil et al., 2014; Lundberg et al., 2013). However, there are many people who have no family or whose close ones do not provide support.

Previous studies have structured care pathways during the stages of screening, diagnosis, specialized care, follow-up and (if needed) palliative care. According to Khan et al. (2017), there is increasing emphasis on the use of care plans to guide the organization and delivery of care for cancer patients. However, researchers are still in the early stages of identifying the components and key facilitators of integrated care plan uptake. We were unable to identify a single integrated care plan spanning all stages of the care pathway from diagnosis to survivorship or palliative care.
Basically, there is information about the kinds of support patients and their close ones would need, but in practice, the resources either focus on clinical elements of care (Bao et al., 2016) or are targeted at individual stages of the cancer trajectory (Mayer et al., 2015). Information about the activities and roles of different organizations appears, in general, to be fragmented. However, the use of care plans in general, across various diseases and settings, has helped to reduce in-hospital complications (Rotter et al., 2010), enhanced communication between providers and improved the quality and efficiency of care (Vanhaecht et al., 2010).

Aim of the study

Even though many support services are well established in several sectors, the roles of third-sector actors should be clarified, particularly in the development of patient-centric healthcare, that is, in amplifying the patient's voice. The focus of this research was on finding new insights to develop cancer support services and to strengthen and streamline the role of the third sector, including cancer societies.

The Cancer Society of Finland (CSF) is an umbrella organization that covers the whole country with its 12 regional and 6 patient organizations, with about 200 professionals and 3,500 volunteers. The CSF is a non-profit organization whose funding is based on donations and national competitive funding sources for different projects. It is a strong actor in the cancer support sector. Organizations under the CSF umbrella operate alongside and in collaboration with the public sector and provide information and support for cancer patients as well as their close ones. They provide rehabilitation and psychosocial support, crisis therapy, peer support and support-person services. They also provide support during palliative and end-of-life care and for close ones after the patient's death. In addition, they offer art therapy, physical exercise, nature experiences and themed events. Services are provided according to needs, either as face-to-face meetings or via phone, email or chat through different portals.

Our goal was not to develop formal care pathways but to explore how the CSF could support public healthcare according to the needs of cancer patients and their close ones. Professionals in the third sector, such as the CSF, have deep insight into the needs of patients and their close ones and into problems within the public care system. However, the perceptions of CSF professionals have not previously been studied in detail in Finland. The aim of this study was to investigate what kind of support and information cancer patients and their close ones need and how cancer care pathways should be developed from the perspective of the Cancer Society of Finland (CSF). An additional aim was to investigate how collaboration between the CSF and public healthcare organizations could be developed. The research questions were as follows: (1) What kind of support do cancer patients and their close ones need during the different phases of the cancer trajectory from the perspective of professionals at the CSF? (2) How should cancer care pathways for patients and support services for close ones be developed to better integrate the CSF's services into those of public healthcare?

Participants

The aim was to recruit a representative sample of experts working at cancer societies in Finland under the CSF umbrella. To select participants, two key selection criteria were applied: the principle of maximum variability and the principle of homogeneity (Patton, 1990, pp. 171, 182-183).
This was done by inviting key experts from various societies of the CSF for interviews. A supervisor assisted in the selection of key personnel and, through her recommendations, was able to ensure the representation of a variety of general managers of cancer societies. Individuals were emailed an invitation to participate in the study, and all but one participated. Counselling nurses from three different cancer societies across Finland were also asked to participate in interviews. Their supervisors promised to arrange the interviews, and all counselling nurses agreed.

Data collection

Finally, 12 counselling nurses from three different organizations (3 interviews) and nine general managers of different organizations (3 interviews) were interviewed once (for professional details, see Participant characteristics in the Results section). The main researcher served as moderator (Julius & Waterfield, 2019). The interviews were conducted in Finnish as semi-structured group interviews and included four main themes: (1) care pathways, (2) specialist care and counselling, (3) economic troubles and (4) co-operation between public and non-public organizations. For the selection of questions, we utilized previous studies on the different stages of the illness and the forms of support received and needed (Girgis et al., 2011; Teixeira & Pereira, 2013; Turner et al., 2013). All interviews were performed in a consistent manner. To ensure that everyone was familiar with the interview environment, the counselling nurses' interviews were conducted at their workplaces and the interviews with general managers were conducted at the head office of the CSF. The interviewer tried to make the interview situation relaxed, which was very successful, because the discussions were open and lively. The duration of each interview was 60-90 minutes. Transcription of the interviews was done by a professional editor. The transcript consisted of a total of 112 pages: 58.5 pages of counselling nurse interviews and 53.5 pages of manager interviews.

Data analysis

The interviews were analysed by means of theory-based content analysis. The data analysis commenced with a data-based approach but, towards the end of the analysis, proceeded in a theoretical way, combining the research results with earlier theory (Elo & Kyngäs, 2008). Heli Tiirola conducted the analysis, and Veli-Matti Poutanen and Liisa Pylkkänen commented on it. Initially, the transcribed interviews were systematically read several times, as it was important to get a good overall picture of the collected material. Next, words, phrases and repeated themes were underlined with different coloured pencils while taking note of the material's themes. This was done for all of the material. The analysis proceeded on a data-driven basis with the aim of creating an abstract description of the empirical data. During the analyses, the answers were read, reduced and then merged into categories at different levels of abstraction. The analysis then returned to the research questions, and topics corresponding to these questions were selected (Elo & Kyngäs, 2008). Subsequently, the material was organized into subcategories (the lowest-level categories) relating to upper-level categories, which were merged into the main categories (the highest-level categories).
The following main categories were identified: the support needed by those diagnosed with cancer and their close ones; counselling concerning referral to special professionals (such as psychologists, social workers, physiotherapists, etc.); support needs of patients and close ones at different stages of the care pathway; and development of co-operation between cancer societies and the public sector. Interpretation of the categorized data is the final step of the content analysis. We present a summary of the research results and suggest a new care pathway on the basis of the interviewed professionals' views.

Ethical issues

This study adopted good scientific practices, including care and honesty, from the research design to data preservation. The study followed the principles of the Finnish National Ethical Board (Finnish National Board on Research Integrity TENK, 2019), which indicated that since the study would not directly target patients or close ones, it did not require the opinion of the Research Ethics Committee. Issues addressed in the study, such as ideas for better forms of support, are not sensitive topics and do not interfere with human integrity. The organizations in which the participants worked had approved their participation in the study, and the participants were informed that they were allowed to participate in the interviews. Non-participation did not affect the treatment of employees (Fouka & Mantzorou, 2011). The study protocol included obtaining informed consent. Participation in the study was voluntary, and participants had the right to terminate their participation at any time. The participants received in-depth information about the implementation of the study as well as the opportunity to discuss with the researchers any matter related to the study. Analysis of the data and compilation of the results were carried out according to strict principles of research ethics, including obscuring any identifiable data. All information collected in the study is confidential and handled in accordance with the Personal Data Act (523/1999; current Data Protection Act 1050/2018). There were no physical, mental, chemical or other risks involved in participating in the study. The participants were also informed of the possibility of discussing their participation after the study, which no one requested.

Participant characteristics

The participating healthcare professionals were counselling nurses, all female and aged 27-61 years at the time of the interviews. Their background education was in nursing, and most had acquired further education, such as specialist nurse or occupational nurse training; some also had a special competence in cancer nursing. The counselling nurses had been working at cancer organizations, on average, for 10 years, ranging from less than a year to 22 years. All of them had worked previously in public healthcare, the majority for several years in cancer care. They worked at three different organizations with different catchment areas.

The general managers of regional and disease-specific cancer societies were selected from different parts of Finland to obtain a representative cross-section. Six of the interviewed managers were female and three were male, ranging from 39 to 58 years of age. All except one had a university degree; many had a healthcare education, but some had economics or other varied educational backgrounds.
The managers had been working in their current positions for four years on average, ranging from a few months to eight years; a proportion of them had worked in different positions in cancer organizations prior to their current position. Overall, both the healthcare professionals and the managers were experienced, having worked with cancer patients for approximately 16 years and within cancer organizations for 9 years.

Professional views on the support needs of cancer patients and their close ones

According to the counselling nurses and managers, people diagnosed with cancer and their close ones are generally satisfied with the medical treatment they receive, although patients and close ones do not receive enough psychosocial support, and there are insufficient opportunities for discussion with healthcare professionals in hospitals. According to participants, patients and their close ones need clear information on cancer and its treatments as well as information concerning psychosocial reactions and support needs throughout the cancer trajectory, and they should also be encouraged to share their thoughts and feelings about cancer. Participants also indicated that financial challenges due to cancer have increased. Patients often do not know what kinds of financial benefits they are entitled to. Information about these benefits often comes too late, such as during rehabilitation courses, which occur late in the cancer trajectory and in which only a minority of patients participate. Managers spoke about the importance of healthcare professionals' ability to identify persons in challenging economic situations and of referring them to social workers or to Finland's social insurance institution, Kela, to discuss potential financial support and benefits. The CSF and some organizations under its umbrella also provide limited financial support for cancer patients in the form of a non-recurring financial contribution. This kind of support is currently needed more often than in the past. However, managers emphasized that providing financial support is the government's responsibility, as the CSF has very limited financial resources. Professional and peer support and advice on other services, including appointments with a psychologist or a social worker, were considered important, along with information on rehabilitation services. Those diagnosed with cancer and their close ones often feel the need to meet special professionals, such as social workers and occupational therapists. It can be of the utmost importance to assess the individual needs of each patient and include the patient's close ones in this process. Many people do not seek expert help because they are not aware that it is available. Therefore, participants said that expert services should be better and more clearly integrated into care pathways.

Manager: ". . . the cycle of care should include the consultation and conversation services of psychologists, psychiatrists, occupational therapists, as part of which the mental wellbeing of the individual and their relatives is assessed."

Counselling nurses considered it important to take into account the individual needs of cancer patients, which is not always done. In addition to the cancer diagnosis, other crises may arise, and the person may also need to discuss these. It is fundamentally important to identify the individual needs of each person and provide timely support. To do so, mapping of needs and coordination of support are required.
According to participants, the support of close ones is very important for cancer patients. Further, close ones may be more concerned and need more support than the cancer patients themselves. These kinds of situations can place an extra burden on both patients and their close ones. According to our study, cancer patients can perceive the diagnosis and its consequences differently from their close ones and have different coping strategies. The timing of support needs may also differ between patients and their close ones. For example, the patient may not want to speak about cancer, whereas close ones would like to share their thoughts with others, such as friends.

Manager: "Situations like these, as a relative, when you somehow end up there, when these are developing, the needs of the relative are probably very close to the patient's needs, and these kinds of reactions; very often the relative is panicking more than the patient."

Better marketing of services and collaboration with public healthcare

Many patients and their close ones would have needed CSF services much earlier, according to the participants of this study. Some cancer societies of the CSF do have the resources to serve more clients, but the key problem is that those diagnosed with cancer, their close ones and professionals within public healthcare do not know about the services that the CSF can offer, or they have limited or incorrect ideas about the quality of the services or the qualifications of the service providers at the CSF. It is necessary to note that many people need different forms of support, and public healthcare does not have the resources to provide them in a timely manner. Moreover, in most cases, public healthcare cannot serve close ones, only patients. Although participants advertise CSF services, more targeted marketing should be directed both at patients and their close ones and at public healthcare. To bolster this effort, new marketing channels should be developed. One essential and anticipated change is that professionals within public healthcare would refer patients and their close ones to the CSF more often. However, some counselling nurses said that public healthcare professionals can even consider the competencies of the CSF a threat. Counselling nurses emphasized that actors in the public sector should provide information about these services in both written and verbal forms, and not only after the diagnosis but also throughout the cancer care pathway. By doing so, people would be able to decide for themselves whether and when they need these services.

Manager: "And the information must be given at the right time, and in fact it can be said that it has to be repeated, it's not enough that you give one leaflet at the stage when they learn about having cancer; that's the diagnosis phase. The point is to continue that along the path. And then, some take advantage of this; some need those services, others don't need them."

The role of the managers of cancer organizations is of vital importance in developing new processes and fostering collaboration between the public sector and the CSF. Regarding this collaboration, themes of clarification of roles and improvement of information sharing surfaced in the interviews. Regular meetings and common campaigns were proposed as concrete steps towards these goals. In addition, strengthened collaboration between the CSF and healthcare and social education organizations was proposed to increase awareness of the CSF services during education.
Information technology could also be utilized more in service coordination. Participants indicated that collaboration between the CSF and public healthcare has improved and become closer in recent years, despite geographical differences that can threaten the quality and quantity of this collaboration. Participants strongly expressed their hope that collaboration with public healthcare and the role of the CSF would be further strengthened and clarified throughout the cancer trajectory. Only in this way could the CSF's resources be best directed to those in need of services. They also proposed that CSF managers, in collaboration with public healthcare professionals, should define the roles and positions of CSF professionals throughout the cancer trajectory. Thus, a clear process should be developed that leads to policy-level strategy and steering mechanisms which integrate the third sector into the new structures of social and healthcare services.

Support needs for patients and their close ones throughout the cancer trajectory

Participants identified several phases of the cancer care pathway during which support is very much needed. These different phases are discussed separately.

Support during the diagnosis phase

Any suspicion of cancer is a great shock for most people. During this phase, in which diagnostic procedures can take several weeks, support from public healthcare is limited or even non-existent. The CSF professionals can provide information, advise on and discuss support during that phase.

Counselling nurse: "And then when you think about the phase when you start the tests and they suspect that you might have cancer, you don't have the kind of support network at this point; you're at the testing phase. So, we could give support at that point because waiting is hard."

The CSF can provide psychosocial support and information for both patients and close ones even before diagnosis, during the period when cancer is only suspected. Some people may need only peer-group support, which cancer professionals recommend from the diagnosis phase onwards. In this study, the treatment phase did not receive any kind of prominence; rather, the focus was on targeting CSF services to patients and their close ones at the time of diagnosis.

Support during follow-up and rehabilitation

At the end of cancer treatment, when patients are transitioning to the follow-up and rehabilitation phase, they and their close ones generally expect a return to normal life, including the resumption of their other normal roles. Psychological reactions during this phase can be surprisingly strong, and both counselling nurses and managers spoke a great deal about support services during this phase. If the CSF could support public healthcare particularly at this time, this could also produce financial savings. According to participants, a portion of patients feel abandoned without any services. Treatment visits are usually dramatically reduced to infrequent control visits. At the same time, the psychological reactions of these patients can be strong, and they may have a strong need to discuss their feelings. Some patients and their close ones can be surprised and embarrassed by their psychological reactions, and in these cases, it can be reassuring if a healthcare professional normalizes these reactions.
Counselling nurse: "Patients often say that when you're in care, in that cycle, you don't think about those things so much; you think that you're safe, you trust that someone is controlling the values and knows what is happening. When you're not in that system anymore, then you think, 'oh no, how do I now know what I'm supposed to do, and I feel horrible; I can't trust that I'm healthy now.'"

During the follow-up phase, many patients are also afraid of a cancer recurrence, and questions and contacts with public healthcare can be numerous, which can increase the burden on the system. Here, contact with the CSF services could help to decrease that burden. Counselling nurses could address the questions that patients may have and also provide psychosocial support to help ease the fear of recurrence.

Support at cancer recurrence

According to participants, cancer recurrence is a phase during which patients and their close ones usually need a lot of support. Some people have already been using CSF services, and in such cases, the path to contacting support services is smooth. However, not everyone is aware of these services. Counselling nurses and managers hope that public sector staff would once again inform patients about these services at this stage, especially as this can be a very difficult period for many people, who may receive diagnoses of incurable cancer.

Manager: "Yes, I also think this is a very important matter, that it doesn't have to be professional help, even though I've mentioned psychologists and psychiatrists, but equally that the path of care includes peer support services. Some just need peer support, not a psychologist or psychiatrist or anyone else like that; just that peer-group support."

A recurrence of cancer can, in turn, restart the psychological processing of cancer. For some, it is easier to accept this situation, particularly when they have already processed the cancer diagnosis. However, some people can have very strong psychological reactions, as their psychological processing must start from the beginning. The reactions of close ones can also vary.

Support during palliative and end-of-life care and beyond

According to participants, the palliative and end-of-life period is understandably very difficult both for patients and their close ones, and the way that these two groups deal with this difficult situation can vary greatly. In this situation, the service needs also differ between patients and their close ones, although both need psychosocial support and information.

Counselling nurse: "So, in palliative terminal care, when the person close to you dies, it is also important to have answers, because you have so many questions, like did I do the right thing and did I support them and what about the pain, and why did they end up dying in this way or that way. If the relationship had been discussed with someone, you would know that it is possible to call or visit them and ask because the threshold is then lower."

During end-of-life care and after the patient's death, close ones can feel that they have been left alone, as they do not have a care connection to the patient's healthcare unit and support is often very limited. In addition, psychosocial reactions can be very difficult when everything is over. Here, again, participants emphasized the professional and peer support that close ones can receive from the CSF.
The key results of this study are summarized in Figure 1, showing support services throughout the key phases of the care pathway. Participants emphasized that the CSF can support the public sector's work throughout the whole patient pathway, according to patients' and close ones' individual needs, for example through psychosocial support, peer support, counselling on benefits and treatment, rehabilitation services, guidance as to the various support services and grief support.

A key message from the CSF: placing the needs of cancer patients and their close ones at the centre

Participants in this study were experts from cancer societies in Finland with long experience of working with patients, their close ones and their networks. According to the results of this study, these professionals consider cancer patients and their close ones to be generally satisfied with the medical treatment for cancer in Finland. However, they emphasize the need for opportunities to reflect on their thoughts and for psychosocial support, especially relating to the thoughts and feelings caused by the illness. These results support earlier findings (Evertsen & Wolkenstein, 2010; Molassiotis et al., 2011; Wootten et al., 2014). According to the informants, patients primarily need psychosocial support, discussion with professionals, peer support and information on the illness and rehabilitation. Close ones often need information about the disease and its treatment, psychosocial support and discussion with professionals, as previously demonstrated (e.g., Hodgkinson et al., 2007; Lehto et al., 2015; Wootten et al., 2014). Participants emphasized the variety of support needs and how they differ depending on the phase of the cancer trajectory, taking individual needs into account. Patients need a great deal of support once they are diagnosed, during the follow-up and rehabilitation phase, if cancer returns and during palliative care. Close ones also need support during these phases, but they additionally benefit from support after the patient's death, when psychosocial reactions can be very difficult. Close ones do not have a care relationship with healthcare professionals and are often left alone. However, it was surprising that the treatment phase did not receive any prominence in relation to the support needed, either for patients or for close ones. It is obvious that during the treatment phase patients have close contact with healthcare professionals, whereas during follow-up only a few visits to the clinic take place. This may cause unmet support needs, and here CSF professionals could provide help and support. Our results indicate that psychosocial needs are insufficiently addressed in clinical cancer care and that there is a need for collaboration with other service providers, such as cancer associations. The need for information is constant throughout the illness trajectory. The results also show that knowledge of the CSF services is limited. It is especially important that the CSF and public healthcare professionals inform patients about these services at the time of diagnosis, when the CSF may provide opportunities for discussion and psychosocial support both for those diagnosed and for their close ones. According to participants, many of the needs of patients and their close ones identified in previous studies are addressed as part of the services provided by the CSF.
Cancer patients and their close ones have received help from a variety of interventions, such as psychotherapy and peer support (Landry-Dattée et al., 2016; Newby et al., 2015), online peer support services (Hartzler et al., 2016), information and support based on empowering interactions (Lundberg et al., 2013) and patient care-related information (Brazil et al., 2014). Some benefit from a variety of technical and social media tools (Yli-Uotila, 2017), which the CSF uses in many ways. Patients could benefit even from as small an amount of support as one phone call a week after diagnosis (Salonen, 2011). However, while information technology can be utilized in developing activities, increasing digitization could run counter to the need expressed by patients and family to meet a professional face to face, as our participants stated. Participants reflected on the role of public healthcare in Finland. They stressed the important role of public healthcare professionals, who should identify those instances when patients need more support and information and refer them to the CSF and other special professionals, such as social workers, occupational therapists or psychologists. Participants in this study also emphasized the non-simultaneous needs of patients and their close ones as a critical factor in the cancer care pathway, and that it is of critical importance to provide support in a timely fashion. In sustainable cancer care, the focus has to be on what matters most to patients, a principle that should guide the entire cancer care process. One key area for improvement is the marketing of CSF services to healthcare professionals in the public sector, promoting the development of personalized support and, while doing so, better utilizing information technology. In line with Wait et al. (2017), the needs of cancer patients and their close ones lie at the heart of the support services throughout the different stages of the care pathway depicted in Figure 1. Some cancer patients utilize a wide range of services from various specialists (Evans et al., 2015). However, as our results show, unclear operating practices and poor availability of specialist staff are barriers to support for all patients and close ones (e.g., Absolom et al., 2011). According to participants, the support of close ones is very important for cancer patients, with those patients who receive support from their close ones needing less psychosocial support from professionals, as Heins et al. (2016) also noted. Although close ones can feel worse than the patients, they can be deprived of the help they need because the focus of caring is on the patients (Mosher et al., 2013). According to our study, cancer patients can perceive the diagnosis and its consequences differently from their close ones and have different coping strategies. The timing of needs is non-simultaneous, and support must therefore be tailored accordingly. For this reason, the CSF should take a more active role in supporting the family, as our participants described. Our participants also talked about the serious financial consequences that cancer can have, as told to them in many discussions with unemployed people, pensioners, single parents and entrepreneurs. Many people do not know what financial benefits they are entitled to, even though some of them have difficulties in paying for food and medicine.
According to our participants, patients should receive better guidance on financial support, for example from social workers, a conclusion that has also been drawn in research into other long-term illnesses (Aaltonen, 2017; Cahallan & Brintzenho-Feszoc, 2015). As a solution, participants suggested that healthcare professionals adopt a more holistic view of patients (Evertsen & Wolkenstein, 2010) and refer them more often to special professionals.

Looking to the future: third-sector organizations to play a stronger role in the cancer care pathway

In re-organizing services, a stronger role for non-governmental organizations, such as the CSF, has been proposed in Finland (Brax, 2018). Considering the results of this study, we suggest that one way to do this is to have different stakeholders participate in a thorough discussion on health policy. It is contradictory that in a welfare state such as Finland, people with cancer are left without support and information, while organizations such as the CSF exist and can provide exactly these services. Although the role of the CSF as a major private donor to cancer research has been recognized for years, professionals in the public sector remain somewhat sceptical about cooperation with this kind of third-sector organization. Professionals within the CSF speculated that one reason for this is that their support services are not well known, as shown by Pavolini and Spina (2015). Professionals within the public sector should see the CSF as a co-operative facilitator of their own healthcare work. The identified challenges to this are communication and support during the transitions from one phase of the disease pathway to another and from one service provider to another. As emphasized in the international policy report cited in Wait et al. (2017), attention should be focused on continuity of care and on other services. Patients and their representatives, as well as patient organizations, should have a role in national-level planning, including the planning of care pathways. The European Cancer Organization suggests improving innovation in cancer care by using "a whole-patient approach". This means that the strategy is guided by the patient's needs and patient-relevant outcomes. Research should be carried out in co-operation with patient organizations to gain a better understanding of which clinical and psychosocial needs are unmet. Both the need for continuity of care and co-operation between public healthcare and cancer societies were also clearly expressed in our study results. Transitions in the care pathway are also a challenge elsewhere in the world. Therefore, there has been a strong emphasis on the use of care plans (Vanhaecht et al., 2010) to support care management in cancer patients (Baker et al., 2008; Mayer et al., 2014; Michael et al., 2013). Peers and patient advocates could also help in care transitions. The CSF trains its peers, but this is not always sufficient to convince professionals in the public sector of their qualifications. According to Jones and Pietilä (2018), peer supporters know the requirements of their tasks. Cancer patient advocacy aims to strengthen the roles of cancer patients and caregivers together with third-sector organizations and to provide information about the gaps in the system of care (Schear et al., 2015). Our results suggest a need to develop a care pathway, and they are similar to those of Evans et al. (2015), who reported that integrated care models have demonstrated a range of positive outcomes.
The delivery of integrated care requires coordination and collaboration across various organizations, care settings and professionals to ensure that patients receive the right care in the right place at the right time, for example through case-managed multidisciplinary team care, organized provider networks and financial incentives. This is in line with our model of the care pathway, in which the patient and close ones are treated individually, in a multi-professional manner and as economically as possible. Across European healthcare systems, it is estimated that 20% of spending is currently wasted on ineffective interventions. Ultimately, improved efficiency will contribute to more equal access to, and affordability of, healthcare. According to our results, along with increased financial challenges, the need for services also increases, especially among the elderly, at least some of whose needs could be fulfilled by non-profit organizations and volunteers.

Strengths and limitations

A strength of this study is that, to date, very little research has been carried out on the role of non-profit organizations, such as the CSF, in supporting patients, and there is even less research concerning support for the patients' close ones. Furthermore, our participants, who were all employed by the CSF, had a great deal of experience with cancer patients and their close ones and can thus be considered well aware of their needs and concerns. Particular emphasis was given to the regional spread of participants, who were selected from all over Finland. The sample size for this study is considered adequate, and an appropriate qualitative methodology was applied (Gray et al., 2017). When evaluating the development of a care pathway, it is important to look at it from multiple angles and to make estimates and recommendations using versatile information. This study focused on the views of counselling nurses and managers at the CSF, whose views are considered to be patient-centric because they take into account the whole disease trajectory rather than focusing only on the treatment period, which is understandably the main focus of public healthcare. The qualitative data obtained from the semi-structured group interviews can be considered rich and interesting, and allowed us to identify important key concepts related to the support needs of cancer patients and their close ones. However, these topics need to be further investigated with other methods, including larger questionnaire studies, and there is a need to expand the research to other key informants, including patients, close ones and professionals within public healthcare. A care pathway needs to be built in participation with patients and close ones, and these results provide a basis for further collaborative research, which is under way to directly examine the perspectives of these groups as well as of healthcare professionals within the public sector.

Conclusion and implications for the future

According to the professionals in cancer associations, cancer patients and their close ones need more information and support to deal with the disease, including more referrals to specialists, such as those who provide psychosocial and financial support. This should happen right from the beginning of the care pathway. The greatest need for improvement is during the transitions between different stages of the care pathway, at transitions from one service to another and in directing the right people to the right services.
The CSF has an opportunity to provide professional and peer support to patients and their close ones. Such support services are often not sufficiently resourced by the public sector. Participants expressed a willingness to co-operate, and this co-operation should increase both in general and during specific phases of the care pathway. The results of this study could initiate a dialogue on how we can better identify the needs of patients and their close ones, reduce duplication of work, rationalize care pathways, clarify the role of the third sector and, thus, also generate financial savings. More research is needed on the financial impact of cancer on the patient, on streamlining the care pathway and on the role of non-profit organizations working with the public sector.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Funding

This work was not supported by any organisation.

Notes on contributors

Heli Tiirola is a university lecturer in social work at the University of Eastern Finland with a special interest in the meaning of disease and wellbeing and qualitative health research.

Veli-Matti Poutanen is a university lecturer in social policy at the University of Eastern Finland with a special interest in social and healthcare services and quantitative methodologies.

Riitta Vornanen is a professor in social work at the University of Eastern Finland with a special interest in social services, child and family research and interorganizational collaboration.

Liisa Pylkkänen is an adjunct professor in the Department of Oncology at the University of Turku with a special interest in clinical cancer research and cancer support services.
Learning Introductory Biology: Students' Concept-Building Approaches Predict Transfer on Biology Exams

Previous studies have found that students' concept-building approaches, identified a priori with a cognitive psychology laboratory task, are associated with student exam performance in chemistry classes. Abstraction learners (those who extract the principles underlying related examples) performed better than exemplar learners (those who focus on memorizing the training exemplars and responses) on transfer exam questions but not retention questions, after accounting for general ability. We extended these findings to introductory biology courses in which active-learning techniques were used to try to foster deep conceptual learning. Exams were constructed to contain both transfer and retention questions. Abstraction learners demonstrated better performance than exemplar learners on the transfer questions but not on the retention questions. These results were not moderated by indices of crystallized or fluid intelligence. Our central interpretation is that students identified as abstraction learners appear to construct a deep understanding of the concepts (presumably based on abstract underpinnings), thereby enabling them to apply and generalize the concepts to scenarios and instantiations not seen during instruction (transfer questions). By contrast, other students appear to base their representations on memorized instructed examples, leading to good performance on retention questions but not transfer questions.

Examples of Retention and Transfer Questions

Classification of questions as retention or transfer depended on the question's relation to the particular content that was presented in class. As an illustrative example of the type of active learning in the classroom and of how exam questions were categorized in relation to context, here we briefly describe the components of the lesson on sexual selection (adapted from Kalinowski et al., 2013), followed by the corresponding transfer and retention questions that appeared on the exam. In the class lesson, the instructor introduced the hypothesis that male peacocks have elaborate trains because females prefer to mate with males that have elaborate trains. Students were shown two figures: one showing a positive correlation between the number of eyespots on the train and the number of matings, and the other showing a positive correlation between the brightness of eyespots and the number of matings. In small groups, students discussed whether these data definitely supported the hypothesis and submitted a response (via an electronic polling system). The instructor then led a discussion about causation and correlation, illustrating that correlative data cannot fully support a hypothesis about causation. Students were asked to design a study that would be able to illustrate causation (and responded via the electronic system). Afterwards, the instructor told the students about an experimental design that a researcher used, and students had to sketch results that would support the hypothesis (using a figure with given axes). Then students were shown the actual data and asked to compare the data to their predictions and decide whether the data supported or refuted the hypothesis. After further discussion, the instructor reminded the students of the three principles of natural selection discussed in a prior class and showed them how sexual selection is a special case of natural selection, because all three principles apply to the example of peacock trains.
Corresponding Retention Question

The following question required recalling a principle that was explicitly stated at the end of the classroom lesson described above, and accordingly only requires retention of class content.

Which of the following statements about sexual selection and natural selection is true?
A. Sexual selection is a special case of natural selection.
B. Sexual selection and natural selection always oppose each other.
C. Sexual selection requires differential reproduction, whereas natural selection requires differential survival.
D. Sexual selection affects reproductive organs and natural selection affects everything else.

Corresponding Transfer Questions

1. To correctly answer the next question, students must evaluate data displayed in a graph and interpret those data to draw a conclusion about a hypothesis. These processes were practiced in the classroom lesson, but the question focuses on data reflecting the more general principle (natural selection) rather than the sexual-selection data analyzed in the classroom lesson, the contextual elements have changed (more than one species interacting, different species present, changing context for the target species), and the question is different from that analyzed in class. Thus, transfer from the lesson is required.

The invasive cane toad was introduced to Australia in the 1930s. The toads are extremely poisonous to red-bellied black snakes, and there was some concern that the snake population might be decimated. However, the snakes and toads seem to be co-existing. Imagine a biologist began investigating the situation by surveying red-bellied black snakes in areas with and without cane toads and found one major difference between the two populations. The results are depicted in the hypothetical graph below. Which of the following scenarios is most likely given the data?
A. Where cane toads are present, red-bellied black snakes evolved a tolerance to cane toad toxins that allowed them to survive and reproduce.
B. Where cane toads are present, red-bellied black snakes with large heads died while those with small heads survived and reproduced.
C. The presence of cane toads caused mutations that resulted in smaller heads.
D. The red-bellied black snakes mutated in response to the cane toads so that they could tolerate cane toad toxins.

2. This question requires analyzing data and integrating multiple concepts: (a) the trait increases lifetime reproduction even though it decreases survival, and (b) adaptations should increase in frequency within a population. In the classroom lesson, students discussed a similar idea but with a different species (peacocks) and a focus on the phenotype. The question requires transfer of the class lesson to the integration of multiple concepts, a different organismal context, and a focus at the genotype level.

Orange sulfur butterflies (Colias eurytheme) typically live up to 4 weeks as adults, and females lay hundreds of eggs. Imagine a mutation that causes females to produce eggs more quickly. However, producing so many eggs requires a lot of resources, and the mutation also decreases lifespan. A researcher measured the number of eggs produced each week by females with the original and the mutated allele.

Average # of eggs produced per week:
Week | Original allele | Mutated allele
1    | 75              | 100
2    | 50              | 75
3    | 50              | 75
4    | 25              | 0 (dead)

Which of the following is the best prediction given this data?
A. The mutated allele will decrease in frequency because it reduces fitness.
B. The mutated allele will decrease in frequency because it reduces survival.
C. The mutated allele will increase in frequency because it will be favored by mates.
D. The mutated allele will increase in frequency because it increases lifetime reproduction.

Protein-Structure Lesson with Retention Questions

Retention questions could also be more "process-oriented", such that the answer depended on analyzing or interpreting information given in the question (as did the above transfer questions). In the lesson, students learned about protein structure as well as how mutations can change structure and function, in the context of a scenario about Type 1 diabetes. The particular mutation led to a single AA change in the insulin receptor that resulted in poor binding of insulin to its receptor. To understand this, students became familiar with AA structure by examining individual AAs, including identifying the N and C termini and which part of the AA is most important to focus on when it is part of a polypeptide. Students drew tripeptides and practiced identifying the alpha carbons and the peptide bonds, but had not worked with or seen the particular dipeptide on the exam during class. The following are questions classified as retention items because the students directly practiced identifying the carbons and bonds in class.

Use the following image to answer questions 17-19. Note that some of the carbons are numbered.

18. Look at the numbered carbons. Which is/are the alpha carbon(s)?
a. 1
b. 1 and 2
c. 1 and 3
d. 1, 2, 3, and 4

19. Which of the following is a peptide bond(s)?
a. The bond to the left of carbon 1
b. The bond between carbons 1 and 2
c. The bond to the right of carbon 2
d. The bond to the left of carbon 4
Impact of succinate derivatives on the oxidant/antioxidant balance system in the rat pancreas and liver

Succinate derivatives are widely used as a basis for the development of metabolite-type drugs possessing various pharmacological effects. The start of drug production and medical use depends directly on their safety for health. Xenobiotic safety assessment involves the identification of changes that may be associated with mechanisms of adaptation or damage in the organs and systems of the organism.

Aim. To determine the nature of the succinate derivatives' impact on the indices of lipid peroxidation and the antioxidant system in the pancreas and liver of rats.

Materials and methods. The succinate derivatives were administered to rats orally 30 times. The biological material studied was the homogenate of pancreas and liver. We determined the content of dienic conjugates, lipid hydroperoxides and thiobarbituric acid reactive substances. We also measured the activity of catalase, superoxide dismutase, glutathione peroxidase and glutathione S-transferases.

Results and discussion. We found that the influence of succinate derivatives at sub-toxic doses was characterized by an increased intensity of lipid peroxidation in the pancreas, interconnected with changes in the activity of these reactions in the liver. These effects were determined by a decrease in antioxidant enzyme activity. The compounds' impacts differed in the degree of disturbance of the organs' oxidative-antioxidant homeostasis. We registered that the influence of relatively low doses of the succinate derivatives caused less severe and multidirectional changes in the lipid peroxidation indices and signs of enhanced antioxidant protection in the studied organs.

Conclusions. The succinate derivatives cause changes in the activity of free radical oxidation and the antioxidant system in the pancreas and liver and thus can affect the state of the metabolic processes in the organism.

INTRODUCTION

An important strategic direction of modern pharmacology lies in the development of drugs based on synthetic analogues of natural metabolites and their derivatives, capable of reducing disorders in certain links of metabolism (without side effects) via stimulation of adaptation mechanisms. The succinate derivatives (SDs) possess different kinds of biological activity, such as antioxidant, detoxifying, membrane-stabilizing, energy- and immune-stimulating activities, which play a significant part in the mechanism of their regulatory impact on the functional state of the liver, pancreas, adrenal glands, cardiovascular system, hemostasis, etc. Today, succinic acid and its metabolites often form the basis of renowned drugs with anti-hypoxic, anti-diabetic, anti-inflammatory, cardio- and hepatoprotective actions [1-4]. Expressed anti-diabetic properties were identified in β-phenylethylamide of 2-oxysuccinanyl acid (β-PhEA-OSAA), synthesized by the State Institution "V. Danilevsky Institute of Endocrine Pathology Problems of National Academy of Medical Sciences of Ukraine". The mechanism of its metabolic action is associated with stimulation of energy metabolism and suppression of oxidative stress in mitochondria, as well as with reduction of non-enzymatic glycosylation [5].
The first phase of β-PhEA-OSAA biotransformation yields its two metabolites, 2-hydroxyphenylsuccinamide (2-HPhSA) and β-phenylethylsuccinamide (β-PhESA), which also belong to the SDs and may affect the specific effects and toxic potential of their parent compound. β-PhEA-OSAA is the active component of an antidiabetic medicine whose effectiveness has been established in various diabetes models and confirmed by clinical tests. One of the main requirements for new promising drugs covers their safety for health, which presupposes toxicological expertise regarding their ability to damage separate organs and systems of the organism. One of the initial pathobiochemical mechanisms of the impact of all xenobiotics is believed to lie in disorders of pro-/antioxidant homeostasis that lead to changes in various links of metabolism [6]. Biochemical processes are most active in the liver, which is involved in the detoxification of xenobiotics, including most medicines. However, drug biotransformation forms toxic metabolites capable of disrupting some functions of the liver, thereby affecting the functional state of many organs and systems. The normal as well as the pathological liver has a close functional connection with the pancreas, and changes in the state of these central organs of the digestive system can essentially affect the activity of metabolism in the organism. Damage to the liver and pancreas, regardless of its genesis, may be initiated by oxidative stress due to the generation of active forms of oxygen and nitrogen with a decrease in the level of pro-inflammatory cytokines. On the other hand, it is known that the pancreas is more susceptible to intoxication than the liver, as it has low activity of antioxidant enzymes and glutathione-synthetic processes, which contributes to an intensive advancement of oxidative stress in this organ [7]. The interconnection between changes of the prooxidant-antioxidant system in the liver and in the pancreas under the influence of succinic acid derivatives has not been elucidated. The study of this issue is useful for delineating the mechanisms of the biological action and safety of all such compounds, and it is especially important for the promising antidiabetic medicines developed on their base.

The aim is to investigate the succinate derivatives' effects on the activity of the antioxidant enzymes and the intensity of lipid peroxidation reactions in pancreas and liver tissue under sub-acute experimental conditions.

MATERIALS AND METHODS

The sub-acute experiment was conducted on 48 outbred white male rats with a body weight of 190-210 g. The study complied with the "General Ethical Principles of Animal Experiments" (Ukraine, 2001). β-PhEA-OSAA was administered orally to rats 30 times at doses of 100 mg/kg and 25 mg/kg (1/100 and 1/400 of the DL50, respectively). 2-HPhSA was applied at doses of 68 mg/kg and 17 mg/kg, and β-PhESA was dosed at 72 mg/kg and 18 mg/kg. These doses are equimolar to the above-mentioned amounts of their parent compound. The control and each test group consisted of 8 animals. After the end of the experiment the rats were decapitated under light ether anesthesia, and their liver and pancreas homogenates were obtained for the study of biochemical parameters. Statistical analysis of the obtained results was conducted with the ANOVA system. Normality of the distribution of a trait in the sequences was determined using the Shapiro-Wilk (W) criterion. Pairwise comparison of experimental groups with the control was performed using Student's t-criterion.
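As an illustration of this statistical workflow, the following minimal Python sketch (an assumption on our part; the original analysis was run in a dedicated statistics package) applies the Shapiro-Wilk normality check and Student's t-test to a control and a treated group:

import numpy as np
from scipy import stats

# Hypothetical TBARS measurements (arbitrary units) for control and treated groups of 8 rats each
control = np.array([3.1, 2.9, 3.4, 3.0, 3.2, 2.8, 3.3, 3.1])
treated = np.array([4.0, 3.8, 4.3, 3.9, 4.1, 3.7, 4.2, 4.0])

# Shapiro-Wilk (W) criterion for normality of each group
w_c, p_norm_c = stats.shapiro(control)
w_t, p_norm_t = stats.shapiro(treated)

# Pairwise comparison of the treated group with the control using Student's t-test
t_stat, p_value = stats.ttest_ind(treated, control)

print(f"normality p-values: {p_norm_c:.3f}, {p_norm_t:.3f}")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # p <= 0.05 treated as reliable, 0.05 < p <= 0.1 as a tendency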
The results are presented as the arithmetic mean and its probable statistical error (X ± Sx). We considered data reliable at p ≤ 0.05, and close to statistically significant at 0.05 < p ≤ 0.1.

RESULTS AND DISCUSSION

We found that under sub-acute SD impact the intensity of oxidative processes changed in the pancreas and liver tissues of rats in parallel. The application of β-PhEA-OSAA at a dose of 100 mg/kg, as well as the action of its two metabolites in equimolar amounts, intensified LPO in the pancreas tissue. β-PhESA (72 mg/kg) and 2-HPhSA (68 mg/kg) caused an increase in the content of TBARS by 31 % and 44 % (p ≤ 0.05) and, to a lesser extent, of LHP by 17 % and 10 % (0.05 < p ≤ 0.1) in the organ homogenate. Our study of the state of the antioxidant system (AOS) in the pancreas tissue registered a decrease in the activity of catalase under 2-HPhSA impact (p ≤ 0.05) and of SOD under β-PhESA influence (0.05 < p ≤ 0.1). The metabolites of β-PhEA-OSAA to a certain extent shaped the manifestation of this compound's pro-oxidant effect, characterized by a significant increase of TBARS content by 22 % and a reduction of SOD activity in the pancreas homogenate (Table 1). Under the same conditions of β-PhESA and 2-HPhSA exposure, the changes of the pro-oxidant status in the liver were manifested as a tendency towards an increase in the content of TBARS (0.05 < p ≤ 0.1) and were less expressed than in the pancreas. β-PhESA in the liver tissue also caused a significant increase in the activity of catalase, but GPx, which is close in functional purpose to this enzyme, changed its activity in the opposite direction under the impact of that compound. β-PhEA-OSAA, on the contrary, slowed down the rate of the primary and intermediate reactions of LPO, as evidenced by a significant (23 %) reduction in the content of DC and a reduction of more than one third in the TBARS content in the organ homogenate. These changes may be connected to the expenditure of the resources of the antioxidant system (AOS): reduced activity of the anti-peroxide enzymes (catalase and GPx) and also of GST, which is responsible for the second phase of detoxification of metabolites, including LPO products (Table 1). The deceleration of free radical oxidation is, in its turn, reflected in the state of the liver's antioxidant protection [15].

Table 1. Parameters of lipoperoxidation processes and antioxidant system state in rat pancreas and liver under sub-acute impact of β-phenylethylamide of 2-oxysuccinanyl acid.

With a four-fold decrease of the administered doses of SDs, an increased rate of generation of LPO products in the pancreas and liver tissues was registered only under the influence of 2-HPhSA at a dose of 17 mg/kg (Table 2). The content of TBARS in the pancreas homogenate increased by 23 % (p < 0.05), but this effect was half as pronounced as the same-direction changes after the application of that compound at the higher dose. These changes were accompanied by a 1.5-fold augmentation of SOD activity in the pancreas homogenate (p < 0.05), which is a sign of the increased resistance of the organ against reactive oxygen and nitrogen species. Under the impact of 2-HPhSA at the small dose, we registered in the liver tissue a rising tendency in the content of TBARS and LHP (0.05 < p ≤ 0.1), but also a substantial boosting of the anti-peroxide defense of the organ due to a compensatory stimulation of GPx and catalase activity by 1.6 and 1.4 times (p < 0.05), respectively.
Under the influence of β-PhESA at a dose of 18 mg/kg, on the contrary, we recorded a reduction of LHP content in the liver homogenate (0.05 < p < 0.1). The application of β-PhEA-OSAA (25 mg/kg) also proved to slow down the LPO reactions, as evidenced by a 26 % reduction of the DC content in the liver homogenate (0.05 < p < 0.1) and a 15 % diminution of the LHP content in the pancreas homogenate (0.05 < p < 0.1). In the rat liver, after the introduction of β-PhEA-OSAA we noted a significant decrease in SOD activity, which perhaps serves as an adaptive reaction linked to a lessened need for dismutation of an excessive amount of superoxide anion radicals. However, GST activity in the liver homogenate increased, and this change shows the activation of phase II of detoxification (Table 2). The toxicological study of the SDs' effects by the criteria of shifts in oxidative and anti-oxidative homeostasis enabled us to identify the pancreas as their primary target organ. At sub-toxic doses these compounds proved to stimulate the LPO processes through decreasing the resistance of an organ whose cells are deprived of effective enzymatic protection.
Examination of air pollutants and their risk for human health in urban and suburban environments for two Romanian cities: Brasov and Iasi

Detecting the spatial differences of atmospheric pollutants between urban and suburban areas is important for observing their effects on regional air quality, climate, and human health. This study is focused on the evolution of PM2.5, PM10, NOx and SO2 concentrations and meteorological parameters from 2010 to 2022 at urban and suburban sites in two Romanian cities: Brasov and Iasi. The daily patterns of most pollutants in urban and suburban areas are strongly linked to land-traffic emissions. Seasonal observation of the studied air pollutants displays visibly decreased concentrations in the warm period and increased concentrations in the cold period. Significantly higher PM10 concentrations (25 % in Brasov, 28 % in Iasi) were found in the urban areas, probably caused by enhanced vehicular emissions over these areas induced by urban planning and mobility policies. The average relative risk caused by PM10 for all-cause mortality in the urban region was 1.021 (±0.004) in Brasov and significantly higher, 1.030 (±0.005), in Iasi. In the suburban regions this risk was lower: 1.014 (±0.006) in Brasov (33 % lower) and 1.021 (±0.003) in Iasi (30 % lower). The main objective of this research was to identify the differences in air pollutants and meteorological parameters between the urban and suburban regions of the studied cities.

Introduction

Atmospheric pollution is among the foremost environmental issues that need to be mitigated and resolved in our continuously developing century. Airborne particles are well-studied air pollutants, and several negative effects are associated with them, such as increased morbidity and mortality [1-5]. Similar to PM, exposure to NOx and SO2 can induce several cardiovascular and lung diseases, premature deaths, and carcinogenic effects [6-9]. In addition, these atmospheric pollutants may also have a negative effect on ecosystems and agricultural productivity [1,10]. Furthermore, air pollution damages built heritage and is related to climate change [11-13]. The increased level of pollutant concentrations is related to the elevated urbanization level in Romania, which increased from 34.21 % to 54.33 % in the last 60 years (1960-2020) [14]. The swift growth in urbanization has led to a speedy expansion of urban regions [15]. Atmospheric factors also have a notable impact on the fluctuation of air contaminant levels in both urban and suburban areas [16,17]. The rapid progress of urbanization has resulted in a speedy growth of urban infrastructure, and in recent years the massive consumption of fossil fuels has resulted in a fast increase in anthropogenic aerosol concentrations [15]. Especially in the cold period, when fog is frequently present in many cities, particulate matter concentrations are very high [3,18]. The lifting condensation level also influences the particulate matter distribution in the lower levels of the troposphere [3,15]. The main objective of the present study is to highlight the differences in air pollutants and meteorological parameters between the urban and suburban environments.
Sampling site

Of the studied cities, Brasov is located at an average altitude of 625 m and is situated in the Bârsei depression, in the curvature of the Carpathians. The climate of Brasov is a transitional temperate continental climate with some oceanic influences [19]. Iasi, on the other hand, is a city in eastern Romania at an average altitude of 60 m, close to the border with Moldova, with a pronounced continental climate [20]. The geographical location of the two studied cities is presented in Fig. 1.

Fig. 1. Sampling site location on the Romanian map.

Monitoring network and data availability

Daily data of PM2.5, PM10, NOx and SO2 from 2010 to 2022 were analyzed, revalidated, and used in this study. Based on the criteria included in the European Directive [21], the monitoring sites used in this work are classified as urban, suburban and rural. The daily average concentration was calculated from the hourly concentrations. The studied monitoring station data are centralized in Table 1. The availability of the monitoring station data reached 75 % in all cases and therefore allowed averaging. For each city only one suburban monitoring station was available. In the presentation of the results, the urban and suburban background environments will be referred to as U and SU, respectively.

Monitoring techniques and data quality

Air quality measurements performed in Romania are in concordance with the European directives [21]. The studied parameters were determined according to the following reference methods: the gravimetric method for PM2.5 and PM10 concentrations [22]; NOx by chemiluminescence [23]; SO2 by ultraviolet fluorescence [24]. The data were received from the National Air Quality Monitoring Network [25].

The normality test was first carried out using the Kolmogorov-Smirnov test in SPSS Statistics; the two-sample t-test was used for normally distributed data and the Mann-Whitney U test for non-normally distributed data in order to find the differences between the urban and suburban areas.

Among the meteorological parameters, daily air temperature, precipitation quantity, relative humidity and the lifting condensation level were followed up. The monthly average meteorological parameters were calculated from the daily data, except for the precipitation quantity, where the summed monthly values were used. The lifting condensation level (LCL) was determined based on the following equations (Equations (1) and (2)):

T_d = T - (100 - RH) / 5    (1)

LCL = z_0 + 125 (T - T_d)    (2)

where: LCL - lifting condensation level (m), T - air temperature (°C), RH - relative humidity (%), z_0 - altitude (m), T_d - dew point temperature (°C) [26,27].

Health risk assessment methodology for the short-term effect of PM10

To evaluate short-term exposure to PM10, the relative risk (RR) for all-cause mortality established by Ostro, Equation (3), was used [28]. The relative risk for all-cause death was calculated when the PM10 concentration was higher than the background level (10 μg m-3). A risk function coefficient β of 0.0008 was used (95 % CI: 0.0006-0.0010):

RR = exp[β (X_0 - X)]    (3)

where: X represents the background PM10 concentration (10 μg m-3), X_0 the yearly mean PM10 concentration (μg m-3), and β the risk function coefficient.

Health risk assessment methodology for the short-term effect of PM2.5

Equation (4) was used to determine the relative risk of lung cancer and cardiopulmonary disease deaths in people over 30 [28].
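A short Python sketch of the LCL computation in Equations (1) and (2) follows; the Lawrence dew-point approximation and the 125 m per °C rule are assumed here, so treat it as an illustrative reading of [26,27] rather than the authors' exact code:

def lcl_height(temp_c, rh_percent, z0_m):
    """Approximate lifting condensation level (m above sea level)."""
    # Equation (1): dew point via the Lawrence approximation (reasonable for RH > 50 %)
    t_dew = temp_c - (100.0 - rh_percent) / 5.0
    # Equation (2): roughly 125 m of lift per degree of dew-point depression
    return z0_m + 125.0 * (temp_c - t_dew)

# Example with the Brasov altitude given above (625 m), on a 20 °C day at 60 % RH
print(lcl_height(20.0, 60.0, 625.0))  # -> 1625.0 m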
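Likewise, the relative-risk calculations can be sketched as follows. Equation (3) uses the coefficient quoted above; for Equation (4), which is not reproduced in the text, the power-law form from Ostro's methodology is assumed:

import math

BETA_PM10 = 0.0008  # risk function coefficient for all-cause mortality (95 % CI: 0.0006-0.0010)

def rr_pm10(annual_mean, background=10.0, beta=BETA_PM10):
    # Equation (3): RR = exp[beta * (X_0 - X)]
    return math.exp(beta * (annual_mean - background))

def rr_pm25(annual_mean, beta, background=10.0):
    # Equation (4), assumed form RR = ((X_0 + 1) / (X + 1)) ** beta (Ostro's methodology)
    return ((annual_mean + 1.0) / (background + 1.0)) ** beta

# A hypothetical urban annual PM10 mean of ~36 μg m-3 gives RR = exp(0.0008 * 26) ≈ 1.021,
# consistent with the Brasov urban value reported in the abstract.
print(round(rr_pm10(36.0), 3))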
Descriptive statistics

To represent the differences between urban and suburban air pollution and meteorological indicators, the average values are recorded in Table 2. The urban environment is the dominant source of PM2.5, PM10, NOx and SO2, mostly due to vehicular emissions, and the differences between the urban and suburban regions are obvious. In Brasov and Iasi, the PM10 concentration measured in the urban environment was higher by 32% and 41%, respectively, compared to the suburban level. Only one PM2.5 measurement station was available in each city, so urban-suburban differences could not be assessed for PM2.5; however, the difference between the two cities is significant, with the PM2.5 concentration 27% higher in Iasi than in Brasov. In Brasov, a 7.5% difference in SO2 concentrations was observed between the two locations, with the higher value in the urban region and the lower one in the suburban region. The most significant differences were found for the NOx concentration, which was 4.4 (Brasov) and 2.8 (Iasi) times higher in the urban areas (Table 2).

For the studied meteorological parameters, only small differences were detected: the air temperature was higher in the urban areas, by 0.6-0.8 °C on average in the multiannual yearly means. These differences are related to the urban heat island effect caused by the dense concrete built environment. The average relative humidity was higher in the suburban area of Brasov, by 17.31% compared to the urban area; in Iasi the opposite trend was observed.

Normality test

According to the Kolmogorov-Smirnov normality test (significance level below 0.05), the following parameters are not normally distributed: relative humidity in both cities, SO2 in the Brasov suburban and Iasi urban areas, and NOx and PM10 in the Iasi urban and suburban areas (Table 3).

Mann-Whitney U test and two-sample t-test

According to the Mann-Whitney U tests and two-sample t-tests, differences between the urban and suburban areas were quantitatively detected in both cities for the following parameters: NOx, PM10, SO2, RH, and air temperature. In Iasi, the precipitation quantity did not vary significantly between the urban and suburban regions, whereas in Brasov the precipitation also showed a significant difference (Table 4).

Temporal variation of the air pollutants

In Fig. 2, the multiannual air pollutant concentrations are plotted. PM2.5 shows a slowly decreasing trend in Brasov and a slowly increasing trend in Iasi. For PM10, the maximum concentrations were measured in 2012 and in 2018, and the concentration in the Iasi suburban area is close to the Brasov urban level. According to the WHO guideline, the acceptable annual limit for PM10 is set at 20 μg m−3; except for the Brasov suburban region, this limit was exceeded every year in Iasi and in the urban region of Brasov.
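The normality-then-test decision described above is straightforward to script. The sketch below is illustrative only: the synthetic lognormal series stand in for real station data, and the z-scored Kolmogorov-Smirnov call is a simplified stand-in for the SPSS test (estimating the parameters from the data biases the K-S p-value, which a production analysis should address, e.g. with a Lilliefors correction).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Illustrative daily series standing in for one pollutant at paired stations
urban = rng.lognormal(mean=3.3, sigma=0.4, size=365)      # e.g. PM10, ug m-3
suburban = rng.lognormal(mean=3.0, sigma=0.4, size=365)

def urban_vs_suburban(a, b, alpha=0.05):
    """K-S normality check first, then the test used in the paper:
    two-sample t-test if both series look normal, Mann-Whitney U otherwise."""
    normal = all(
        stats.kstest(stats.zscore(x), "norm").pvalue > alpha for x in (a, b)
    )
    if normal:
        res, name = stats.ttest_ind(a, b), "two-sample t-test"
    else:
        res, name = stats.mannwhitneyu(a, b), "Mann-Whitney U test"
    decision = "reject H0" if res.pvalue < alpha else "retain H0"
    return name, res.pvalue, decision

print(urban_vs_suburban(urban, suburban))
```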
The SO2 concentration did not vary significantly between the two monitoring station types in Iasi (1.18%), while in Brasov a significant variation was detected between them (7.4%). For NOx, large concentration differences were found. The suburban stations show very similar trends in the two cities, while at the urban sites the cumulative difference between the two cities over the five years was a factor of 2.27; in 2015 the same concentrations were measured, and over the last seven years the difference is a factor of 1.2. Among the meteorological parameters, the air temperature, the relative humidity and the precipitation quantity were followed. The air temperature was on average 0.8 °C (Brasov) and 1.5 °C (Iasi) higher in the urban area than in the suburban region. For relative humidity, higher values were detected in the suburban area, on average by 17.31% in Brasov (Fig. 3).

Box plot analysis

The air pollutant concentrations show very large seasonal differences. In the urban region, the maximum monthly average NOx concentration was detected in winter and the minimum in summer; the ratio between the cold-period and warm-period concentrations was approximately 3 in Brasov and 2 in Iasi. In the suburban region the variation was smaller (Fig. 4). In the urban environment, NOx levels are high mainly due to vehicle emissions. The NOx concentration is significantly higher in Brasov than at the Iasi urban sites and quite similar in the two suburban regions. On average, 62% (Iasi) and 74% (Brasov) of the urban NOx concentrations are due to city emissions (Fig. 4).

The maximum PM2.5 concentration was measured in the winter period and the minimum in the summer period (Fig. 5). The ratio between cold-period and warm-period PM2.5 concentrations was 1.7 in Brasov and 1.6 in Iasi; in the transition seasons the concentrations were almost identical.

Likewise, the urban region is an active source of PM10: 25.8% (Brasov) and 28.6% (Iasi) of the PM10 concentrations measured in the cities are produced within the urban area. The seasonal concentration fluctuation (the ratio between the minimum and maximum concentrations over the year) was 1.52 (Iasi) and 1.7 (Brasov) in the urban area, and 1.4 (Iasi) and 2.4 (Brasov) in the suburban zone (Fig. 6).

Less marked is the urban increase of SO2, which is only 2% (Iasi) and 8% (Brasov) higher than at the suburban site (Fig. 7). Among the meteorological parameters, air temperature, relative humidity and precipitation were measured. As shown in Fig. 8, the annual distribution of precipitation is uneven, with the highest precipitation occurring in the spring months and June, when a large proportion of the annual precipitation falls: 38% (Iasi) and 44% (Brasov) of the annual quantity falls in this period. The driest months of the year fall in the winter period.

The monthly average air temperature is higher in the urban regions by 0.7 °C in Brasov and 2 °C in Iasi in the warm period, and similar differences of about 1 °C were detected in the cold period (Fig. 9).
The relative humidity distribution shows the opposite trend to the air temperature: lower levels in summer and higher levels in winter. In Iasi, the relative humidity was 74% and 69% in the warm period, and 88% and 83% in the cold period, in the urban and suburban regions, respectively. In Brasov, the relative humidity was 66% and 77% in the warm period, and 80% and 88% in the cold period, in the urban and suburban regions, respectively (Fig. 10).

The lifting condensation level shows large fluctuations over the seasons (Fig. 11).

Relative risk calculation

The relative risk (RR) for all-cause mortality was calculated separately for the Iasi and Brasov urban and suburban regions, using the average yearly PM10 concentration. The average relative risk caused by PM10 for all-cause mortality in the urban region was 1.021 (±0.004) in Brasov and significantly higher, 1.030 (±0.005), in Iasi. In the suburban regions this risk was lower, by 33% in Brasov (1.014 ± 0.006) and by 30% in Iasi (1.021 ± 0.003) (Table 5). A high relative risk for cardiopulmonary disease, mainly attributed to PM2.5 exposure, was also observed: according to the average values, the relative risk was 1.26 (±0.04) in Brasov and 1.31 (±0.03) in Iasi (Table 5). For the cancer risk, the following relative risk values were obtained: 1.42 in Brasov and 1.5 in Iasi.

Spearman correlation analysis

Based on the sample size, the threshold for significant correlation was set at ±0.4. The Spearman correlation analysis detected the following significant coefficients in Brasov: a particularly high correlation between the particulates, PM2.5-PM10 (r = 0.77 urban, r = 0.72 suburban). The particulate matter concentrations show strong positive correlations with NOx and SO2 in urban areas. Negative correlations were found between the air pollutants and both the precipitation quantity and the air temperature (Fig. 12). The correlation level was higher at the urban sites, owing to the intense traffic emissions. In Iasi, significant correlations were found for PM2.5-NOx (r = 0.62 urban, r = 0.48 suburban); between PM2.5 and PM10, the same correlation coefficient, r = 0.53, was found at both site types (Fig. 13).

Conclusion

In this research, a 2010-2022 database of air quality parameters and meteorological data was investigated to find the differences between the urban and suburban environments. The PM10 concentration was about 25% higher in the urban areas than in the suburban region. The air pollutant concentrations show a strong annual cycle: the maximum concentrations were measured in the winter period and the minimum in the summer period. The differences between the two cities are evident. The higher air pollutant concentrations in Iasi are related to its geographic location at the border with a non-EU country, where the EU air quality directives are not yet applied.
The calculated relative risk in the urban areas was about 33% higher than in the suburban region. The annual distribution of precipitation is uneven, with the highest precipitation occurring in the spring months and in June. Trend calculations reflect a clear improvement of the air quality in the Iasi urban environment, with a considerable drop of PM10 concentrations. This decreasing tendency is less noticeable in the suburban areas, owing to increased exposure to road-traffic emissions resulting from urban development rules and mitigation methods. Due to the high levels of transport emissions, the correlation level was higher in the urban areas.

Industry operates in accordance with the standards of the European Union, but due to Romania's specific climate and building structure, the transport problem has not been solved. The suburban zones near the big cities of Romania merge with industrial zones and agricultural areas. In Romania, many suburban areas are heated with biomass, and in the poorer parts it is even used for cooking. In the case of Brasov and Iasi, due to the exceedances of PM and NOx concentrations, an infringement procedure was initiated against Romania.

Fig. 2. Annual concentrations of air pollutants in the urban and suburban regions.
Fig. 3. Annual evolution of the meteorological parameters in the urban and suburban regions.
Fig. 4. Multiannual monthly box-plot analysis of the NOx concentration.
Fig. 8. Multiannual monthly box-plot analysis of the monthly summarized precipitation quantity.
Fig. 9. Multiannual monthly box-plot analysis of the monthly air temperature.
Fig. 10. Multiannual monthly box-plot analysis of the relative humidity distribution.
Fig. 11. Multiannual monthly box-plot analysis of the lifting condensation level distribution.
Fig. 12. Spearman correlation matrix of the studied parameters in the Brasov urban and suburban regions.

Table 1. Details of the studied monitoring stations.

Table 4. Statistical test results. Null hypothesis: the distribution is the same across the categories (urban, suburban).

Parameter | Test | Statistic / p-value | Decision
NOx BV | Mann-Whitney U test | 0.00 | Reject the null hypothesis
NOx IS | Two-sample t-test | 1.97 / 16.75 | Reject the null hypothesis
PM10 BV | Mann-Whitney U test | 0.00 | Reject the null hypothesis
PM10 IS | Two-sample t-test | 1.96 / 11.83 | Reject the null hypothesis
SO2 BV | Mann-Whitney U test | 0.00 | Reject the null hypothesis
SO2 BV | Two-sample t-test | 1.96 / 2.42 | Reject the null hypothesis
SO2 IS | Mann-Whitney U test | 0.01 | Reject the null hypothesis
SO2 IS | Two-sample t-test | 1.96 / 11.83 | Reject the null hypothesis
Prec BV | Mann-Whitney U test | 0.02 | Reject the null hypothesis
Prec IS | Mann-Whitney U test | 0.534 | Retain the null hypothesis
RH BV | Two-sample t-test | 1.96 / 9.20 | Reject the null hypothesis
RH IS | Two-sample t-test | 1.96 / 3.78 | Reject the null hypothesis
T BV | Mann-Whitney U test | 0.04 | Reject the null hypothesis
T IS | Mann-Whitney U test | 0.00 | Reject the null hypothesis

Table 5. Relative risk calculations for PM10 and PM2.5: relative risk for all ages due to PM10 (all-cause mortality); relative risk for ages >30 years due to PM2.5 (cardiopulmonary disease); relative risk for ages >30 years due to PM2.5 (lung cancer).
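For completeness, a Spearman correlation matrix such as those shown in Figs. 12-13 can be produced with pandas. The data below are synthetic and the column names illustrative; the snippet only demonstrates the mechanics, including the |r| ≥ 0.4 significance threshold adopted above.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 365
temp = rng.normal(10, 8, n)                        # daily air temperature, deg C
df = pd.DataFrame({
    "T": temp,
    "RH": np.clip(80 - 0.8 * temp + rng.normal(0, 5, n), 20, 100),
    "PM25": np.clip(25 - 0.6 * temp + rng.normal(0, 4, n), 1, None),
    "PM10": np.clip(35 - 0.8 * temp + rng.normal(0, 6, n), 1, None),
    "NOx": np.clip(40 - 1.0 * temp + rng.normal(0, 8, n), 1, None),
})

# Spearman (rank) correlation matrix; flag |r| >= 0.4 as in the paper
corr = df.corr(method="spearman")
print(corr.round(2))
print((corr.abs() >= 0.4) & (corr.abs() < 1.0))    # mask of significant pairs
```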
2023-11-02T15:25:02.902Z
2023-10-01T00:00:00.000
{ "year": 2023, "sha1": "b8084462b5b94df6929c65689950ecf6a866cf15", "oa_license": "CCBYNCND", "oa_url": "http://www.cell.com/article/S2405844023090187/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "9a188e376503450ef740709ddd4e14510013e971", "s2fieldsofstudy": [ "Environmental Science", "Geography" ], "extfieldsofstudy": [ "Medicine" ] }
235490380
pes2o/s2orc
v3-fos-license
Filtering states with total spin on a quantum computer

Starting from a general wave function described on a set of spins/qubits, we propose several quantum algorithms to extract the components of this state on eigenstates of the total spin ${\bf S}^2$ and its azimuthal projection $S_z$. The method plays the role of a total spin projection and gives access to the amplitudes of the initial state on a total spin basis. The different algorithms have various degrees of sophistication depending on the requested tasks. They can either solely project onto the subspace with good total spin or completely lift the degeneracy in this subspace. After each measurement, the state collapses to one of the spin eigenstates, which can be used for post-processing. For this reason, we call the method Total Quantum Spin filtering (TQSf). Possible applications ranging from many-body physics to random number generators are discussed.

I. INTRODUCTION

Given a complex problem and a set of qubits forming a quantum computer, what is the optimal way to encode the information about the problem in this quantum computer? There is certainly no unique answer to this question. A strong guide is the symmetry properties of the problem under consideration. Typical examples of interest for the present discussion are combinatorial problems. Let us assume a system corresponding to a set of n elements $\{e_i\}$ with some properties. The answers to some questions about the system sometimes do not change if we make permutations between some of the elements. We assume that the property of each element $e_i$ is encoded on a qubit $|s_i\rangle$ (with $s_i \in \{0,1\}$). A possible basis is the one formed by the tensor product states $\{\otimes_i |s_i\rangle\}$, which we call the natural basis (NB). In this basis, the system can be represented on a quantum computer by a wave function

$|\Psi\rangle = \sum_{s_i \in \{0,1\}} \Psi_{s_1,\cdots,s_n} |s_1, \cdots, s_n\rangle$.   (1)

The invariance of the combinatorial problem under permutations of some of the elements will then be reflected as an invariance of the wave function under the exchange of the corresponding spins. The direct consequences of this invariance, which are well known in physics, are (i) that specific recombinations of the natural states will show up in Eq. (1), and (ii) that the problem might be more efficiently treated by considering the proper combinations of states prior to the encoding of the problem. The latter is the underlying idea of permutational quantum computing (PQC) introduced in Refs. [1,2] and further discussed in Ref. [3]. The PQC technique uses an alternative basis for the quantum processor unit (QPU) connected to the eigenstates of the total spin and its azimuthal component. The interest in such a basis for combinatorial problems is not surprising: it was realized in the early days of quantum mechanics that these states are intimately linked to the permutation symmetry group $S_n$ (for a nice historical overview, we recommend Ref. [4]). Therefore, this basis is widely used to solve quantum many-body problems using the total spin algebra, and its links to the permutation group are well documented in many textbooks, to quote some of them [5-7]. In the following, we will simply use the terminology "Total Spin Basis" (TSB) for the basis to be used in the PQC framework. Finding the TSB is equivalent to constructing the complete set of irreducible representations of the symmetric group.
The construction of these representations from the natural basis on a quantum computer has attracted a lot of attention, primarily due to its usefulness in quantum many-body problems appearing in physics and chemistry [8]. For example, an efficient quantum algorithm based on the Schur transformation was proposed in Ref. [9] (see also Refs. [10-13]). We note that a classical algorithm that can compete in computing the amplitudes in the PQC was proposed in Ref. [14]. The possibility of preparing and using states of the TSB on a quantum computer is also of great interest for studying interacting particles with spins when the total spin commutes with the Hamiltonian. In classical simulations, the use of such symmetry automatically focuses the computation on the relevant subspace of the Hilbert space. Significant efforts are currently being made to prepare many-body states on quantum computers that automatically preserve the spin symmetry [15-19], with the aim of obtaining more optimal states for variational calculations.

In the present work, we have a different objective. Given an initial state that is not necessarily an eigenstate of the total spin or of its azimuthal projection, we propose a general algorithm to compute the amplitudes of the state on the TSB states. It turns out that this algorithm can also be used (i) to select a specific component of the initial state, projected on good total and azimuthal spin, playing the role of a spin projection, or (ii) to obtain specific states of the TSB. For this reason, we call it the Total Quantum Spin filtering (TQSf) method. We note that this objective is directly related to the symmetry breaking/symmetry restoration problem; discussions of its formulation on quantum computers can be found in Refs. [20-25].

II. QUANTUM ALGORITHMS FOR THE TQSF METHOD (METHOD AND NOTATION)

We consider an ensemble of n spins labelled by i, with components up or down denoted by $\{|\sigma_i\rangle = |\pm\rangle_i\}_{i=0,n-1}$. We use the convention $|0\rangle_i = |+\rangle_i$ and $|1\rangle_i = |-\rangle_i$ to match the standard notations in quantum computing. The total spin operator of the system is defined as $\mathbf{S} = \sum_i \mathbf{S}_i$, where $\mathbf{S}_i$ denotes the spin operator associated with particle i, linked to the standard Pauli matrices through $\mathbf{S}_i = \frac{1}{2}(X_i, Y_i, Z_i)$. These three operators are completed by the identity operator $I_i$. We consider a general wave function $|\Psi\rangle$ given in the natural basis by Eq. (1). The eigenstates of the commuting observables $\mathbf{S}^2$ and $S_z$ form a complete basis of the Hilbert space of n qubits. The possible eigenvalues of $\mathbf{S}^2$ and $S_z$ are given by $S(S+1)$ and $M$ (assuming $\hbar = 1$), respectively, with the constraints $S \le n/2$ and $-S \le M \le +S$. We introduce the set of projectors $P^{[S,M]}$ onto the subspaces associated with the eigenvalues $(S, M)$. Our first objective is to obtain the amplitudes of the initial state decomposition, $A_{S,M} \equiv \langle\Psi|P^{[S,M]}|\Psi\rangle$, and eventually to extract one of the projected normalized states, given by

$|\Psi_{S,M}\rangle = \frac{1}{\sqrt{A_{S,M}}} P^{[S,M]} |\Psi\rangle$.   (2)

To achieve this objective, we apply the technique proposed in Ref. [25]. We consider two separate operators $U_S$ and $U_z$, allowing for the discrimination of $\mathbf{S}^2$ and $S_z$ when used in the Quantum Phase Estimation (QPE) algorithm [26-29]. As a result, the projection on the states $|\Psi_{S,M}\rangle$ is automatic when the ancillary qubits used in the QPE are measured.
The operators used to discriminate the different components are taken as $U_{S/z} = e^{2\pi i \alpha_{S/z}(n) O_{S/z}}$, where $O_S$ and $O_z$ are operators with known eigenvalues. The eigenvalues, denoted by $\{\lambda^S_i\}$ and $\{\lambda^z_i\}$, are proportional to those of $\mathbf{S}^2$ and $S_z$, respectively. Furthermore, $\alpha_S(n)$ and $\alpha_z(n)$ should be chosen in a very specific way. These parameters should ensure that, for all eigenvalues, the quantities $\alpha_x(n)\lambda^x_i$ verify $0 \le \alpha_x(n)\lambda^x_i < 1$ and always correspond to binary fractions with a finite number of terms. Moreover, denoting by $n_S$ and $n_z$ the numbers of extra ancillary qubits used in the QPE for $U_S$ and $U_z$, respectively, these numbers should be chosen as the minimal values such that $2^{n_x}\alpha_x(n)\lambda^x_i$ are positive integers for all eigenvalues. There is some flexibility in the choice of both $U_S$ and $U_z$.

First, we consider the total $S_z$ component. This component verifies $S_z = \frac{1}{2}(N_0 - N_1)$, where $N_0$ (resp. $N_1$) is the operator that counts the number of 0 (resp. the number of 1) in the state. $S_z$, $N_0$, and $N_1$ are commuting operators, and the states of the natural basis are their eigenstates. To select the states with good particle number or, equivalently, eigenstates of $S_z$, we use the QPE applied to $N_1$. With the constraints listed above, a convenient choice is $\alpha_z(n) = 1/2^{n_z}$. The eigenvalues of $N_1$ range from 0 to n; accordingly, the minimal possible value of $n_z$ is such that $n_z > \ln n/\ln 2$. With this choice, the filtering of states with respect to the eigenvalues of $S_z$ becomes strictly equivalent to the particle number projection illustrated in Ref. [25]. Therefore, in the natural basis, $U_z$ is given by a product of single-qubit phase operators, $U_z = \prod_i e^{2\pi i \alpha_z (I_i - Z_i)/2}$.

We now consider the projection on the total spin $\mathbf{S}^2$. For n qubits, the eigenvalues of this operator are positive and verify $\lambda^S \le n(n+2)/4$. Depending on whether n is even or odd, we propose the following forms of $U^{e/o}_S$:

$U^e_S = e^{2\pi i\, \mathbf{S}^2/2^{n_S+1}}, \qquad U^o_S = e^{2\pi i\, (\mathbf{S}^2 - \frac{3}{4})/2^{n_S}}$.   (3)

The number of ancillary qubits obeys the constraints

$n_S > \frac{\ln k(k+1)}{\ln 2} - 1$ (even), $\qquad n_S > \frac{\ln k(k+2)}{\ln 2}$ (odd),   (4)

respectively, for the even n = 2k and odd n = 2k + 1 cases. In practice, to compute the $U^{e/o}_S$ operators, we use the standard formula [4]

$\mathbf{S}^2 = \frac{n(4-n)}{4} + \sum_{i<j} P_{ij}$,   (5)

which generalizes the Dirac identity originally derived for two spins in Ref. [30]. The operators $P_{ij}$ are the transposition operators, given by

$P_{ij} = \frac{1}{2}\left(I_i I_j + X_i X_j + Y_i Y_j + Z_i Z_j\right)$.   (6)

We have in particular $P_{ij} |\delta_i \delta_j\rangle = |\delta_j \delta_i\rangle$ and $P^2_{ij} = I$. With the formula given in Eq. (5), the link between the total spin operator and the permutation group becomes explicit. Some aspects of transpositions and their use in directly constructing states with good total spin were discussed in Ref. [19]. In the quantum computing context, the transposition operators are nothing but the SWAP operators. In the present work, we implement the $U_S$ operators [Eq. (3)] using the Trotter-Suzuki decomposition technique [8,31] based on the expression given by Eq. (5) and by noting that, since $P^2_{ij} = I$,

$e^{i\theta P_{ij}} = \cos\theta\, I + i \sin\theta\, P_{ij}$.   (7)

[FIG. 1: Schematic illustration of the circuit used in the present work to filter the states using the total spin $\mathbf{S}^2$ and $S_z$ components.]

A schematic diagram of the circuit performing the simultaneous selection of eigenstates of $S_z$ and $\mathbf{S}^2$ is shown in Fig. 1. The method proposed here is tested using the IBM toolkit qiskit [32]. We show in Fig. 2 the amplitudes obtained for a system described on n = 4 qubits by measuring the ancillary qubits of the circuit shown in Fig. 1, for two examples of initial states. For such a small number of qubits, the decomposition in terms of the $|\Psi_{S,M}\rangle$ can be obtained analytically.
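Independently of the quantum circuit, Eqs. (5)-(6) can be checked classically in a few lines of numpy: assembling the total-spin operator from SWAPs and inspecting its spectrum recovers, for n = 4, the eigenvalues S(S+1) ∈ {0, 2, 6}. This illustrative snippet is a sanity check only, not part of the algorithm.

```python
import numpy as np
from functools import reduce

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0]).astype(complex)

def op_on(pauli, i, j, n):
    """Tensor product placing `pauli` on qubits i and j, identity elsewhere."""
    facs = [pauli if k in (i, j) else I2 for k in range(n)]
    return reduce(np.kron, facs)

def swap(i, j, n):
    """Transposition P_ij = (II + XX + YY + ZZ)/2, Eq. (6)."""
    return 0.5 * sum(op_on(P, i, j, n) for P in (I2, X, Y, Z))

n = 4
S2 = n * (4 - n) / 4 * np.eye(2**n, dtype=complex)   # constant term of Eq. (5)
for i in range(n):
    for j in range(i + 1, n):
        S2 += swap(i, j, n)

print(sorted(set(np.round(np.linalg.eigvalsh(S2), 6))))  # -> [0.0, 2.0, 6.0]
```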
We have checked that the amplitudes obtained from the measurements are consistent with the analytical ones within the errors due to the finite number of measurements. We can further analyze the obtained results. The results displayed in panel (a) of Fig. 2 correspond to an initial state that is completely symmetric with respect to any permutation of the qubit indices. Consequently, it only decomposes on states $|\Psi_{S,M}\rangle$ that also have this property. Such states correspond to the maximal possible eigenvalue of $\mathbf{S}^2$, i.e., in our case, S = 2. In the context of group theory, the irreducible representations associated with the TSB can be represented by Young tableaux [5-7] with a maximum of two rows; fully symmetric states are those represented by a single row. For a general initial state with n qubits given by $|\Psi\rangle = \prod_{k=0}^{n-1} H_k |0\rangle$, its decomposition onto the TSB will be given by

$|\Psi\rangle = \sum_k \sqrt{p_k}\, |\Psi_k\rangle$,   (8)

where k is the eigenvalue associated with $N_1$ and $|\Psi_k\rangle$ the corresponding normalized component. For this specific initial state, the amplitudes $p_k$ are equal to $C^k_n/2^n$, identifying with a binomial distribution (with p = q = 1/2). In the large-n limit, this probability tends to a Gaussian. It is interesting to mention that a direct by-product of the approach is the possibility to generate random numbers $x_k = k/n \in [0,1]$ on an equidistant discretized mesh according to the set of probabilities $\{p_k\}$.

We now come to the main goal of the present algorithm. When measurements are performed on the two sets of ancillary qubits associated respectively with $U_S$ and $U_z$, after each measurement, labelled by $(\lambda)$, the total wave function $|\Psi\rangle$ collapses, according to the Born rule, to $|\Psi_{S^{(\lambda)},M^{(\lambda)}}\rangle$. Here, $S^{(\lambda)}$ (resp. $M^{(\lambda)}$) should be interpreted as the binary number obtained by measuring the ancillary qubits associated with $U_S$ (resp. $U_z$) in the event $\lambda$. So, after the measurement, the wave function $|\Psi\rangle$ is an eigenstate of both the total spin and its azimuthal component. Said differently, the circuit represented in Fig. 1 plays the role of a funnel that lets only one component (S, M) pass at each event, and therefore acts as a projector on the TSB. The values $(S^{(\lambda)}, M^{(\lambda)})$ might change at each measurement unless the initial state is already an eigenstate of the total spin operators. In general, the outcome of the circuit can be controlled solely through the initial state. Consequently, the projected state can be used for post-processing. A direct application of the present method in physics or chemistry is the study of spin systems that encounter spontaneous symmetry breaking associated with a preferred spin orientation. If we assume that the initial state depends on a set of parameters $\{\theta_i\}_{i=1,g}$, the symmetry-restored state can then be used in variational approaches either prior to the projection (projection after variation) or after it (projection before variation) (see for instance Refs. [33,34]). The circuit of Fig. 1 achieves our first objective, namely the preparation of states with good total spin and total z-projection. The technique also works if the initial state is not fully symmetric with respect to the permutation of qubits; an example of such an application is given in panel (b) of Fig. 2. As we see, in this case the state also has components on total spins with S < n/2, i.e., states corresponding to Young tableaux with two rows. There is, however, a difference between the Fully Symmetric (FS) case and the other cases: in the latter, the projected state $|\Psi_{S,M}\rangle$ is in general an admixture
of the different eigenstates, where the mixing coefficients directly reflect the relative proportions of the degenerate states in the initial wave function.

III. CONSTRUCTION OF THE AMPLITUDES ON THE COMPLETE IRREDUCIBLE REPRESENTATION BY MEASUREMENTS

Let us now consider a complete basis formed by eigenstates of $\mathbf{S}^2$ and $S_z$. We denote an element of the basis by $|S, M\rangle_g$, where the index $g = 1, \ldots, d_{S,M}$ distinguishes the different states belonging to the space $\mathcal{H}_{S,M}$. The system's initial state $|\Psi\rangle$ can be decomposed as

$|\Psi\rangle = \sum_{S,M} \sum_{g=1}^{d_{S,M}} c^g_{S,M}\, |S, M\rangle_g$.   (9)

When $d_{S,M} = 1$, the state $|S, M\rangle_1$ is identical to the state $|\Psi_{S,M}\rangle$ introduced previously; otherwise, $|\Psi_{S,M}\rangle$ is an admixture of the states $|S, M\rangle_g$. Here, we intend to generalize the circuit proposed in Fig. 1 in order to obtain the amplitudes $|c^g_{S,M}|^2$ and to directly obtain one of the states of the irreducible representation, $|S, M\rangle_g$, after the measurement of the ancillary qubits. For this, we use the same strategy as in the PQC framework. Coming back to the Young tableaux representation, all states that are not fully symmetric have two rows. Let us assume that these states correspond to $l_1$ and $l_2$ boxes in the first and second rows, respectively (with $l_2 \le l_1$ and $l_1 + l_2 = n$). The associated total spin corresponds to $S = (l_1 - l_2)/2$. The different $|S, M\rangle_g$ have the same $(l_1, l_2)$ but differ in their symmetries with respect to the exchange of qubits. Each state can be associated with a different sequence of Young tableaux obtained by including each spin/qubit one after the other [3,5,14] (see for instance Fig. 4 of Ref. [14]). The sequence of Young tableaux can be seen as an iterative procedure in which the total spin of n qubits is obtained by coupling one spin at a time. Starting from one spin, a second spin is added and an eigenstate of the operator $\mathbf{S}^2_{[2]}$ is obtained; here, the index [2] indicates that the operator refers only to the first two spins. Consecutively, a third spin is coupled to find an eigenvector of $\mathbf{S}^2_{[3]}$, and so on, until the eigenstate of $\mathbf{S}^2_{[n]}$ is obtained [3,14]. For a system of exactly n qubits, $\mathbf{S}^2_{[n]}$ identifies with the total spin $\mathbf{S}^2$ defined previously. In the following, we denote the total spin eigenvalue for the first k qubits by $S_{[k]}$, such that the eigenvalue of $\mathbf{S}^2_{[k]}$ is $S_{[k]}(S_{[k]}+1)$.

As an illustration, we consider the n = 4 case used in Fig. 2. The states $|1, M\rangle$ can be generated by the three sequences of Young tableaux given in Eq. (10). Omitting $S_{[1]}$, which is always equal to 1/2, the three sequences of Eq. (10) correspond to the sets of eigenvalues for $[S_{[2]} \to S_{[3]} \to S_{[4]}]$ respectively given by (a) $[1 \to 3/2 \to 1]$, (b) $[1 \to 1/2 \to 1]$, and (c) $[0 \to 1/2 \to 1]$.

There are several important properties to be recalled here. First, there is a one-to-one correspondence between a Young tableaux sequence and a state of the irreducible representation. Second, the state constructed by a Young tableaux sequence has a "memory" of its path, i.e., it is an eigenvector of the full set of operators $\mathbf{S}^2_{[2]}, \cdots, \mathbf{S}^2_{[n]}$ along with the total $S_z$ component. This last property gives us a direct way to generalize the circuit given in Fig. 1 and obtain the amplitudes in Eq. (9). A brute-force technique consists of introducing a set of ancillary qubits and performing independent QPEs for all the operators $\mathbf{S}^2_{[j]}$, together with the QPE associated with the total $S_z$ component. In practice, the QPE on a specific total spin $\mathbf{S}^2_{[j]}$ is associated with a unitary operator denoted by $U_{[j]}$, which can be constructed in a similar way as the operators defined in Eq.
(3), depending on whether j is odd or even. The operators $U_{[j]}$ are deduced simply by replacing $\mathbf{S}^2$ with $\mathbf{S}^2_{[j]}$ and by optimizing the number of ancillary qubits $n_{[j]}$ according to the accessible eigenvalues of $\mathbf{S}^2_{[j]}$, as prescribed in Eq. (4). The corresponding circuit is shown in Fig. 3. This circuit was implemented with qiskit [32], and the results obtained for the same conditions as in panel (b) of Fig. 2 are shown in Fig. 4. We see in this figure that the amplitudes previously associated with the two components $|\Psi_{1,M}\rangle$, M = −1, 1, have now systematically split into three amplitudes corresponding to the three states $|1, M\rangle_{g=1,2,3}$. Similarly, the component associated with $|\Psi_{0,0}\rangle$ is now separated into the two contributions $|0,0\rangle_0$ and $|0,0\rangle_1$. Here again, the algorithm has been validated by confronting the amplitudes obtained numerically with the analytical ones. Finally, we mention that the outcome of the circuit after each measurement is one of the states of the irreducible total spin representation.

A. Reducing the circuit depth of the TQSf method

The brute-force generalization of the algorithm to project a given state onto one of the irreducible representations of the total spin requires a rather large number of operations and of ancillary qubits. As seen in Eq. (5), the number of transpositions in $\mathbf{S}^2_{[j]}$ is equal to j(j−1)/2. Therefore, if the Trotter-Suzuki technique is employed to simulate the operator $U_{[j]}$, the exponential appearing in this operator has a priori to be split into j(j−1)/2 terms. To reduce the numerical effort, we first note that the states $|S, M\rangle_g$ are also eigenstates of the differences $\mathbf{S}^2_{[j]} - \mathbf{S}^2_{[j-1]}$ for $2 \le j \le n$. Since we have

$\mathbf{S}^2_{[j]} - \mathbf{S}^2_{[j-1]} = \frac{3}{4} - \frac{j-1}{2} + \sum_{i<j} P_{ij}$,   (11)

we finally remark that these states are eigenstates of the set of simpler operators

$H_{[j]} = \sum_{i<j} P_{ij}$,   (12)

for j = 2, · · · , n, where $n_{[j]}$ is optimally chosen as the minimal value verifying, for $j \ge 2$, the condition of Eq. (14). The use of $H_{[j]}$ instead of $\mathbf{S}^2_{[j]}$ has two practical advantages. As seen from Eq. (12), these operators contain only (j − 1) transpositions, and therefore the number of terms in the Trotter-Suzuki method scales linearly with j, compared to the quadratic number of terms for $\mathbf{S}^2_{[j]}$. In addition, the number of ancillary qubits $n_{[j]}$ obtained from the condition given in Eq. (14) is much lower than the one obtained from the previous condition, Eq. (4), when j increases. We have also implemented the TQSf approach based on the operators $\{H_{[j]}\}$ for the illustration given in Fig. 4 and obtained strictly the same results (not shown here) with fewer operations.

B. TQSf method based on a sequential measurement technique with minimal quantum resources

In the previous discussion, we explored the possibility of obtaining the amplitudes of any state on the total spin basis by performing simultaneous measurements of a set of ancillary qubits. These measurements give a snapshot of the path of each total spin eigenvector in the so-called sequential construction of the state. As underlined in Ref. [3] and further discussed recently in Refs. [13,14], one can associate a binary number to each path, directly representing the increases or decreases of the total spin components in the Young tableaux construction (see Fig. 4). For instance, the three sequences (a), (b) and (c) of Eq. (10) can be associated with the three binary numbers 110, 101 and 011, respectively.
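The path/bitstring correspondence is easy to enumerate classically. The sketch below lists all admissible sequences (S_[2], …, S_[n]) and their bit encodings (here 1 denotes an increase, written left to right, a convention chosen for readability only); for n = 4 it reproduces one S = 2 path, the three S = 1 paths and the two S = 0 paths, matching the degeneracies quoted above.

```python
from itertools import product
from collections import defaultdict

def spin_paths(n):
    """Enumerate sequences (S_[2], ..., S_[n]) with S_[1] = 1/2 and
    S_[j] = S_[j-1] +/- 1/2, subject to S_[j] >= 0, grouped by final S."""
    paths = defaultdict(list)
    for steps in product((+1, -1), repeat=n - 1):
        s, seq, ok = 0.5, [], True
        for d in steps:
            s += 0.5 * d
            if s < 0:                    # the total spin cannot go negative
                ok = False
                break
            seq.append(s)
        if ok:
            bits = "".join("1" if d > 0 else "0" for d in steps)
            paths[seq[-1]].append((tuple(seq), bits))
    return paths

for S, plist in sorted(spin_paths(4).items()):
    print(f"S = {S}: {len(plist)} path(s)")
    for seq, bits in plist:
        print("   ", seq, "<->", bits)
```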
A possible manner to directly encode the increase or decrease of the total spin on a single qubit is to find an appropriate operator encoding this property. Let us assume that we have j − 1 qubits already coupled to a total spin $S_{[j-1]}$ that is known. If we add one more spin, the new total spin accessible to the complete set of spins will be $S_{[j]} = S_{[j-1]} \pm 1/2$ (note that $S_{[j-1]} = 0$ imposes $S_{[j]} = S_{[j-1]} + 1/2$). A simple analysis shows that one can construct from $\mathbf{S}^2_{[j]}$ and the known value $S_{[j-1]}$ an operator $G_{[j]}$ whose eigenvalue equals 1 (resp. 0) on the eigenstates associated with $S_{[j]} = S_{[j-1]} + 1/2$ (resp. $S_{[j]} = S_{[j-1]} - 1/2$). This operator therefore directly encodes the increase or decrease of the total spin when adding the spin j. An important aspect of the application of the operator $G_{[j]}$ is that (i) the mapping to a single binary digit is valid only if the set of j − 1 spins is already projected onto an eigenstate of the total spin $\mathbf{S}^2_{[j-1]}$, and (ii) the eigenvalue $S_{[j-1]}$ is known. Assuming that these two conditions are fulfilled, a single ancillary qubit suffices to perform the QPE for the $G_{[j]}$ operator. The unitary operator to be used in the QPE is $V_{[j]} = e^{2\pi i G_{[j]}/2}$, and the QPE reduces to a simple Hadamard test.

The two conditions above suggest a modified algorithm with an iterative measurement procedure, performing a successive set of projections on the $\mathbf{S}^2_{[j]}$ with increasing j. We restart from a system $|\Psi\rangle$ described on a set of n spins. We introduce a variable S that is updated at each measurement and equals the value $S_{[j]}$ at step j. Initially, S = 1/2, i.e., the total spin of a single spin. The set of Hadamard tests/measurements is then made iteratively as follows: for each j = 2, …, n, construct $V_{[j]}$ from the current value of S, perform the Hadamard test, measure the ancillary qubit to obtain M (0 or 1), and update S according to the measured increase or decrease of the total spin.

One difficulty in the algorithm is that the intermediate step j is triggered by the knowledge of $S_{[j-1]}$ and, more generally, of the total spin components $S_{[k]}$ with k < j. Assuming ideally that the interface between the quantum and the classical computer works perfectly, the above algorithm can be implemented by sequentially applying a set of Hadamard tests for the operators $V_{[k]}$ with increasing k. Explicitly, starting from the initial state $|\Psi\rangle$, a Hadamard test is performed using $V_{[2]}$; after the ancillary qubit measurement, the value of S is updated on the classical computer and the new operator $V_{[3]}$ is constructed. A second Hadamard test is then made on the system using $V_{[3]}$, and so on, until all Hadamard tests are performed. This procedure is nothing but a quantum algorithm with repeated operations controlled by the classical computer. We show in Fig. 5(a) the circuit performing this scheme for the three-spin case. We start by considering two spins; after measurement, the resulting value (0 or 1) stored in the classical bit is utilized to control the form of the operator $V_{[3]}$ to be applied in the three-spin step (the branches labelled $V_{[3]}, S_{[2]} = 0$ and $V_{[3]}, S_{[2]} = 1$ in the figure). Furthermore, by adding one more ancilla qubit for each spin and using controlled operations conditioned on the values stored previously in classical bits, this circuit can be extended to larger spin/qubit systems.

[FIG. 5: Circuits for the sequential TQSf scheme on three spins: (a) using measurements and classically controlled operations, and (b) an equivalent version based on the principle of deferred measurement [26], suitable for currently available real quantum hardware. These circuits can be extended to larger spin systems in a similar manner.]

Considering the same initial state as used in Fig. 2(b), the results obtained from the extension of the circuit of Fig. 5(a) to the four-qubit/spin case are given in Fig. 6.
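The classical feedback loop driving the sequence of Hadamard tests can also be written out explicitly. The sketch below is a minimal illustration, assuming a hypothetical routine hadamard_test(state, j, S) that builds V_[j] from the current S, performs the test, and returns the measured ancilla bit together with the collapsed register; it is not a qiskit implementation, and the bit convention (1 = increase) is chosen for readability rather than to match Fig. 6.

```python
def sequential_tqsf(state, n, hadamard_test):
    """Iteratively project a register of n spins onto a definite
    total-spin path S_[2], ..., S_[n] (sequential TQSf scheme).
    `hadamard_test(state, j, S)` is assumed to apply the single-ancilla
    QPE for G_[j] at the current total spin S and to return
    (bit, collapsed_state)."""
    S = 0.5                       # total spin of a single spin
    path = []
    for j in range(2, n + 1):
        bit, state = hadamard_test(state, j, S)  # ancilla measurement: 0 or 1
        S += 0.5 if bit == 1 else -0.5           # update on the classical side
        path.append(bit)
    return path, S, state
```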
As a straightforward validation, we can see that the contributions of each total spin S are the same as given in Fig. 4. To obtain the irreducible representation, we can project on $S_z$ in the same way as in the two earlier techniques. We finally mention that operations conditioned on a classical register are currently not supported on the available real quantum devices. Therefore, we also explored the possibility of applying the present procedure without classically controlled operations. An alternative is to use the circuit given in Fig. 5(b), which is based on the principle of deferred measurement [26]. For the three-qubit case, this procedure leads to essentially the same results, but the circuit has the added complexity of multi-controlled gates, which need to be further decomposed into single- and two-qubit gates.

[FIG. 6: Amplitudes obtained for the four-spin case, cf. Fig. 5(b). The 0 (1) in the bit strings on the horizontal axis represents an increase (decrease) of the spin, and the total spin S is given in parentheses. The paths represented by the bit strings should be read from right to left.]

IV. CONCLUSIONS

Starting from a general wave function described on a set of spins/qubits, we address the problem of its decomposition onto eigenstates of the total spin. We begin with the methodology proposed in Ref. [25] to obtain a minimal set of quantum algorithms that play the role of projections onto the total angular momentum and the total spin azimuthal projection. The different algorithms have various degrees of sophistication depending on the requested tasks. They can either solely project onto the subspace with good S and M components or completely lift the degeneracy in this subspace. The measurement of the ancillary qubits gives access to the amplitudes of the initial state on a total spin basis. After each measurement, the state collapses to one of the eigenstates of the total spin; the procedure can therefore be used as a filter to prepare such eigenstates. For this reason, we call the method Total Quantum Spin filtering (TQSf). We propose several variants, either performing the operations on a quantum computer only or mixing quantum and classical computation; in the latter case, the quantum resources are minimized.

The method can have a wide range of applications. The first one, the original motivation of the present work, is associated with spin symmetry restoration in many-body systems such as those appearing in quantum chemistry, nuclear physics, or condensed matter physics. In this case, a parametrized spin-symmetry-breaking state can be used as the initial state, and the TQSf can be performed prior to the optimization of the state parameters. This variation-after-projection method [33,34] is known to be rather effective, but states obtained in this way are difficult to manipulate on a classical computer. We mention in the article that the method can also be used to generate random numbers on a discretized mesh. This aspect could be further explored in the future by connecting our study with spin random walk theory [36]. Another interesting avenue to explore is the possibility of generalizing the present method to the construction of tensor networks (see, for instance, Ref. [37] for a recent discussion of the connection between the SU(2) algebra and tensor networks).
2021-06-22T01:15:56.167Z
2021-06-21T00:00:00.000
{ "year": 2021, "sha1": "ffc125a74a64eadb7160549eb4ecb32aa07bda43", "oa_license": null, "oa_url": "http://arxiv.org/pdf/2106.10867", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "ffc125a74a64eadb7160549eb4ecb32aa07bda43", "s2fieldsofstudy": [ "Physics", "Computer Science" ], "extfieldsofstudy": [ "Physics" ] }
119090924
pes2o/s2orc
v3-fos-license
A discovery of young stellar objects in older clusters of the Large Magellanic Cloud

Recent studies have shown that an extended main-sequence turn-off is a common feature among intermediate-age clusters (1-3 Gyr) in the Magellanic Clouds. Multiple-generation star formation and stellar rotation or interacting binaries have been proposed to explain the feature; however, it remains controversial in the field of stellar populations. Here we present the main results on ongoing star formation among older star clusters in the Large Magellanic Cloud. Cross-matching the positions of star clusters and young stellar objects has yielded 15 matches, with 7 located in the cluster centre. We demonstrate that this is not by chance by estimating local number densities of young stellar objects for each star cluster. This method is not based on isochrone fitting, which is subject to uncertainties in age estimation and in the method of background subtraction. We also find no direct correlation between atomic hydrogen and the clusters, which suggests that gas accretion for fueling the star formation must be happening in situ. These findings provide support for the multiple-generation scenario as a plausible explanation for the extended main-sequence turn-off.

INTRODUCTION

Star clusters (SCs) are fundamental building blocks of galaxies. Their physical properties encode valuable information not only on their own formation processes but also on the evolutionary histories of their host galaxies. They were once thought to form in a single epoch of star formation that produced thousands or millions of coeval stars with the same chemical composition. Recent high-quality Hubble Space Telescope (HST) photometric studies of the colour-magnitude diagrams (CMDs) of intermediate-age (1-3 Gyr) SCs in the Magellanic Clouds uncovered evidence of multiple stellar populations with unexpected characteristics, such as two distinct main-sequence turn-offs for NGC 1846 (Mackey & Broby Nielsen 2007). Some of the Galactic globular clusters also reveal multiple stellar populations with different helium and light-element abundances (e.g., see Piotto et al. 2015; Milone et al. 2016); NGC 2808 is the most extreme case, where this phenomenon was first discovered (Piotto et al. 2007). These important discoveries have challenged the traditional view of SCs being formed in a single star formation episode (Mackey et al. 2008; Goudfrooij et al. 2015; Bekki & Mackey 2009; Milone et al. 2015).

The direct explanation for the extended main-sequence turn-off (eMSTO) is that these clusters experienced prolonged star formation for ∼100-500 Myr (Goudfrooij et al. 2009, 2014), though one Large Magellanic Cloud (LMC) cluster (NGC 1783) has recently been reported to have a possible age spread of ∼1 Gyr (Li et al. 2016). These results suggest that LMC clusters that are 1 Gyr old might still show signs of ongoing star formation today. However, such age spreads have recently been disputed by other observational and theoretical studies (Li et al. 2014; Bastian et al. 2013). Stellar rotation or interacting binaries, which can mimic an age spread on the CMD, have been proposed as an alternative explanation (Bastian & de Mink 2009; Yang et al. 2013; D'Antona et al. 2015 and references therein). It therefore remains unclear whether the clusters contain multiple generations of stars.
To test the prolonged star formation scenario, we adopt a method that does not rely on isochrone fitting to the CMDs of the clusters, the results of which have been refuted in some cases due to field star contamination (see e.g., Cabrera-Ziri et al. 2016). Young stellar objects (YSOs) are stars in their very early stage, with ages of less than 1 Myr (Dunham et al. 2015), so the detection of YSOs can be considered evidence of ongoing star formation. We therefore search for YSOs in LMC clusters with ages in the range of 0.1 to 1 Gyr to determine whether SCs could host multiple generations of stars. In this letter, we report the finding of YSOs in the central regions of several older SCs. Our finding provides clear evidence for ongoing star formation in SCs that once completed their star formation, and therefore demonstrates the presence of multiple generations of stars in at least some LMC clusters.

ANALYSIS AND RESULTS

We employ the HERITAGE band-matched (Seale et al. 2014) and star cluster (Glatt et al. 2010) catalogues (hereafter S14 and G10) for the analysis. The S14 catalogue consists of astronomical objects detected in multiple HERITAGE images. These objects were positionally cross-matched and then further matched to the Spitzer IRAC and MIPS point source catalogues. The S14 catalogue is dominated by bright YSOs but is potentially contaminated by background galaxies at low flux levels. S14 employed step-by-step cuts to eliminate contamination. They also used Herschel photometric constraints on the dust mass of asymptotic giant branch (AGB) progenitors to distinguish ambiguous classifications between post-AGB stars and YSOs. To take into account the flux confusion limit, YSOs were classified as having either a high (probable) or a moderate (possible) likelihood of being a YSO. The S14 catalogue employed here contains at most 1% evolved stars. The G10 catalogue lists the derived ages and V-band luminosities of young SCs (∼9 Myr to 1 Gyr) in the Magellanic Clouds. The G10 study selected SCs from a general catalogue of extended objects in the Magellanic System (Bica et al. 2008; hereafter B08) and employed the photometric data of the Magellanic Clouds Photometric Survey (MCPS; Zaritsky et al. 2002, 2004). The classification of double or multiple clusters (Dieball et al. 2002) is also adopted in the G10 catalogue. We positionally cross-match the 2493 probable and 1025 possible YSOs listed in the S14 catalogue with the 738 SCs of ages in the range of 0.1 to 1 Gyr in the G10 catalogue. We adopt a search radius of 10 pc, which is a typical cluster size (Piatti et al. 2014). This search radius corresponds to an angular size of 41.4″, assuming a distance to the LMC of 50 kpc, and takes into account the uncertainties discussed in the following sections. We find 15 probable and 6 possible YSOs falling within the search radius. Two of these could also be post-AGB stars, based on the classification in the S14 catalogue. In this Letter, we focus on the 15 probable YSOs, although the possible YSOs could also be genuine. Figure 1 shows the locations of the YSO detections in the LMC, and Table 1 summarizes the cross-matching results and the physical parameters of the SCs. Five out of the fifteen young stellar candidates are confirmed to be genuine by photometric and spectroscopic observations, such as the detection of ices and water masers (Gruendl & Chu 2009; Sewiło et al. 2010).
The other 10 are highly likely to be genuine YSOs, being consistent with the spectral energy distributions and dust properties of YSOs (Seale et al. 2014). Among these 15 YSO candidates, 7 are located at the cluster centre.

Reliability

The reliability of the cross-matching depends on the matching radius, the completeness of the catalogues, the source density, and the positional accuracy. The angular resolution and signal-to-noise ratio of the data sets affect the positional accuracy. The S14 catalogue adopts the coordinates of point sources in the Herschel PACS 100 µm point source catalogue if available; Herschel PACS 100 µm images provide the highest angular resolution of all Herschel wavebands. If no detection is documented in the PACS 100 µm point source catalogue, the coordinates from the next coarser angular resolution point source catalogue are adopted, and so on. We examined the positional errors adopted in the S14 catalogue: the standard deviation of the errors is 0.1″ with a median error of 0.06″, which is very small compared to the search radius. Star cluster coordinates adopted from B08 have an error of 10″-15″, which corresponds to a maximum error of 4 pc. The apparent cluster size is generally underestimated due to the visual inspection of the photographic plates. The search radius takes into account the positional accuracy of the SCs and the uncertainty of the apparent cluster size. Nevertheless, we verified the matched objects by visual inspection, using the Spitzer IRAC 3.6 µm and Herschel PACS 100 µm images for the SCs and YSOs, respectively.

Completeness

There is a selection bias in the G10 catalogue: it consists only of well-defined and not too extended SCs, and it is limited to SCs with ages between ∼10 Myr and 1 Gyr. Star clusters younger than 10 Myr are generally classified as associations or nebulae and are thus excluded from the G10 catalogue. The upper age limit is due to the limiting photometric magnitude of the MCPS, which makes it difficult to resolve the MSTO points of intermediate-age and older clusters. The S14 catalogue is complete for sources with fluxes brighter than ∼200 mJy in any single Herschel waveband over >99% of the HERITAGE image area with a surface brightness of <100 MJy sr−1 in the SPIRE 250 µm waveband.

Probability

Fifteen SCs (N_S−Y = 15) have YSOs within 10 pc of their centres. The total number of SCs (N_SC = 738) in the G10 catalogue might be smaller than the actual number because of the limited sensitivity of B08 and the MCPS. Also, the total number of YSOs (N_YSO = 2493) in the S14 catalogue might be significantly smaller than that of all YSOs, because only bright YSOs have been detected. Therefore, N_S−Y is likely to be underestimated. However, it is important for the present study to investigate whether the above N_S−Y can be explained simply by a high probability of a YSO and a SC lying along the same line of sight (i.e., chance alignment) and/or by a high local density of stars. We analytically estimate (i) the local probability of detecting the YSOs within 10 pc of the SCs (i.e., the search radius), based on the number of YSOs within 100 pc (N_YSO−100pc) of the 15 SCs, and (ii) N_YSO−100pc of the 15 SCs with respect to the field star density.

Notes. Table 1 is published in its entirety as Supporting Information with the electronic version of the paper. A portion is shown here for guidance regarding its form and content. (a) An asterisk next to the cluster name indicates possible double/multiple populations (Dieball et al. 2002).
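The cross-match and the chance-alignment estimate described above can both be scripted with astropy; the sketch below is illustrative, with made-up coordinate arrays and counts standing in for the S14 and G10 tables.

```python
import numpy as np
import astropy.units as u
from astropy.coordinates import SkyCoord

# Illustrative coordinates (deg); real work would load the S14 and G10 tables
yso = SkyCoord(ra=[80.12, 81.55, 79.90] * u.deg,
               dec=[-68.70, -69.10, -68.75] * u.deg)
sc = SkyCoord(ra=[80.11, 82.00] * u.deg,
              dec=[-68.71, -69.50] * u.deg)

seplimit = 41.4 * u.arcsec     # 10 pc at an assumed LMC distance of 50 kpc
# astropy convention: the first index array refers to the argument catalogue
# (sc), the second to the calling catalogue (yso)
idx_sc, idx_yso, sep2d, _ = yso.search_around_sky(sc, seplimit)
for i, j, s in zip(idx_sc, idx_yso, sep2d):
    print(f"SC {i} <-> YSO {j}: {s.arcsec:.1f} arcsec")

# Chance-alignment probability P_S-Y = N_YSO(<100 pc) * (R / 100 pc)**2
n_yso_100pc = np.array([3, 1])   # hypothetical counts around each matched SC
for R in (10.0, 3.0, 1.0):
    print(R, "pc:", np.mean(n_yso_100pc * (R / 100.0) ** 2))
```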
The probability of a YSO being detected within the defined radius R of a SC, given the local YSO density (P_S−Y), is N_YSO−100pc × (R/100 pc)². The mean probabilities for R = 10, 3, and 1 pc are 5.6 × 10⁻², 5.04 × 10⁻³, and 5.6 × 10⁻⁴, respectively, for the 15 SCs with YSOs. If we consider the inclination of the LMC (i = 27°-45°; van den Bergh 2000), the projected surface area of the LMC is appreciably smaller, but this does not affect P_S−Y. We note that 3 SCs (LMC 2442, LMC 2519* and LMC 3829) have a large N_YSO−100pc (>10). Examining the spatial density of YSOs at 200 pc from these 3 clusters, we only find a large N_YSO−100pc for LMC 2519* and LMC 3829. This suggests that these two SCs are possibly located in a very active star-forming region, where the probability of YSOs not physically associated with them is higher. Nevertheless, the low values of the mean probabilities demonstrate that the detection of YSOs in the other 13 SCs is unlikely to be a chance coincidence and must reflect a real physical association of the YSOs with the SCs.

By comparing the spatial density of YSOs around each of the 15 SCs with the density profile of the region, we can rule out the possibility that the detection of YSOs in these 15 clusters is due to a high local field star density: if it were, a higher detection rate would be expected in the inner region of the LMC, which is not the case. The density profile, Σ_D, can be described as exp(−D/l), where D is the distance between the SC and the LMC centre and l is the scale length of the LMC disc (1.5 kpc). In Figure 2, we show the theoretical density profile (dashed line) and the number of YSOs within 100 pc of the SCs as a function of distance from the LMC centre. There is no correlation.

Figure 3. Mean HI column density within 10 pc of the 738 star clusters vs. cluster age, on a logarithmic scale. Star clusters with ages between 0.1 and 1 Gyr are selected from the catalogue of young star clusters. Circles and squares correspond to the same symbols as in Figure 1. The HI column density decreases with increasing cluster age; there is no direct correlation between the HI gas and the star clusters with YSO detections.

DISCUSSION AND CONCLUSIONS

Accretion of interstellar atomic and/or molecular gas onto existing SCs can trigger second-generation star formation in the clusters (Bekki & Mackey 2009; Li et al. 2016; Pflamm-Altenburg & Kroupa 2009). We look for evidence of an external accretion event by investigating the atomic hydrogen (HI) content in the regions of the SCs with identified YSOs, using the combined single-dish (Parkes) and interferometric (Australia Telescope Compact Array) LMC HI image (Kim et al. 2003). As a SC passes through a region of dense gas, accretion can occur and trigger star formation in situ. As shown in Figure 3, we find no direct correlation between the HI gas and the clusters, which suggests that accretion of interstellar gas onto the clusters is not responsible for the formation of YSOs in them. However, we note that four SCs are situated at the rims of two HI supershells (SGS 7 and 11; Kim et al. 1999). These supershells are known to have relatively high molecular gas concentrations (Yamaguchi et al. 2001), and star formation can be triggered by shell expansion or by accretion of the cold molecular gas. Interestingly, the detected YSOs in these four clusters reside in the cluster centres. While our sample is small, we find that all SCs with YSOs are no older than 0.4 Gyr (Figure 4).
The lack of YSOs in clusters older than 0.4 Gyr implies that the age difference between multiple generations of stars in these clusters is less than 0.4 Gyr. This is consistent with the maximum age differences of multiple stellar populations derived for clusters in the LMC. While a 0.4 Gyr age spread is commonly found among intermediate-age clusters (Mackey et al. 2008; Goudfrooij et al. 2015; Milone et al. 2009), some clusters with ages similar to those in our study are consistent with a single stellar population (Milone et al. 2013; D'Antona et al. 2015) or with relatively small age spreads (i.e., 0.08 and 0.15 Gyr; Correnti et al. 2015; Milone et al. 2015). Given the accuracy of age estimation for SCs, these are still consistent with the present results, in which the age difference can be ∼0.1 Gyr for some clusters. Since the accretion of interstellar gas onto clusters does not depend on cluster age, the result strongly suggests that the fresh gas supply required for second-generation star formation most likely originates from gas ejected by stars inside the clusters. All massive stars (more massive than 8 M⊙) would have already exploded as supernovae in SCs older than 0.1 Gyr, and the gas ejected by supernovae is highly unlikely to still remain within the clusters. Some intermediate-mass stars in the clusters are currently dying and ejecting gas through their stellar winds. If the ejected gas can be trapped by the gravitational potential of the cluster, star formation from that gas may be possible. The current star formation rate of the LMC estimated from 299 YSO candidates is 0.06 M⊙ yr⁻¹ (Whitney et al. 2008). This rate is a lower limit because of YSO candidates possibly missed by the observations. If the star formation rate is scaled to the 15 YSOs found in the clusters of our study, it corresponds to 0.003 M⊙ yr⁻¹. This implies that the total mass of stars formed in SCs over the last 0.1-1 Gyr can be as large as 2.7 × 10⁶ M⊙. This mass is quite significant compared to the possible total mass of 9.9 × 10⁶ M⊙ for 0.1-1 Gyr old SCs, assuming a (V-band) mass-to-light ratio of 2.8 for the LMC. The large fraction of later-generation stars formed in clusters is comparable to the observed fractions of second-generation stars inferred from the colour-magnitude diagrams of some LMC clusters. Therefore, the second generation of stars in these clusters could have been formed in the central regions of their original clusters. Given that dusty YSOs evolve into optically visible pre-main-sequence (PMS) stars (De Marchi et al. 2013), the present detection of YSOs implies that there could be PMS stars in the older clusters. PMS stars have been discovered in several SCs in the LMC and Small Magellanic Cloud (SMC) using deep HST observations (De Marchi et al. 2013; Gouliermis et al. 2012), though those clusters are younger than the ones investigated in this paper. These PMS stars are very faint (V > 22 mag) and cannot be investigated with the existing G10 catalogue derived from ground-based observations. If PMS stars were discovered in the old clusters with YSOs, such findings would present additional, irrefutable evidence for the presence of multiple generations of stars in clusters. Since one YSO is detected per SC, the star formation rate is 2.0 × 10⁻⁴ M⊙ yr⁻¹ in each cluster. This low star formation rate implies that the number of PMS stars in each cluster is much lower than in young clusters with initial starbursts.
Nevertheless, it is worthwhile for future observational studies to investigate the numbers and masses of PMS stars in older clusters; such observations would better constrain the ongoing star formation rates there. Our finding also suggests that the gas supply for second-generation star formation cannot originate from young massive stars but must come from old AGB stars. The presence in our sample of low-luminosity clusters (V ∼ 17 mag) that contain YSOs does not support theoretical predictions of a threshold globular cluster mass for second-generation star formation (D'Ercole et al. 2008; Bekki 2011). It is unclear how low-mass clusters can retain ejecta from AGB stars for further star formation. However, if low-mass clusters interact with the cold gas of molecular clouds, accretion can occur (Pflamm-Altenburg & Kroupa 2009). Such cold gas accretion might help low-mass SCs retain some fraction of the AGB ejecta, though this process remains to be investigated.
Effects of the interfacial polarization on tunneling in surface coupled quantum dots

Polarization effects are included exactly in a model for a quantum dot in close proximity to a planar interface. Efficient incorporation of this potential into the Schrödinger equation is utilized to map out the influence of the image potential effects on carrier tunneling in such heterostructures. In particular, the interplay between carrier mass and the dielectric constants of a quantum dot, its surrounding matrix, and the electrode is studied. We find that the polarizability of the planar electrode structure can significantly increase the tunneling rates for heavier carriers, potentially resulting in a qualitative change in the dependence of tunneling rate on mass. Our method for treating polarization can be generalized to the screening of two-particle interactions, and can thus be applied to calculations such as exciton dissociation and the Coulomb blockade. In contrast to tunneling via intermediate surface-localized states of the quantum dot, our work identifies the parameter space over which volume states undergo significant modification in their tunneling characteristics.

I. INTRODUCTION

The interface between a semiconductor quantum dot (QD) and an extended film is an important part of the functionality of many nanostructured devices, with such diverse applications as photovoltaics 1-3, low-threshold lasers 4,5, single-electron memory 6, and single-photon emitters 7,8. Understanding the physics of this interface is important for the ultimate technological challenges in these applications: increasing the energy conversion efficiency of solar cells, controlling speed and volatility in memory devices, and creating reliable on-demand sources of single photons.

In novel photovoltaic concepts, absorption of light by QD chromophores can be followed by the transfer of hot carriers to a nearby electrode. This allows the excess carrier energy to be harvested and can fundamentally change solar cell efficiency. Hot carrier transfer was recently demonstrated for optically excited PbSe QDs coupled to a TiO2 substrate 3. In laser applications, the reverse process is utilized to design the transfer of cold carriers from a quantum well to a QD, reducing the energy deposition by hot carriers and leading to higher gain and lower thresholds 4,5. In addition, rapid progress is being made on creating novel memory devices based on QDs. The large tunability of QDs allows flexibility to construct fast volatile memories as well as non-volatile memories with very long retention times, low electrical damage, and the possibility of charge storage based on holes as opposed to electrons 6. The design strategy of each of these devices must take into account the effects of the local electrostatic environment on tunneling.

Optically driven QD emitters are designed such that carrier tunneling and non-radiative energy transfer mechanisms are slower than the exciton radiative recombination rate, in order to utilize their fluorescence for single-photon generation 9. The reverse process of non-radiative energy transfer from a quantum well to colloidal QDs has also been demonstrated as a way of electrically injecting electron-hole pairs into the dot, where they recombine and emit light 10. Other electrically driven emitters exploit the tunneling of carriers into and out of epitaxial QDs at two different energies to drive the system into electroluminescence 9,11,12.
The local electrostatic environment in both types of emitters plays an important role in determining the energy levels and tunneling rates, which in turn control the frequency of emitted light and the efficiency of these devices. This is becoming ever more important as research progresses towards single-QD emitters 7,8. The tunnel coupling that controls the rate of carrier transfer in all these tunneling-based devices can be heavily influenced by image charge effects in the barrier separating the two components between which the carrier transfer takes place. That this classical force on a non-classical electron in the barrier region is essential for accurately determining tunneling rates is well known in the field of scanning tunneling microscopy 13. Even when tunneling is not an issue, as in optically driven QD emitters, image charge effects are still important, and serve to lower the exciton energy in the QD, as well as to modify its transition dipole moment. This affects non-radiative energy transfer via the transition amplitudes as well as via resonance with surface excitations that absorb or emit this energy. Therefore, electrostatics within and at the interfaces of sub-systems can be one of the key players in each of the above applications of nanostructures.

In this paper, we re-examine a model system consisting of a spherical QD coupled to a planar surface. An exact solution of the Poisson equation, including all image effects induced by the carrier charge distribution, is presented in a form that can be efficiently incorporated into the solution of the Schrödinger equation. Because of this efficiency, a large parameter space can be explored in this model, and we exploit this capability to demonstrate the impact of the image effects on the carrier tunneling rates. In addition to barrier-lowering effects, we find interesting consequences of the interplay between the induced electrostatic potential and the confinement of the particle inside the QD.

The system we have modeled is sketched in Fig. 1, and is represented by a point in the parameter space defined by three dielectric constants (see figure), confinement potentials, and effective masses in each region. To these material parameters, we add the QD radius and its distance from the electrode as two geometrical parameters. We find that this space is best understood when we replace ε_QD by the ratio ε = ε_QD/ε_b, and ε_L by the contrast f_I. Thus f_I = 0 represents no image potential at the electrode interface, and f_I = −1 represents the opposite limit of the image potential due to a metal. Similarly, ε = 1 represents the lack of image potential at the QD surface, while ε → ∞ would represent a metallic nanoparticle as far as electrostatics is concerned.

Figure 1: (Left) The physical setup showing coordinates, dielectric constants, and induced charges due to a point charge (red) at position s, and its image (blue). (Right) The confinement potentials and schematically drawn image potential and wavefunctions in the QD and the electrode region. The potential is calculated at a general point r, and we let r → s to obtain the self energy.

Having defined the strategy of our study in terms of a parameter space, we now connect the most important parameters to real materials and device systems. In Fig. 2, we show a map of the values of ε_b and ε over a space spanned by the QD and the barrier materials.
Thus, for a given choice of these two parameters in the calculations below, a possible set of materials realizing it can be read from the figure. One first identifies the vertical line closest to ε_b to choose a barrier material. The range of QD materials then falls within the overlap of this line with the color representing ε. The values of the barrier dielectric are indicated below the respective materials. At the intersection of two lines, the color indicates the value ε takes for the given combination of QD material and the barrier material the dot is embedded in. The values of the dielectric constants used are those for ω → ∞, since tunneling times are assumed instantaneous.

In Table I, we categorize the aforementioned classes of devices in terms of the ratio ε. Within each class, devices can be found over the entire range of values of the contrast f_I. We present calculations at the two extremes, f_I = 0 and f_I = −1, for each system studied below. However, the choice of f_I for a given electrode can be subtle. In systems with low ε_b, f_I ≈ −1 for a substrate made of metal or an undoped semiconductor. At high ε_b, as in the case of III-V epitaxial systems, f_I ≈ 0 for an undoped substrate, while it may be −1 for metallic electrodes. In the latter case, it is important to realize that the metallic electrodes are often doped semiconductors in which the carrier densities are of the order of 10^18 cm^−3, which yields an average distance between dopants equal to 100 Å. For QDs with diameters smaller than 10 nm, the electrode may be best approximated via the static dielectric constant of its host material. Thus setting the value of the parameter f_I can be subtle in some systems, and its choice must take into account properties such as the dopant densities and the plasma frequency inside the electrode material.

Table I: Various QD-based systems in the parameter space defined by the dielectric ratio, ε, between the barrier and the QD. The classification refers to the most common types of systems. The nanocrystal-based systems may be found with both high and low ε depending on the materials used. Specific devices realized under each system often span the entire range of dielectric contrast f_I between the background and the planar medium.

Low ε (III-V and II-VI systems of epitaxial QD layers): optically and electrically pumped emitters; volatile single-electron memory; QD lasers.

High ε (III-V, II-VI, and Group IV colloidal QDs in vacuum, organic, p-Si, SiO2, and MgS matrices): optically and electrically pumped emitters; energy-transfer pumped emitters; non-volatile single-electron memory; nanocrystal-based photovoltaics.

Let us consider the device examples of Table I in light of the parameter space of our model. The physical quantities important in the interplay of confinement and the induced potential are the effective mass inside the QD, the barrier potential, and the barrier dielectric. This implies a dependence of the tunneling rate on the mass inside the QD, which may at first seem strange, since a Fermi golden rule calculation of the tunneling rate involves only the wavefunctions within the barrier region 14; the dependence of this rate on the mass inside the barrier is well known 5. However, the amplitude of the wavefunction in the barrier region depends on the behavior of the wavefunction inside the QD and at its boundaries, and therefore on the mass inside the QD volume.
The substantial difference in effective mass often seen in semiconductors (including electrons, heavy holes, and light holes) then suggests that the rates for electron and hole tunneling in QD-based structures could also be quite different, as seen for example in models for QD laser structures 5. In fact, depending on the dielectric contrast between the barrier and the QD, we find that image effects can lead to substantial localization of heavy carriers inside the QD with a significant change in tunneling rate. The trend in tunneling rate with carrier mass can thus be substantially altered from the simpler picture found when image effects are neglected. In particular, we identify a regime in which a heavy carrier tunnels at a similar or much higher rate than a light carrier. Interestingly, Mazur et al. point to several experimental measurements in QD laser systems showing more modest differences between these rates for the two carrier types 4 than is expected from a simple WKB 15 analysis 5. While the interpretation of these particular experiments requires a proper accounting of surface states and defect states within the barrier, we find that in a parameter regime somewhat different from the conditions in these experiments, the image effects can also yield similar results.

As is known from elementary electrostatics, the polarization induced at an interface of two materials is directly proportional to the dielectric contrast between them 16. In the experiment of Tisdale et al. 3, the dielectric constant of the TiO2 electrode is 8.4 and that of the PbSe QD is 22.4 at high frequencies 17. On the other hand, the organic materials used to passivate the QDs, hydrazine and ethanedithiol, both have an optical refractive index of 1.5 17. Thus, approximating the high-frequency dielectric constant (ε_∞) within the barrier region to be 2.3, substantial image effects are expected at both the electrode and the QD surface. In our model, this corresponds to high ε and f_I → −1. We find that this system should be in the parameter regime in which heavier particles tunnel faster than lighter ones. Furthermore, the most interesting materials for the QD chromophore in photovoltaic systems generally have small bandgaps, and therefore large dielectric constants. These are generally colloidal QDs passivated by organic materials with relatively low dielectric constants at high frequencies. Therefore, these systems lie in a regime where the effects of interfacial polarization are maximized. A range of behaviors can thus result from the interplay of confinement due to the barrier potential and the QD dielectric constant relative to that in the barrier.

Laser systems based only on III-V semiconductors are in a region where the dielectric contrast between each pair of materials is low, and thus the interfacial polarization is small. In our model, this corresponds to low ε and f_I → 0. On the other hand, non-volatile single-electron memory systems generally contain narrow-gap nanocrystals (e.g. Si, InAs, GaSb) embedded inside wide-gap layers (e.g. III-nitrides, AlAs, silica) for greater reliability. Therefore, they lie in a regime where the interfacial polarization may be large, much like that for the photovoltaic systems discussed above. However, unlike photovoltaic systems, these devices often have tunable potentials, with barriers that are often thicker than 10 nm. Therefore, only the image effects at the QD/barrier interface are important, or in terms of our model parameters: a high ε and f_I → 0.
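As an illustration, the system of Tisdale et al. can be placed in this parameter space with a few lines of Python; the contrast formula used for f_I is an assumption on our part for illustration, since the text fixes only its limits f_I = 0 (matched) and f_I = −1 (metal):

```python
# Locating the PbSe/organic/TiO2 system of Tisdale et al. in the (eps, f_I) space.
# The contrast formula f_I = (eps_b - eps_L)/(eps_b + eps_L) is our assumption.
eps_qd, eps_b, eps_electrode = 22.4, 2.3, 8.4   # high-frequency values quoted above
eps = eps_qd / eps_b
f_I = (eps_b - eps_electrode) / (eps_b + eps_electrode)
print(f"eps = {eps:.1f} (high), f_I = {f_I:.2f} (toward -1)")
```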
Quantum dot emitters are often embedded in high-dielectric materials, and therefore have little dielectric contrast with their surroundings. This limits the light collection efficiency, and there is thus a growing interest in systems with QDs coupled to optical fibers 18, or embedded in host materials such as photonic crystals 19, MgS 20, and porous silicon 21. In addition, large confinement potentials in these cutting-edge systems would also remove the limitation of operating only at low temperatures. From the foregoing discussion, and the details presented below, these types of systems are expected to show large interfacial polarization at the QD boundary, which corresponds to a high ε in our model. Furthermore, f_I ≈ 0 for optically pumped emitters, since they do not depend on a nearby charge reservoir. In electrically pumped emitters, f_I may take values between 0 and −1, but the electrode properties must first be carefully examined to set a particular value, as we emphasized in the above discussion. In novel energy-transfer emitters 10, passivated colloidal QDs lie on the surface of a dielectric, resulting in a value of −1 < f_I < 0. Thus we expect QD emitters to span a large region in the parameter space of our model, and to exhibit a rich interplay between polarization and confinement affecting the binding energy, fluorescence, and electroluminescence in these systems.

Our paper is organized as follows. In Sec. II, we begin with the Schrödinger equation for a single electron inside the system shown in Fig. 1(a), and derive explicit expressions for its potential energy and its rate of tunneling between the QD and the electrode. The mathematical details of the derivation are discussed in the Appendix. In Sec. III, we apply this theory to computationally explore the parameter space in the manner described above. In particular, we present the crossover of trends in tunneling rates for various combinations of parameters, and interpret this behavior in terms of the interplay discussed above. Following this section are our conclusions and remarks on the use of our results inside more sophisticated ab-initio calculations of coupled Schrödinger and Poisson equations.

A. Electronic states

The basic features of many devices can be modeled with the QD treated as a dielectric sphere and with the electronic structure of the device treated via an effective mass approximation. For a one-band model 22,

−(ħ²/2m₀) ∇·[1/m(r)] ∇ψ + [V(r) + (e²/4πε₀ε_b a) Σ(r)] ψ = E ψ,   (1)

where a is the QD radius, ε_b is the dielectric constant of the barrier region, m(r) is the ratio of the effective mass to the bare mass m₀, V(r) is the confinement potential, and Σ(r) is the electrostatic self-energy in dimensionless form. The electrostatic self-energy of a charge in an isolated spherical QD was first derived by Brus 23 for the infinite spherical well model, and then by Bányai et al. for a finite well 22. Here we generalize these results by accounting for the geometric progression of image charges induced on the electrode and the QD surface. The potential energy is derived based on the re-expansion of spherical harmonics 24 around a shifted origin. This approach may be found in several sources, including recent work on the plasmonics of nanoparticles supported on a substrate 25. As appropriate for the optical domain, the solutions presented there apply to an electric field uniform over the particle. Solutions to the Laplace equation within the space bounded by axially symmetric charge distributions in the same geometry have also been given 26.
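For intuition about Eq. (1), a minimal one-dimensional finite-difference sketch of the effective-mass equation with a position-dependent mass (the BenDaniel-Duke form of the kinetic term) is given below. The well width, depth, and masses are illustrative, and the electrostatic self-energy term is omitted; this is a sketch of the discretization, not the paper's solver.

```python
import numpy as np

# 1D effective-mass Schrodinger equation with position-dependent mass m(z):
#   -(hbar^2/2m0) d/dz [1/m(z) dpsi/dz] + V(z) psi = E psi
hbar2_2m0 = 0.0381                            # eV nm^2, hbar^2/(2 m_0)
z = np.linspace(-10, 10, 1001)                # nm
dz = z[1] - z[0]
V = np.where(np.abs(z) < 2.0, 0.0, 0.8)       # illustrative 0.8 eV barrier, 4 nm well
m = np.where(np.abs(z) < 2.0, 0.1, 1.0)       # illustrative mass ratio m(z)

m_half = 0.5 * (m[:-1] + m[1:])               # mass at grid midpoints
main = hbar2_2m0 / dz**2 * (np.r_[1/m_half, 0] + np.r_[0, 1/m_half]) + V
off = -hbar2_2m0 / (dz**2 * m_half)
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)   # symmetric Hamiltonian

E, psi = np.linalg.eigh(H)
print(f"lowest bound-state energy: {E[0]:.3f} eV")
```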
We present a solution applicable to an arbitrary charge density, as needed for the self energy in the Schrödinger equation. We first consider the potential at r of a source charge at s, and write it as a superposition of four sources: the point charge, the multipole moments of the dielectric sphere, the image of the point charge in the plane, and the image of the multipole moments in the plane (see Fig. 1). Subjecting the contribution from the surface charge densities to a multipole expansion, we write the potential as Eq. (2), where s_I = s − 2(s·ẑ)ẑ − hẑ is the position vector of the image charge. The dimensionless function q(s) is fixed by requiring that the total charge in the real half space equal q in units of the elementary charge e. This function allows us to isolate the effects of dielectric screening of the charge in a continuous medium when the point s lies inside the sphere, and is given by Eq. (3).

In the second and third terms of (2), the functions Y_lp(r̂) are spherical harmonics written as a function of the direction vector. The second term corresponds to the surface charge density in real space, while the third term is due to the image of this charge density. In this term, we have already accounted for the change in the sign of the multipoles in the image space, which depends on l and p, where the integer l is the total angular momentum and p the azimuthal quantum number. We find the self-consistent multipole moments Q_lp(s) by enforcing the electrostatic boundary conditions at the surface of the sphere, as shown in Appendix A. Imposing axial symmetry of the complete system uncouples the different p, and the Q_lp(s) for each p satisfy a set of linear equations indexed by l, Eq. (4), where ε = ε_QD/ε_b. The matrix elements A^p_ll' depend on the radius of the sphere and the distance between the center of the sphere and its image (Fig. 1); they are given by Eqs. (5) and (6). The details of this entire derivation are provided in Appendix A.

The above equations for the self-consistent multipole moments couple the moments for all angular momenta l to each other. In practice, truncation must be applied by choosing l < l_max, and increasing l_max until the final result converges. We found that l_max > 15 is sufficient in all cases we have applied this model to, and generally set l_max = 50.

Equations (2)-(6) can be applied immediately to the interaction energy of two charged particles, e.g. for the treatment of excitons, by considering one charge placed at s and using Q_lp(s) to calculate the potential at r, the location of the second charge. In the following, we focus only on the single-particle self-energy Σ and its consequences for tunneling. We write Σ as a sum of two contributions, Σ_0 + Σ_1, where Σ_0 is the interaction energy without the planar surface (ρ = r/a), and Σ_1 is the correction due to the electrode surface. Note that in these expressions, both series start at l = 1, because the monopole contribution from the induced charge on the image sphere is explicitly taken into account by the last term. In the derivation of the formula for Σ_0, we followed Bányai et al. 22 to separate the divergent terms of the multipole series and sum them to identify closed-form expressions that can be regularized (see Appendix B). The form of our expressions is slightly different from theirs due to the zeroth-order multipole contribution, written explicitly in our formula, while it is subsumed into the series in theirs.
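The truncation procedure can be sketched as follows. Here build_A and build_rhs are placeholder stubs, since the actual matrix elements of Eqs. (5)-(6) are not reproduced in this section; only the solve-and-converge loop is the point of the sketch.

```python
import numpy as np

# Sketch of the l_max truncation strategy for the coupled multipole equations (4):
# solve the truncated linear system at increasing l_max and stop once the result
# converges. build_A and build_rhs are stubs standing in for Eqs. (5)-(6).
def build_A(l_max):
    l = np.arange(1, l_max + 1, dtype=float)
    return np.eye(l_max) + 0.05 / np.outer(l, l)     # placeholder coupling, decays in l

def build_rhs(l_max):
    l = np.arange(1, l_max + 1, dtype=float)
    return 1.0 / l**2                                # placeholder source terms

prev = None
for l_max in (5, 15, 30, 50):
    Q = np.linalg.solve(build_A(l_max), build_rhs(l_max))
    total = Q.sum()                                  # stand-in for the assembled potential
    if prev is not None:
        print(f"l_max = {l_max:2d}: change = {abs(total - prev):.2e}")
    prev = total
```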
It represents the part of the energy stored in a dielectric medium of infinite extent when a charge is embedded inside it; here this energy is for the QD dielectric relative to the barrier medium. In both of the above formulas, S_l(ρ) equals ρ^l for ρ < 1 and ρ^−(l+1) otherwise.

Since macroscopic electrostatics is being used in this model, the material parameters at the QD boundary are taken to follow an abrupt profile. In the case of the effective mass, we handle the discontinuity numerically using the ghost fluid method 27. In the case of the electrostatic potential, the self energy diverges at the boundary, which could be regularized by the microscopic profile of the potential. Since the present paper does not include defects of the atomic-scale potential, we do not compute this potential and instead continuously join the self energy with the confining potentials of the electrode and the QD relative to the background medium 28. We do so by linearly interpolating Σ between its computed values within one lattice spacing inside and outside the QD interface. At the remaining interface, the planar surface, we set the image charge interaction to the form of Eq. (9), where η is chosen to yield a specified potential at the image plane. Results are generally insensitive to η for a sufficiently deep surface potential, because we focus only on wavefunctions that are decaying exponentially in this region.

B. Tunneling rates

We now consider tunneling of a single electron from the QD to the electrode. We model the electrode as a thick square well of finite depth with respect to the barrier at z = −h/2, and an infinite potential wall at the opposite end z = −h/2 − w, letting w → ∞ in the final calculation to effect a semi-infinite substrate. Thus, letting k be the in-plane wavevector and k_z the transverse wavenumber, we write a complete set of basis functions of the electrode in a form where A normalizes ψ_L to the volume of the electrode (in the limit of a semi-infinite electrode below, the volume drops out of the final expressions) and tan δ = k_z/κ. In this simple model, the numbers k_z, κ, and k are all real valued, and a state at a given energy E satisfies the dispersion relation (12) and is admissible only if k_z² > 0 and κ² > 0. In the expressions above, we have introduced m_L as the effective mass inside the electrode.

More realistic models, such as those with surface states, will be necessary to obtain accurate rates. Regardless of the model used, the states would still decay exponentially perpendicular to the surface, and would be described by a (lattice) translational symmetry along the electrode surface 29. In addition, the states enter into our tunneling rate below in such a way that only the wavefunction outside of the electrode is required; above, we have included the simplified form in the z < −h/2 region for completeness. Therefore, at the length scale used in the calculations below, the matrix elements between the QD states and the basis states of the electrode are captured well by the above model. However, significant quantitative differences would arise due to changes in the density of states, as well as additional modes due to the surface states.

Having defined the basis set for the final state of the electron escaping the QD, the tunneling rates are computed using the Fermi golden rule, with E_kz,k = ħ²(k_z² + k²)/2m the energy of the electrode basis state. The matrix element is calculated using Bardeen tunneling theory, in which it is implicitly assumed that the wavefunctions of the two subsystems are orthogonal.
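The interpolation step described above for regularizing Σ at the QD surface amounts to the following; the divergent profile used here is artificial, for demonstration only.

```python
import numpy as np

# Regularizing Sigma(rho) at the QD surface rho = 1, as described above: grid
# values within one lattice spacing of the surface are discarded and bridged
# by linear interpolation between the last clean points on either side.
def regularize_sigma(rho, sigma, drho):
    near_surface = np.abs(rho - 1.0) < drho          # points adjacent to the boundary
    clean = ~near_surface
    return np.where(near_surface,
                    np.interp(rho, rho[clean], sigma[clean]),
                    sigma)

rho = np.linspace(0.5, 1.5, 101)
sigma = 1.0 / np.abs(rho - 1.0 + 1e-12)              # artificial divergence at rho = 1
print(regularize_sigma(rho, sigma, drho=0.02)[48:53])
```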
Enforcing this assumption yields Eq. (14). Here H_L is the Hamiltonian for the electrode in isolation from the QD, the eigenstates of which are the above ψ_L(k_z, k; r). The Bardeen tunneling matrix element is traditionally written as a surface integral. This form can be obtained by use of the divergence theorem, but the formula (14) is more convenient for our calculations. However, the aspect of Bardeen theory that is essential in this expression is the orthogonality assumption, which yields a matrix element of the difference between energies. We obtained this form by adding and subtracting H_L to H, and then exploiting the relation ⟨ψ_L;kz,k|H_L|ψ_D⟩ = E_L;kz,k ⟨ψ_L|ψ_D⟩ ≈ 0. The subtraction is advantageous because it naturally restricts the integration domain in (14) to the volume outside the electrode, and thus allows the possibility of computing ψ_D and ψ_L in separate calculations, in which one would focus on the QD and electrode properties, respectively. The subtracted Hamiltonian H_L is non-vanishing in calculations in which the termination of the potential at the surface occurs smoothly into the junction, as is the case in atomic-scale modeling. In the present calculation, H_L describes a square well and thus makes no contribution outside the planar surface.

From (12), we see that κ ∝ √m_b, which in turn is proportional to log γ. Remarkably, we find that the tunneling rate also shows a similar dependence on the mass inside the QD. While this is not a fundamental relationship, we find that when the tunneling rates follow a monotonically decaying profile with respect to m, they can be described well by an expression of the form γ₀ = α exp(−τ m^β), where α, β, and τ depend on the parameters of our physical model.

Finally, we remark that in our calculations we compute the electrostatic potential and the QD states using the high-frequency dielectric constants for each material (commonly denoted ε_∞). This is because we ignore the time dependence of a single tunneling event, and thus model tunneling as instantaneous. To be consistent with this viewpoint, or this regime of tunneling, we must take the material response to be that at high frequency. Thus, the initial and final states of the above matrix element are computed using ε_∞. We now turn to the results of our calculations employing the above model.

A. Potential and wavefunctions

We begin by plotting the electrostatic potential, and the QD wavefunctions that result from it, to identify the key manifestations of the interplay between confinement (via mass) and the induced polarization. Our choice of parameters in this subsection is made to obtain the clearest possible visualization of the impact of this interplay. Thus, we have selected a relatively low ε_b = 2.5 and a barrier potential of 0.8 eV to best capture the qualitative aspects of the self energy and wavefunctions simultaneously within the QD volume and the barrier region. We verify below that the trends in the tunneling rates, based on the physical intuition derived here, continue to hold in more realistic parameter regimes. The distance between the QD and the electrode is chosen such that the effective junction width (the smallest distance between the two surfaces) is about 1 nm. At the microscopic scale, this width may be understood as the distance between the image planes of the two surfaces. Figure 3 displays the contour plot of the dimensionless self energy in the equatorial plane of the QD, which lies perpendicular to the electrode surface.
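Assuming the functional form given above for Eq. (15), the parameters α, τ, and β can be extracted from computed rates by a fit in log space; the data below are synthetic, generated with β = 1/2.

```python
import numpy as np
from scipy.optimize import curve_fit

# Fit of the monotonic mass dependence gamma_0 = alpha * exp(-tau * m**beta)
# (our reading of Eq. (15)), done in log space for numerical stability.
def log_gamma(m, log_alpha, tau, beta):
    return log_alpha - tau * m**beta

m = np.linspace(0.05, 1.0, 20)
rates = 1e9 * np.exp(-8.0 * np.sqrt(m))                  # synthetic gamma_0(m)
popt, _ = curve_fit(log_gamma, m, np.log(rates), p0=(20.0, 5.0, 0.7))
log_alpha, tau, beta = popt
print(f"alpha = {np.exp(log_alpha):.2e}, tau = {tau:.2f}, beta = {beta:.2f}")  # beta ~ 0.5
```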
We observe that the junction potential is reduced significantly in a narrow region between the two surfaces. The reduction is clearer in the lower panel of Fig. 3, where the potential is plotted along the z-axis for various dielectric ratios ε. Outside the QD, the potential becomes more attractive as ε increases, while inside the QD the boundary becomes more repulsive. Within the volume of the QD, the slope of the potential, resulting from the image charges on the electrode, is reduced as a result of dielectric shielding. The narrow attractive potential straddling the outside of the QD is due to the image potential at the QD boundary, and it becomes asymmetric, with greater depth on the side facing the electrode, due to the image charges on the electrode. Below, we will see that this asymmetry partially attracts the charge close to the electrode, and shifts the center of mass of the wavefunctions towards the electrode. Therefore, it could significantly affect tunneling, especially for high ε, which results in a deep potential well. Since this attractive potential is beyond the atomic-scale variations in the potential at the surface of the QD, we must distinguish it from the surface traps formed at defects. Therefore, in order to keep the terminology clear, we call this the image potential trap (IPT), following Bányai et al. 22.

For quickly estimating the self energy effects, it may be convenient to approximate Σ as simply the sum of Σ_0 and V_img, the potential of the image charge in (9). The results derived in the previous section allow us to quantify the error in this approximation. In the right-hand panel of Fig. 3, we plot this error, which is the contribution of the electrode's correction to the self-consistent multipoles corresponding to the total self energy plotted in the left-hand panel. It is clear from the plot that the extra contribution is significant only within the junction, and the correction caused by the image effect to the dipolar and higher-order field of the QD is mostly quantitative in Σ and less than 30% in most of the space. The size of this correction increases with the QD size, as a larger fraction of the charge on its surface comes close to the electrode and the potential approaches that of a charge between two parallel plates. The contribution decreases as ε → 1, and saturates as (ε − 1)/(ε + 1) → 1. This may be valuable in design considerations involving charged quantum dots. For example, in systems such as epitaxial III-V QDs, the multipole contribution is small, while for inorganic colloidal QDs on planar surfaces and surrounded by vacuum or organic solvents, the contribution could approach its saturation value.

In Fig. 4(a) we plot the contours of the lowest-energy electron wavefunctions in the equatorial plane for parameter choices corresponding to the two categories in Table I above: low and high values of ε. For each ε, we plot results at two extreme values of f_I: a matched dielectric at the planar interface boundary with no image effect (f_I = 0), and a metallic boundary with the full image effect (f_I = −1). The image effect due to the metallic boundary leads to increased amplitude in the junction and an overall shift of the center of mass towards the electrode. The center-of-mass shift is greater at low shielding, which can also be seen in Fig. 4(a), as the center of mass moves back towards the geometrical center of the dot when ε increases from 1.2 to 4.0.
At the same time, the larger ε yields a much higher amplitude outside the QD, a result of the stronger barrier lowering and the asymmetry due to the IPT outside the QD. In the lower panel of Fig. 4(b), we see that a small peak in the wavefunction forms within the IPT at higher ε. Increasing ε further eventually leads to full localization within this region, but at that point our model would lose its physical meaning. We see here that there is a competition between localization within the QD volume and within the IPT, both of which increase with ε. It is the mass of the particle that determines which of these effects wins. The intermediate regime of the two competing effects occurs when the Bohr radius, 4πε₀ħ²e^−2(ε_b/m), matches the length scale of the IPT.

The basic effect of the effective mass on localization is discussed and illustrated in the next section (also Fig. 6 below), and it can be briefly stated as follows. At low m, we expect less localization within both the QD and the IPT. At higher m, which is still small enough for significant amplitude outside the QD volume, partial localization in the IPT would increase, which would raise the tunneling rate due to the larger amplitude close to the junction. On the other hand, for sufficiently large m, ψ_D would become either highly localized inside the QD volume, or localized completely within the IPT. Therefore, focusing on the volume states at first, the tunneling rate would be expected to decrease beyond a certain m. This suggests that a maximum in the tunneling rates must exist, which is confirmed by the calculations below. If we also consider raising ε, the deeper IPT is able to sustain the upward trend in tunneling rates up to larger m, and would thus push the maximum to larger m, and saturate the rates as a result of complete trapping on the surface. The latter case is beyond the present model, and must include atomic-scale contributions to the potential.

B. Tunneling rates

We now discuss calculations of tunneling rates for the ground QD states at low barriers (1.0 and 1.5 eV), and for both the lowest and excited states at high barriers (4.5 eV). The low barriers are chosen mainly to illustrate how the physical intuition gained in the previous subsection affects tunneling, while the high-barrier cases are more realistic. Note that one can always consider higher-energy states to effectively reduce the barrier height, but as discussed below, this leads to complicated profiles of the wavefunctions and obscures the trends that exist for the lowest states.

Figures 5-7 show plots of γ₀ as a function of the effective mass of the particle at low and high values of ε, where the figures correspond to barrier heights (in eV) of 1.5, 1.0, and 0.6, respectively. In each figure the value of the barrier dielectric constant is ε_b = 2.5, and the effective mass in the barrier is m_b = 1. We calculated the rates assuming an isotropic parabolic energy band within the electrode surface having an effective mass of 0.067, which roughly mimics the conduction band of GaAs 30. The rates would scale in magnitude for a different density of states for the electrode, as was discussed previously.

Low barriers

In Fig. 5, V_b and ε_b are both high enough that any partial localization in the IPT is suppressed. Recalling our phenomenological expression (15), we find that at the low ratio ε = 1.2, the rate decays with β ≈ 1/2, and τ is given approximately by an effective reduction in the junction width due to the image plane of the QD.
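The effective Bohr radius quoted above is easily tabulated; the (ε_b, m) pairs below are illustrative.

```python
# Effective Bohr radius a_B = 4*pi*eps0*hbar^2*eps_b/(m*e^2) = 0.0529 nm * eps_b/m,
# used above to locate the crossover where the carrier length scale matches the
# width of the image potential trap (IPT).
a_B_hydrogen = 0.0529                      # nm
for eps_b in (1.6, 2.5):
    for m in (0.1, 0.5, 1.0):
        a_B = a_B_hydrogen * eps_b / m
        print(f"eps_b = {eps_b}, m = {m:4.2f}: a_B = {a_B:.3f} nm")
```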
The deviation in the trend is very slight when the matched electrode is replaced by a metallic one, but the total rate increases by two orders of magnitude due to the significant lowering of the junction potential by the image potential of the electrode. However, when ε increases to 4 and beyond, as in panels (c) and (d), the trend also becomes closer to β ≈ 1. This is essentially the regime where the heavy particle is repelled by the self-energy barrier inside the QD much more effectively than it is localized by the IPT.

When we lower the barrier height to 1.0 eV, a qualitative change occurs, as shown in Fig. 6(d) for ε = 8: the escape rate increases instead of decreasing with increasing effective mass. In fact, there is a maximum at m = 0.85, beyond which the rate decreases slowly. This result is the numerical example supporting the discussion at the end of the previous section, where we argued that, for sufficiently deep IPTs, the escape rate can increase with mass, or show a maximum.

When V_b and ε_b are both large enough that the effect of the IPT on the wavefunction is negligible, we expect the dependence of γ₀ on m to follow that of the energy of the lowest state. This can be understood from the binomial expansion of (12), which gives log γ₀ ∼ −2κd with κ = √(2m_b(V_b − ΔE_n))/ħ, where d is an effective junction width and ΔE_n is the energy measured from the bottom of the QD potential well. In Fig. 8 we plot the energy of the lowest state with respect to the bottom of the potential well as a function of effective mass for three barrier heights. Comparison of the plots in this figure with those in Figures 5, 6, and 7 shows that the trend of monotonically decreasing log γ₀ coincides with that of the energy E₀ of the QD state. Thus we see that, for tunneling, the barrier potential is an important parameter determining the significance of the interplay of electrostatic self energy and effective mass. The barrier dielectric is also significant, as it scales the magnitude of the self energy within the barrier.

The most striking result above is a crossover in the dependence of the tunneling rate on the effective mass: it switches from decreasing with m to increasing as the barrier height is lowered. In order to visualize the extent of this crossover, we plot in Fig. 9 the tunneling rate as a function of mass for a range of barrier potentials, for a dielectric-matched as well as a metallic electrode. The existence of the crossover can be clearly identified in the figure on the right side.

High barriers

It appears from the previous calculations that a potential energy of 1 eV yields a sufficiently high confinement to suppress any interesting behavior such as the crossovers in the tunneling rates. However, in these calculations the barrier dielectric is 2.5. In systems such as emerging photovoltaics, the barrier is much closer to a vacuum than to a solid-state material, even when the QDs in these systems are passivated by organic solvents with a low refractive index. In the case of vacuum, the self energy in the barrier increases by almost a factor of 3 compared to its value in the previous calculations, accounting for both the overall scaling and the increased ε for a fixed QD material. This allows the crossover phenomenon to exist for barrier heights of up to 4-5 eV, which are roughly the work functions of the semiconducting QD materials. In Fig. 10, we plot the tunneling rates over the plane defined by (ε_b, m) at a fixed ε = 8.0.
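A sketch of this WKB-style estimate follows; the level energy ΔE_n and junction width d are illustrative stand-ins for the computed QD spectrum.

```python
import numpy as np

# WKB-style estimate behind log(gamma_0) ~ -2*kappa*d used above, with
# kappa = sqrt(2 m_b (V_b - dE_n)) / hbar.
hbar2_2m0 = 0.0381                                   # eV nm^2, hbar^2/(2 m_0)
def log_attenuation(V_b, dE_n, d=1.0, m_b=1.0):
    kappa = np.sqrt(m_b * (V_b - dE_n) / hbar2_2m0)  # decay constant, 1/nm
    return -2.0 * kappa * d

for V_b in (0.6, 1.0, 1.5, 4.5):
    print(f"V_b = {V_b:3.1f} eV: log gamma_0 ~ {log_attenuation(V_b, dE_n=0.2):+.2f}")
```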
In the absence of the image effect from the electrode (left plot), the rate decreases with mass at all ε_b, and the decrease becomes more rapid as ε_b increases. However, in the presence of the metallic electrode (right plot), the rate crosses over from decreasing to increasing with mass as ε_b → 1 from above. In fact, as ε_b decreases, the rate first exhibits a weak maximum, characteristic of the competing effects of localization inside and outside the QD volume, as discussed previously. The slow rates in these figures are due to the low density of states (DOS) in the electrode model, and as noted earlier, they would be scaled significantly by a metallic DOS and thinner barriers, but without much effect on the trends in the tunneling rates.

Finally, we remark that the trends with respect to effective mass also occur for higher-energy states. As an example, we show in Fig. 11 the rates for the second-lowest state in the present system. We note that for fixed ε and m, there is no substantial difference in the rates between a dielectric-matched and a metallic electrode. This is due to the fact that the states involved are deep, and therefore the wavefunction amplitude in the junction is small compared to its values inside the QD. However, the effect of the electrode is similar to that in the lowest state, with the characteristic increasing tunneling rate with respect to the effective mass inside the QD. The important difference between the two cases is that the rate for the higher state shows a minimum rather than a maximum. However, the simple interpretation for the trends in the lowest, nodeless, state does not carry over to the excited states directly. This is because the nodes in the excited states act as extra degrees of freedom that determine the wavefunction amplitudes. At even higher energy, especially close to the top of the potential well of the QD, the wavefunction acquires a large number of nodes, with a spatial scale reaching below the spatial scale of the image potential. The rates then fluctuate heavily due to the nodal planes moving across the IPT as ε is varied. Thus, no obvious trends can be found in this energy range. In addition, microscopic contributions to the potential must be computed in this regime to obtain a robust and reliable trend.

Figure 9: Surface plot of escape rates over the two-dimensional space spanned by barrier potential and effective mass. The lines show the rate as a function of mass as the barrier potential is changed. For the dielectric-matched electrode (left), the rate is clearly monotonically decreasing, while for the metallic electrode (right), the dependence of the rate on mass changes from monotonically decreasing to monotonically increasing as the barrier potential is lowered. The remaining parameters are ε_b = 1.6, m_b = 1, ε = 8.0, radius a = 4 nm, and h/2 = 5 nm.

IV. CONCLUSION

In summary, we have presented an exact solution for the electrostatic potential of a point charge in a quantum dot lying close to a planar surface. We have found that at low barrier heights (less than 1 eV) there is a significant competition between localization of the particle within the volume of the QD and its localization within the surface trap formed by the image potential at the QD surface. This leads to interesting behavior of the tunneling rate as a function of effective mass, even reversing the trend for larger dielectric contrast between the barrier and the quantum dot. At a lower barrier dielectric constant, this trend continues to exist for much higher barriers of up to 4.5 eV.
While the geometry of our model is idealized, the trends may continue to be important in the absence of significant geometric perturbations, such as protrusions or surface states, that may introduce entirely new degrees of freedom relevant to the problem at hand. The basic competition between volume and surface localization will continue to exist in more complex quantum dots as well, and our confirmation of its basic consequences in an idealized model is an important guide to what may occur in those cases. Thus, ours is a useful model for an initial attack on a design problem, or for the analysis of an experiment in tunneling devices. The image effects illustrated here can also be implemented in more complete models at the expense of additional computational effort. The simple model used here can be generalized to treat two charges, allowing the analysis of the influence of the junction on excitons in a QD. In some cases, the optical dipole moment itself may result from the image force, in which case the multipole fields included here become essential. Finally, the polarization effects discussed here can play an important role in other Coulomb-interaction-induced phenomena, such as Fermi-edge singularity effects in optically excited QDs in close proximity to an electrode 31.

Figure 10: Surface plot of escape rates over the two-dimensional space spanned by barrier dielectric and effective mass. The lines show the rate as a function of mass as the barrier dielectric is changed. For the dielectric-matched electrode (left), the rate is clearly monotonically decreasing, while for the metallic electrode (right), the dependence of the rate on mass changes from monotonically decreasing to monotonically increasing as the barrier dielectric is lowered. The remaining parameters are: barrier height V_b = 4.5 eV, m_b = 1, ε = 8, radius a = 3 nm, and h/2 = 4 nm. The effective masses and the barrier heights in these figures may represent systems of II-VI or III-V colloidal quantum dots with the host matrix corresponding to vacuum, p-Si, phosphate glass, MgS, or various organic passivating materials (see Fig. 2).
A Wideband Circularly Polarized Antenna with Characteristic Mode Analysis

A wideband circularly polarized (CP) antenna is presented to achieve enhanced impedance, axial ratio (AR), and gain bandwidths. The antenna consists of two circular patches, a split-ring microstrip line with six probes, and a circular ground plane. By using these six probes, which are placed in sequence on the split-ring microstrip line, the operating bandwidth of the proposed antenna is increased. The characteristic mode method is used to analyze the different modes of the antenna and reveal the mechanism for extending the 3-dB AR bandwidth. Measured results show that the proposed antenna obtains an impedance bandwidth of 1.486-2.236 GHz (40.3%) for S11 ≤ −18 dB, a 3-dB AR bandwidth of 1.6-2.2 GHz (31.6%), and a boresight gain of 8.89 ± 0.87 dBic.

Introduction

With the development of many wireless systems, such as radar, satellite communication, remote control, and telemetry systems, there are more and more applications for circularly polarized (CP) antennas, because CP antennas allow for reduced multipath fading, avoidance of polarization mismatch, and better weather adaptability. A CP annular-ring microstrip antenna using an equal-split power divider is proposed in [1], which has a 3-dB axial ratio (AR) bandwidth of 6%. A CP antenna fed by an L-probe is presented with a 3-dB AR bandwidth of 9%, which uses two stacked folded patches [2]. A square ring slot including four branch slots is applied to produce circular polarization, and a 3-dB AR bandwidth of 8.7% is achieved [3]. However, there is a growing demand for antennas with a wide AR bandwidth. Therefore, it is necessary to investigate methods of extending the AR bandwidth.

In [4], an air gap is applied to a stacked patch with a single probe feed to enhance the AR bandwidth (20.2%). A coplanar parasitic ring slot patch is introduced to achieve good circular polarization performance with a 3-dB AR bandwidth of 16% [5]. By using an H-shaped patch and a probe in conjunction with a printed monopole, a 3-dB AR bandwidth of 19.4% is obtained [6]. Based on meandering feed structures, patch antennas have been studied to achieve wide 3-dB AR bandwidths [7-9]: 3-dB AR bandwidths of 13.5% [7], 22.4% [8], and 16.8% [9] are achieved by using a horizontally meandered strip, a 3D meandering strip, and a printed meandering probe, respectively. In [10], a driven patch with a square ring, a 270° loop of microstrip line, and an L-shaped parasitic patch is used to excite four square parasitic patches to achieve a 3-dB AR bandwidth of 28.1%. By using a differentially fed method, a 3-dB AR bandwidth of 31% is obtained [11]. The CP antenna using an n-shaped proximity-coupling probe configuration has impedance and 3-dB AR bandwidths of 25% [12]. In [13], a quasi-magnetic-electric CP patch antenna with a single feed is studied, which can achieve a 3-dB AR bandwidth of 15.3%. An H-shaped microstrip patch and a reactive impedance surface are applied to improve the 3-dB AR bandwidth to 27.5% [14].

Sequential feed methods have also been proposed to achieve good circular polarization performance [15-20]. In [15], four cross slots fed via a microstrip feed line with multiple matching segments are used to excite a square patch, and a 3-dB AR bandwidth of 16% is achieved. Four asymmetric cross slots and a microstrip ring with eight matching segments are applied to obtain an enhanced 3-dB AR bandwidth of 26% [16].
Two coin-shaped patches and a sequential feed structure using four probes are presented in [17], and a 3-dB AR bandwidth of 15.9% is achieved; the 3-dB AR bandwidth is enhanced by using a ring-shaped strip inlaid along the edge of the parasitic patch and two square holes in the centers of the main patch and the parasitic one. In [18], a sequential feed structure using four probes connected to a microstrip feed line is applied to achieve a wide 3-dB AR bandwidth (16.4%). Two probes connected to a horizontal L-shaped strip serve as a sequential feed structure, and a shorting pin is loaded to enhance the 3-dB AR bandwidth (17.9%) [19]. In [20], four slot elements fed by a 4-way sequential-phase feeding network are proposed to achieve a 3-dB AR bandwidth of 15.6%.

In this paper, a wideband right-hand circularly polarized (RHCP) antenna with two circular patches and a sequential feed structure is presented. By increasing the number of probes of the sequential feed structure from 4 to 6, the impedance and 3-dB AR bandwidths are improved. The effects of the number of probes on the antenna performance are analyzed by the characteristic mode (CM) method. The proposed antenna exhibits a 3-dB AR bandwidth of 1.6-2.2 GHz (31.6%) and good impedance matching performance in the band of 1.486-2.236 GHz (40.3%) for S11 ≤ −18 dB.

The diameters of patch 1 and patch 2 are D1 and D2, respectively. The distance between the two patches is h3. The inner and outer radii of the split-ring microstrip line are r1 and r2, respectively. The beginning of the split-ring microstrip line contains a rectangular section with length l1; the rectangular section is introduced to enhance the impedance bandwidth. The distance between patch 1 and the split-ring microstrip line is h2. The split-ring microstrip line, patch 1, and patch 2 are printed on the substrate F4BMX220 with εr = 2.2 and tanδ = 0.0007. Substrate 1, substrate 2, and substrate 3 have thicknesses of 0.5 mm, 1 mm, and 1 mm, respectively. The diameter of the three substrates is rsub. Six probes with a diameter of 2.8 mm are used to feed patch 1; they are placed in sequence on the split-ring microstrip line and distributed along an arc of radius rp. The angle between two adjacent probes is α. The number of probes and the angle α greatly affect the CP performance and impedance bandwidth. The diameter of the ground is rg. The distance between the ground and the split-ring microstrip line is h1. The detailed dimensions of the proposed antenna are shown in Table 1.

Three prototypes of CP antennas are created to exhibit the design process of the proposed antenna, as shown in Figure 2. Antenna 1 has four probes and no parasitic patch. Antenna 2 includes a circular parasitic patch. The proposed antenna contains six probes and a circular parasitic patch. The simulated S11 and boresight AR of the three CP antennas are shown in Figure 3. For antenna 1, by using the sequential feed structure with four probes, the RHCP wave is obtained. Antenna 1 achieves an impedance bandwidth of 19.28% for S11 ≤ −18 dB and a 3-dB AR bandwidth of 9.36%; both bandwidths are still narrow. By adding a parasitic patch, the impedance bandwidth of antenna 2 is increased to 21.41% for S11 ≤ −18 dB, while the 3-dB AR bandwidth is only 8.19%. However, the AR of antenna 2 in the band of 1.8-2.3 GHz is better than that of antenna 1.
For the proposed antenna, the number of probes is increased from 4 to 6, and the impedance and 3-dB AR bandwidths are improved to 41% (1.483-2.247 GHz) for S11 ≤ −18 dB and 29.6% (1.628-2.193 GHz), respectively. It can be observed that the number of probes has a significant effect on the impedance and 3-dB AR bandwidths.

Effects of the Number of Probes. We study the performance of the antenna when the number of probes is 4, 5, 6, and 7, with α = 90°, 72°, 60°, and 51.429°, respectively, as shown in Figure 4 (a sketch of this probe placement rule is given at the end of this section). For the traditional four probes with α = 90°, the 3-dB AR bandwidth is only 7.8% and the impedance bandwidth for S11 ≤ −18 dB is 21.2%. To extend the 3-dB AR bandwidth, we propose a new method: increasing the number of probes. When five probes with α = 72° are used, the impedance bandwidth for S11 ≤ −21 dB is 34.6% (1.547-2.194 GHz); however, the 3-dB AR bandwidth is only 18.3%. An impedance bandwidth of 41% for S11 ≤ −18 dB and a 3-dB AR bandwidth of 29.6% are achieved when six probes with α = 60° are introduced. As the number of probes increases to seven (α = 51.429°), the impedance matching becomes worse and the 3-dB AR bandwidth drops to 15.9%.

Effects of Patch 1 and Patch 2. Patch 1 is the main radiating patch, and patch 2 is a parasitic patch. The simulated results for S11 and AR obtained by varying D1 are shown in Figure 5. As D1 increases, the operating frequency band of the antenna shifts to a lower frequency. Figure 6 shows the effects of patch 2 on the S11 and boresight AR performance. It is shown that patch 2 has a significant effect on the AR, especially in the high-frequency band.

Effects of the Sequential Feed Structure. The parameters of the angle α, the length l1, and the radius rp are investigated to further illustrate the antenna design process. The effects of the angle α on the S11 and boresight AR performance are shown in Figure 7. It is shown that the maximum value of S11 in the band of 1.55-2.25 GHz increases and the low cutoff frequency shifts to a lower frequency as the angle α increases. Decreasing α makes the AR worse in the high-frequency band. Thus, α = 60° was chosen. As shown in Figure 8(a), the radius rp has a slight effect on the impedance matching. However, the AR in the band of 1.8-2.2 GHz becomes worse as rp decreases, as shown in Figure 8(b). The effects of the length l1 of the rectangular section on the split-ring microstrip line are shown in Figure 9. The length l1 has a slight effect on the AR. Compared with l1 = 8.5 mm, the impedance band is shifted down and the impedance matching in the band of 1.6-2.2 GHz becomes worse with l1 = 4.5 mm, while with l1 = 12.5 mm, the impedance matching in the band of 1.7-2.2 GHz becomes worse.

Effects of the Heights h2 and h3. The effects of varying the heights of patch 1 and patch 2 on the S11 and boresight AR performance are shown in Figures 10 and 11, respectively. As the height h2 decreases, the impedance matching and boresight AR in the operating band get worse. Increasing h2 makes the impedance matching and boresight AR better, but narrows the operating bandwidth. Only a slight effect of the height h3 on the impedance matching is observed. However, the height h3 has a significant effect on the boresight AR: increasing h3 greatly degrades the 3-dB AR bandwidth, and when h3 is reduced to 10 mm, the boresight AR deteriorates in the operating band.
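The probe placement rule referenced above is simple to tabulate. The per-probe phase progression of 360/n degrees shown here is the usual sequential-rotation convention and is our assumption, not a statement from the paper:

```python
# Probe spacing for an n-probe sequential feed: the angles studied above follow
# alpha = 360/n degrees, reproducing 90, 72, 60, and 51.429 degrees for n = 4..7.
for n in (4, 5, 6, 7):
    alpha = 360.0 / n
    phases = [round(i * alpha, 3) for i in range(n)]
    print(f"n = {n}: alpha = {alpha:.3f} deg, feed phases = {phases} deg")
```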
Figure 12 shows the surface current distribution on patch 1 and patch 2 at 1.91 GHz. It is obvious that the vector current rotates counterclockwise in a circular path, which indicates RHCP.

Characteristic Mode Analysis (CMA). In order to illustrate the operating principle of the antenna, CMA is carried out using FEKO. When four probes with α = 90° are used, the mode current distributions at 1.72 and 1.96 GHz are as shown in Figure 13. Figure 13(a) shows that mode 1 is orthogonal to modes 3 and 4 at 1.72 GHz; thus, the CP radiation characteristics are obtained by modes 1, 3, and 4. However, modes 1 and 5 cannot be excited, and there is no mode that is orthogonal to mode 3 at 1.96 GHz, as shown in Figure 13(b). This leads to the deterioration of the AR in the high-frequency band. Figure 14 shows the modal current distribution of the proposed antenna with six probes at 1.72 and 1.96 GHz. We can see that mode 2 is orthogonal to modes 3 and 5 at 1.72 GHz, and mode 3 is orthogonal to mode 5 at 1.96 GHz. Thus, when the six probes are used, the antenna can obtain a wide AR bandwidth. Furthermore, mode 5 has an intense current on patch 2 at 1.96 GHz. This indicates that patch 2 has a great influence on the CP radiation in the high-frequency band. When seven probes with α = 51.429° are used, the AR in the low-frequency band deteriorates. The mode current distribution at 1.72 GHz is shown in Figure 15. It is shown that mode 2 is orthogonal to modes 3 and 5. However, the current in modes 3 and 5 flows in opposite directions, which leads to an amplitude imbalance between the orthogonal modes, resulting in the deterioration of the AR.

Figure 16 shows the fabricated prototype. S11 was measured by using a Rohde & Schwarz ZVT 8 vector network analyzer. Radiation patterns, gain, and AR were measured in an anechoic chamber. The simulated and measured S11 of the proposed antenna are shown in Figure 17(a). The measured impedance bandwidth for S11 ≤ −18 dB is 40.3%, from 1.486 to 2.236 GHz. Figure 17(b) shows the simulated and measured boresight AR. The measured bandwidth for AR ≤ 3 dB is 31.6%, covering 1.6 to 2.2 GHz. Figure 17(c) shows the boresight gain, and the radiation patterns are given in Figure 18. The 3-dB beamwidth is more than 45° for both planes.

Table 2 summarizes some key indicators of the proposed antenna and other wideband CP antennas. The antenna proposed in [14] is fed by a single feed point, and the antenna proposed in [10] is fed equivalently by two feed points. To improve the AR of the antenna in [14], a reactive impedance surface is used; to improve the AR of the antenna in [10], an L-shaped parasitic patch and four parasitic patches are used. In [11], a wide 3-dB AR bandwidth of 31% is achieved by generating an equivalent four-point feed. It is observed that the more feed points there are, the easier it is to obtain a wider 3-dB AR bandwidth (27.5% in [14], 28.1% in [10], and 31% in [11]) and a wider impedance bandwidth (44.5% in [14], 38% in [10], and 60.5% in [11]). In this paper, an equivalent six-point feed and a parasitic patch are used to extend the impedance and 3-dB AR bandwidths. Compared with the CP antennas in [5, 9-16, 19, 20], the proposed antenna has a wider 3-dB AR bandwidth and better impedance matching. The peak gain of the proposed antenna is 9.76 dBic, which is a good result. Although the antenna in [20] has a greater peak gain, its size is larger than that of the proposed antenna. The size of the proposed antenna is also competitive.
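The quoted fractional bandwidths follow from the standard definition; a two-line check:

```python
# Checking the quoted fractional bandwidths, BW% = 200*(f_hi - f_lo)/(f_hi + f_lo).
def frac_bw(f_lo, f_hi):
    return 200.0 * (f_hi - f_lo) / (f_hi + f_lo)

print(f"impedance  : {frac_bw(1.486, 2.236):.1f}%")   # ~40.3%
print(f"axial ratio: {frac_bw(1.6, 2.2):.1f}%")       # ~31.6%
```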
Conclusion

In this paper, a wideband CP antenna is studied, and its advantages in impedance matching and CP radiation performance are discussed in detail. By introducing six probes, a parasitic patch, and a split-ring microstrip line containing a rectangular section, an impedance bandwidth of 40.3% for S11 ≤ −18 dB and a 3-dB AR bandwidth of 31.6% are achieved, which represents a great improvement in the operating bandwidth. A peak gain of 9.76 dBic and unidirectional radiation patterns are also achieved.

Data Availability

The data used to support the findings of this study are included within the article.

Conflicts of Interest

The authors declare that they have no conflicts of interest.
Actinomycotic Osteomyelitis of the Maxilla in a Patient on Phenytoin

Actinomycosis is caused by Actinomyces species and is relatively rare in humans. Because of the special collateral blood flow, osteomyelitis is less common in the maxilla than the mandible. Although there are a few case reports of jaw osteomyelitis, actinomycotic osteomyelitis associated with phenytoin therapy has not been reported before. The data show that antiepileptic drugs induce suppression of the immune system. This report presents a rare case of a 58-year-old man on phenytoin with actinomycotic osteomyelitis, and reviews the relevant literature.

INTRODUCTION

Osteomyelitis refers to inflammation of osseous tissue, which starts as an infection in the calcified part of the bone in the medullary area and quickly spreads to the Haversian system and periosteum [1-4]. It is often caused by Staphylococcus aureus, Staphylococcus epidermidis, and Escherichia coli [1]. The pathogens cause pus formation and edema in the medullary cavity, followed by an increase in the intramedullary pressure, leading to vascular collapse through stasis, thrombosis, or ischemia. As a result, bone necrosis and ultimately sequestrum formation occur as the classic features of osteomyelitis [1]. Osteomyelitis is less prevalent in the maxilla due to its collateral blood flow system and subsequently higher oxygen supply [2,3]. Furthermore, thin cortical bone, bone marrow spaces, and the porosities in the maxilla make it less prone to infection [3-5]. Osteomyelitis is more common in young adults, and in women than men (2:1 ratio) [6]. Facial osteomyelitis is rare [7]. Osteomyelitis was a common disease with a high mortality rate before the advent of antibiotics [4,5]. The causes of infection in osteomyelitis can be traumatic, rhinogenic, or odontogenic. Contributing factors include diseases that weaken the immune system, such as diabetes mellitus, HIV, and malnutrition, or the use of chemotherapeutic agents and other immunosuppressive drugs [5].

Actinomycosis is an endogenous saprophytic infection that is relatively uncommon in humans [5] and is very rare in the oral mucosa [7]. Actinomyces found in nature are unable to cause disease in humans [6]. The prevalence of jaw actinomycosis infection is low in the mandible (6.53%), buccal mucosa (4.16%), chin (3.13%), maxilla (7.5%), and temporomandibular joint (0.3%) [3]. Actinomycotic osteomyelitis is the spread of Actinomyces to the alveolar bone and does not have a clear pathological mechanism [8]. To induce disease, Actinomyces must first penetrate into the tissue, and they then need a suitable environment for growth and proliferation. Dental lesions, loss of tissue viability due to trauma, and necrotic bone can be considered favorable environments for this pathogen [6]. This report presents a rare case of a 58-year-old man on phenytoin with actinomycotic osteomyelitis, and reviews the relevant literature.

CASE REPORT

A 58-year-old man complaining of sudden loss of his upper left premolar presented to our department. He had no apparent problems, but upon waking up he found the tooth in his mouth with no pain or bleeding. Clinical examination indicated other missing teeth near the lost tooth. The upper left second premolar had grade 3 mobility, and the upper left canine had grade 2 mobility.
The socket of the lost tooth was completely empty, and there was a mass in the form of an erythematous nodule of medium consistency on the buccal edge of the socket, which was not painful or hemorrhagic. His other teeth appeared to be sound, without any mobility. The patient underwent periapical radiography of the maxillary left teeth from the lateral incisor to the second premolar. Unilateral periodontal ligament widening was observed distal to the maxillary second premolar (Fig. 1). Considering the sudden tooth loss and unilateral periodontal ligament widening, malignancy was suspected. We performed a biopsy of the erythematous nodule, which showed chronic diffuse inflammation. The patient was discharged afterwards but was followed and recalled because of the suspicious history. Three weeks later, the patient mentioned in a telephone conversation that a large mass had emerged in the area of the missing tooth. The patient was scheduled for an emergency appointment. On clinical examination, a buccopalatally expanded firm mass, which had filled the extraction socket, was found. There was no bleeding or pain on palpation. Cone-beam computed tomography was requested, which revealed a radiolucent lesion with a moth-eaten border in the left side of the maxilla, extending from the distal of the maxillary left canine to the edentulous molar area and limited to the alveolar process. The sinus floor and the basal bone appeared to be intact. There was unilateral widening of the periodontal ligament with asymmetric destruction of the lamina dura and no root resorption. The lesion had destroyed the buccal plate (Fig. 2). The differential diagnosis included 1) intraosseous squamous cell carcinoma and 2) osteosarcoma (lytic phase). A second incisional biopsy was obtained from the mass inside the socket that was facing outwards. Pathologic analysis of hematoxylin/eosin-stained sections (Fig. 3) revealed fragments of necrotic bone trabeculae enclosing large colonies of actinomycotic organisms. The bone trabeculae showed loss of osteocytes from their lacunae and prominent reversal lines. The intertrabecular spaces were filled with necrotic debris and an acute inflammatory infiltrate consisting of polymorphonuclear leukocytes and large microbial colonies. In the overlying oral mucosa, dense lymphoplasmacytic infiltration, marked exocytosis, and spongiosis were notable. The buccal cortical plate at the site of the maxillary left second premolar was isolated and sent for pathological assessment along with the second biopsy. The report was very surprising because osteomyelitis requires a systemic or localized condition as a risk factor, it rarely occurs in the maxilla, and actinomycotic osteomyelitis is even rarer. Therefore, the patient received a complete work-up to detect local and systemic factors. He had been taking phenytoin for 14 years due to epilepsy, which had shown significant improvement during the past 8 years with no recurrence of seizures. However, a full examination was requested for further reassurance despite the fact that he had no other relevant history. Complete blood count and electrolytes were normal. Only HDL was slightly above the normal range of 30-60 mg/dL (72 mg/dL). Full debridement was performed to reach sound bone tissue, the area was washed and sutured, and amoxiclav was prescribed every 12 h for 2 weeks. The patient has been followed for 10 months with no problems so far. Since his clinical symptoms were not consistent with the disease, osteomyelitis was not initially considered the most likely diagnosis, but the patient was properly tested and followed, eventually leading to the correct diagnosis and management. While receiving treatment, the patient was referred to his physician to change his medication because otherwise he might not respond well to treatment. The patient is still routinely monitored and is in good general health.

DISCUSSION

Osteomyelitis is relatively uncommon in the jaws [9]. Actinomyces israelii is the main cause of actinomycosis of the neck and face. This pathogen is normally found in decayed teeth, dental plaque, gingival grooves, and tonsillar crypts [6,7,14]. Therefore, poor oral hygiene, unhealed sockets, and surgical manipulation can facilitate its penetration into deeper tissues [3]. In general, the infection can be primary or secondary to nonspecific local osteomyelitis [14]. Actinomyces lack the hyaluronidase enzyme that is responsible for the degradation of tissues [14]. Hence, they have low virulence and invasiveness, but other bacteria may act as copathogens and facilitate the development of infection by this microorganism through their toxins and enzymes. The microbial flora plays a synergistic role, meaning that it creates a specific ecosystem with the potential for oxygen reduction, facilitating the proliferation of anaerobes.
Following the development of an anaerobic environment, the oxygenated environment with a high vascular supply is destroyed and replaced by a very irregular granular tissue (sulfur granules) that in turn promotes further growth of anaerobic microorganisms. In fact, the infection manifests as a granulomatous inflammation with a purulent necrotic core characterized by the accumulation of bacterial filaments surrounded by neutrophils. Instead of complete lytic destruction of bone, the granulomatous inflammation results in the formation of bone spicules, leading to the development of sclerosis that resembles bone tumors [3,17]. Due to its diverse clinical picture in the head and neck, actinomycosis can resemble a benign infection or a metastatic tumor [7]. Actinomycosis may look like a fungal infection; thus, it is usually discussed in mycology, while Actinomyces are in fact filamentous, prokaryotic, Gram-positive bacteria [14,17]. Osteomyelitis is rare in the maxilla due to its excellent blood supply and thin bone structure. Infectious maxillary sinusitis [14], an unhealed extraction socket, hyperglycemia [4], and immunosuppression caused by chemotherapy [16] can be predisposing factors for maxillary osteomyelitis. Non-healing wounds after the long-term habitual use of chewing tobacco should be distinguished from actinomycotic osteomyelitis [11]. Our patient had none of the above-mentioned symptoms or disorders. Actinomycotic osteomyelitis has manifestations such as paresthesia, exposure of bone, pathologic fracture, and persistent infection after root canal treatment and tooth extraction, with no history of trauma [14]. In our case, the patient had no symptoms such as pus, a fistula, swelling, or sensory disturbances. In addition, there was no history of trauma to the jaws or to the head and face. The diagnosis of actinomycotic osteomyelitis is often based on clinical, radiographic, and microscopic findings [7]. Clinically, the disease can be painless or painful and can mimic different benign or malignant conditions. Culturing this microorganism is technically difficult because it requires an anaerobic environment [7,14]. Use of polymerase chain reaction is expensive [3], and serological methods are not reliable, especially for those with immunodeficiency [16]. Biopsy and histopathological examination are strongly recommended for diagnosis [3,14,16], and these were used for the final diagnosis in our patient. Imaging techniques are also available for advanced assessment of osteomyelitis. Computed tomography is the best modality for assessment of the calcified structures, especially the cortical plate. A 99mTc methylene diphosphonate bone scan is another modality that can help detect acute involvement, but it cannot be used for definitive diagnosis due to its poor resolution. Positron emission tomography was recently found suitable for differentiating between normal and damaged bone [4]. Our patient was first examined based on his panoramic radiograph, and then cone-beam computed tomography was requested for further evaluation.
The treatment of osteomyelitis may vary from non-invasive approaches to more aggressive radical treatments. A combination of high-dose antibiotic therapy and surgery is often effective. Surgical treatment includes removal of sequestra and mobile teeth, debridement of the area to achieve healthy bleeding tissue, and decortication, resection, and reconstruction [3,5]. Since, under in vitro conditions, Eikenella and Actinomyces within the sulfur granules escape direct contact with antibiotics and leukocytes, surgery along with antibiotic therapy is considered the basis of treatment [3,17]. In cases with a poor response to antibiotic therapy and surgery, treatment may continue with amphotericin B [16]. Our patient also underwent debridement surgery, received antibiotics, and was followed for 10 months with no problems.

According to the literature, antiepileptic drugs like phenytoin, valproic acid, and carbamazepine affect cytokine levels. Their intake increases the levels of IL-1β, IL-2, IL-5, IL-6, and TNF-α [18]. They also decrease the number of suppressor T cells [19] and increase the ratio of T-helper 1 to T-helper 2 cells [20]. Phenytoin intake can decrease lymphoid cells [21]. Various studies have found that phenytoin suppresses the immune system [18,22], and it may even cause lymphoma in the presence of other factors [21]. Therefore, the occurrence of actinomycotic osteomyelitis in our patient might be due to many years of phenytoin intake. Our literature review from 1960-2020 revealed that this case was rare not only because of its maxillary location, but also because of the underlying disease (epilepsy). To the best of our knowledge, this is the first case of actinomycotic osteomyelitis that may be related to long-term use of phenytoin. It is noteworthy that the present report is the only one reporting actinomycotic osteomyelitis in a male patient.

CONCLUSION

Actinomycotic osteomyelitis is a rare infection, especially in the maxilla. Chemotherapy, diabetes mellitus, and other immunosuppressive conditions can increase its prevalence. Antiepileptic drugs like phenytoin affect cytokine levels and decrease the number of suppressor T cells. To the best of our knowledge, our case was the first male patient with actinomycotic osteomyelitis of the maxilla, for whom long-term use of phenytoin was hypothesized to be the predisposing factor.

Fig. 1: Unilateral periodontal ligament widening in the cropped view of the OPG. Loss of teeth around the fallen tooth and a mass in the form of an erythematous nodule.

Fig. 2: Perforation of the buccal and lingual cortical plates in the cropped view of the cone-beam computed tomography scan.

Table 1: Overview of the underlying diseases, age, and gender of patients with lesions in the maxilla.
Loss of exosomal LncRNA HCG15 prevents acute myocardial ischemic injury through the NF-κB/p65 and p38 pathways

Exosomes are nanosized bilayer membrane vesicles that may mediate intercellular communication by transporting bioactive molecules, including noncoding RNAs, mRNAs, and proteins. Research has shown that exosomes play an important role in acute myocardial infarction (AMI), but the function and regulation of exosomal long noncoding RNAs (lncRNAs) in AMI are unclear. Thus, RNA sequencing (RNA-Seq) was conducted to investigate the exosomal lncRNA transcriptome from MI patients, and 65 differentially expressed lncRNAs were identified between the MI and control groups. HCG15, one of the differentially expressed lncRNAs, was verified by qRT-PCR to have the highest correlation with cTnT, and it also contributed to the diagnosis of AMI by receiver operating characteristic (ROC) analysis. Upregulation of HCG15 expression facilitated cardiomyocyte apoptosis and inflammatory cytokine production and inhibited cell proliferation. We also confirmed that HCG15 was mainly wrapped in exosomes from AC16 cardiomyocytes under hypoxia, which contributed to cardiomyocyte apoptosis, the release of inflammatory factors, and inhibition of cell proliferation via activation of the NF-κB/p65 and p38 pathways, whereas suppressing the expression of HCG15 exerted the opposite effects. In addition, overexpression of HCG15 aggravated cardiac IR injury in C57BL/6J mice. This study not only helps elucidate exosomal lncRNA function in AMI pathogenesis but also contributes to the development of novel therapeutic strategies.

INTRODUCTION

Acute myocardial infarction (AMI), which is mainly caused by coronary artery occlusion and myocardial ischemia, remains a serious cardiovascular disease with high morbidity and mortality [1,2]. AMI can lead to congestive heart failure and malignant arrhythmia, which seriously threaten human health [3]. The sensitive and early diagnosis of AMI has become a research focus, as it will provide benefits for the treatment of AMI and improve the survival and cure rates of patients [4]. In the clinic, cardiac troponins are the most frequently used markers in the diagnosis of AMI. However, several conditions other than AMI, such as myocarditis and malignant tumors, may lead to elevated levels of cardiac troponins. Thus, exploration of specific markers for the early clinical diagnosis of AMI and clarification of the potential mechanisms are important.

Long noncoding RNAs (lncRNAs) are a novel group of noncoding RNAs with >200 nucleotides. Although lncRNAs cannot encode proteins, they have been proven to be involved in cell growth, differentiation, proliferation, and apoptosis [3]. LncRNAs regulate gene expression by cooperating with transcription factors (TFs) or remodeling complexes and can maintain protein stability [5]. Recently, numerous studies have reported that circulating lncRNAs, such as LIPCAR [6], UCA1 [4], NRON, and MHRT [7], play an important role in the pathogenesis of heart failure. LncRNA Meg3 could promote myocardial cell apoptosis, while knocking down Meg3 significantly improved cardiac function in mice with MI [8]. The microarray dataset GSE66360 showed that lncRNA SLC8A1-AS1 protected against MI by activating the cGMP-PKG pathway in mice with MI [9]. Enhanced expression of lncRNA ECRAR could substantially stimulate myocardial regeneration and induce recovery of cardiac function after MI, suggesting that ECRAR may be an effective therapeutic target for heart failure [10].
Therefore, systematic exploration of the differentially expressed lncRNAs in AMI, screening of potential functional lncRNAs, and investigation of their roles and mechanisms in AMI are urgently needed.

Exosomes are nanosized (30-100 nm in diameter) membrane-wrapped vesicles that may be involved in intercellular communication by transporting proteins and RNA molecules [11]. Emerging evidence has shown that exosomes from stem cells or cardiac progenitor cells facilitate cardiac repair and ameliorate AMI [12-14]. In particular, exosome-mediated miRNA and mRNA transfer between cardiomyocytes affects cardiac physiology and pathological state [2,15-17]. However, the function and regulation of exosomal lncRNAs in AMI are poorly understood. In this study, RNA sequencing (RNA-Seq) was performed to investigate the exosomal lncRNA transcriptome from AMI patients, and an anoxic model of AC16 cardiomyocytes was established. We confirmed that lncRNA HLA complex group 15 (HCG15) expression was upregulated in exosomes from AMI patients and hypoxia-induced cells, which resulted in cardiomyocyte apoptosis, the release of inflammatory cytokines, and inhibition of cell proliferation. These findings may contribute to the development of novel therapeutic strategies.

MATERIALS AND METHODS

Patients and samples

Blood samples of AMI patients were collected within 12 h after the onset of chest pain. The control samples were obtained on the morning after admission in a fasting state. Serum was separated and stored at −80°C within half an hour.

Isolation and identification of exosomes

Exosomes were isolated from serum using ExoQuick precipitation solution (SBI, USA) according to the manufacturer's protocol. Briefly, 1 ml of serum was filtered through a 0.22-μm pore filter. Then, the cell-free serum was mixed with ExoQuick precipitation solution. After the mixture was incubated for 30 min at 4°C and centrifuged at 1500 × g for 30 min at 4°C, the obtained pellets were resuspended in PBS and stored at −80°C until use. Conditioned medium (10 ml) was collected from normoxic and hypoxic AC16 cells cultured in DMEM (Gibco, USA) with 10% exosome-depleted fetal bovine serum (FBS) (Gibco, USA) for 24 h and centrifuged at 3000 × g for 15 min to remove cells and debris. After being filtered through a 0.2-μm pore filter, the supernatant was further concentrated by centrifugation at 3000 × g for 20 min at 4°C in an ultrafiltration tube (Amicon Ultra-15, Millipore, USA). The concentrated samples were mixed with ExoQuick-TC precipitation solution by vortexing, incubated overnight, and finally centrifuged at 1500 × g for 30 min at 4°C to isolate the pellets. The pellets were resuspended in PBS and stored at −80°C until use.

RNA-Seq analysis

Total RNA of exosomes (20 μg) from six AMI patients and six healthy controls (clinical characteristics shown in Supplementary Table 1) was extracted using TRIzol reagent (Life Technologies, USA) following the manufacturer's instructions. The preparation of whole-transcriptome libraries and deep sequencing were performed by Novogene Bioinformatics Technology Corporation (Shanghai, China). Ribosomal RNA was eliminated, and strand-specific sequencing libraries were constructed according to the protocol. RNA-Seq was conducted on an Illumina HiSeq 2000 sequencer (Illumina, San Diego, USA), and 150-bp reads were produced according to Illumina's instructions. After low-quality data were filtered, the clean reads were aligned to the human reference genome GRCh37/hg19 with the HISAT2 program.
Then, transcripts were spliced and assembled using StringTie software. Information about lncRNAs was annotated based on GENCODE [18]. Fragments per kilobase of transcript per million mapped reads (FPKM) values, calculated by StringTie, were used to evaluate the expression abundance of each transcriptional region. Differentially expressed RNAs with a p value < 0.05 and fold change (FC) ≥ 2 were identified with DESeq2 software.

Cell culture, hypoxic treatment, and transfection

The human cardiomyocyte cell line AC16 was obtained from the BeNa Culture Collection (Beijing, China) and cultured in DMEM (Gibco, USA) supplemented with 10% FBS (Gibco, USA). Normoxic cells were cultured in a humidified atmosphere of 37°C, 95% air, and 5% CO2. To simulate the state of ischemic injury in vitro, cells were pre-exposed to 95% N2 and 5% CO2 for 2 h and placed in a hypoxic incubator (Sanyo, Japan) with 1% O2 for different durations. HCG15 overexpression plasmids were constructed by cloning the sequence of HCG15 (synthesized by General Biosystems, Chuzhou, China) into the pcDNA3.1 vector. SiRNAs targeting lncRNA HCG15 were transfected into AC16 cells, and a nontargeting siRNA (scrambled) was used as the negative control. Cells were seeded in six-well plates at 1 × 10^6 cells per well. HCG15 overexpression plasmids or siRNAs (siRNA-1, siRNA-2, and siRNA-3) were transfected into AC16 cells with Lipofectamine 2000 (Invitrogen, USA) in Opti-Minimal Essential Medium (Gibco, USA). Experiments were conducted 48 h after transfection to assess the silencing effect. The sequences of siRNA-1, siRNA-2, siRNA-3, and the scramble were as follows:

Labeling and uptake of exosomes

Exosomes were extracted from the culture medium of AC16 cells under hypoxia for 8 h and labeled with PKH67 dye (Sigma-Aldrich, USA) at room temperature for 5 min. Then, exosome-depleted bovine serum albumin was added to the labeling reactions. Subsequently, the exosomes were washed three times in an ultrafiltration tube (Amicon Ultra-15, Millipore, USA) to remove unbound dye. Finally, AC16 cells (2 × 10^5) were incubated with the labeled exosomes. Normoxic AC16 cells that took up exosomes from hypoxic AC16 cells were monitored using confocal microscopy (LSM880, Zeiss, Germany).

Cell viability analysis

AC16 cell viability was detected by MTT (3-[4,5-dimethylthiazol-2-yl]-2,5-diphenyltetrazolium bromide; Sigma) assays. First, 5 × 10^3 cells/well were seeded into 96-well plates. After 24 h of culture, 20 µl of MTT solution was added to each well and maintained at 37°C for 4 h. After the medium was removed, 150 µl of dimethyl sulfoxide (DMSO) was added to each well and incubated at 37°C for 10 min. Subsequently, the absorbance was measured at 570 nm using a Multiskan MK microplate reader (Thermo Scientific, USA).

Detection of cell apoptosis

The terminal deoxynucleotidyl transferase-mediated dUTP nick end labeling (TUNEL) method (Roche, Mannheim, Germany) was applied to detect apoptosis. Briefly, cells in each treated group were fixed with 4% paraformaldehyde at room temperature. Subsequently, the cells were washed twice with PBS and incubated with buffer containing 0.1% Triton X-100 on ice for 2 min. Next, the cells were washed and blocked with 3% BSA. Finally, the slides were incubated with 50 μl of freshly prepared TUNEL reaction mixture for 1 h at 37°C in a humidified chamber. Fluorescent images were acquired with a fluorescence microscope (AE31, Motic, Xiamen, China).
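As a side note on the MTT readout described above, the absorbance values are usually normalized to the untreated control to express relative viability. The helper below is a generic sketch of that arithmetic; the blank correction and all OD values are assumptions for illustration, not details from the protocol.

```python
def viability_percent(od_treated, od_control, od_blank=0.0):
    """Relative viability from MTT absorbance at 570 nm, normalized to the
    untreated control. Blank (medium-only) correction is an added
    assumption, not a step stated in the text."""
    return 100.0 * (od_treated - od_blank) / (od_control - od_blank)

# Hypothetical OD570 readings:
print(viability_percent(od_treated=0.62, od_control=1.10, od_blank=0.08))  # ~52.9
```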
Flow cytometry was also used to detect cell apoptosis. After the specific treatment, the cells were digested with 0.25% trypsin and collected by centrifugation at 2000 rpm for 5 min. An Annexin V-FITC/PI Apoptosis Detection Kit (Bioleng, CA, USA) was used for apoptosis detection. Cells were suspended in 500 μl of binding buffer, and then 5 μl of annexin V-FITC was added and fully mixed. Propidium iodide (5 μl) was added and incubated in the dark at room temperature for 10 min. Finally, the cells were analyzed within 1 h by flow cytometry (CytoFlex, Beckman, CA, USA).

Enzyme-linked immunosorbent assay

The inflammatory cytokines IL-6, IL-1β, and TNF-α released into the supernatant were measured with enzyme-linked immunosorbent assay (ELISA) kits (Immunoway, TX, USA). Optical density (OD) values were measured at 450 nm with a Multiskan MK microplate reader (Thermo Scientific, USA). The OD values for each sample were compared with standard curves to quantify the amount of protein in the original samples.

Detection of the level of cTnT

cTnT was detected with a Cobas E601 chemiluminescence analyzer (Roche, Basel, Switzerland) and supporting reagents. The detection range of the reagents was 0.003-10 ng/ml.

Mouse model of myocardial ischemia-reperfusion injury

The protocol was performed following the guidelines approved by the Institutional Animal Care and Use Committee of Southern Medical University. All animal care and experimental protocols were in compliance with the National Institutes of Health guidelines for the care and use of laboratory animals. Male C57BL/6J mice (8-10 weeks) were used in this study. Mice with normal lipid metabolism were provided by the Experimental Animal Center of Southern Medical University. All mice used in the study were fed normal mouse food. The mice were kept under pathogen-free conditions and maintained at a standard temperature and humidity. Mice were anesthetized with an intraperitoneal injection of 100 mg/kg ketamine (Ketathesia) and 10 mg/kg xylazine and intubated for assisted respiration using a small animal ventilator (Harvard Apparatus, Natick, MA). The neck and chest of the mice were cleaned and disinfected with 75% ethanol. The trachea was intubated through the mouth. After successful intubation, the cannula was fixed, and the ventilator was quickly connected to assist breathing. After left lateral thoracotomy, the pericardium was dissected, and an 8-0 surgical suture was carefully passed underneath the left anterior descending coronary artery (LAD) at a position 1-2.5 mm from the tip of the left auricle. The suture was cut at the needle site, and both ends were threaded through a 1-mm section of PE-10 tubing, forming a loose snare around the LAD. After 45 min of ischemia, reperfusion was produced by loosening the silk thread. Cardiac ischemia was confirmed by a pale area below the suture that gradually became cyanotic, or by ST-T elevation on the ECG, while reperfusion was characterized by the rapid disappearance of cyanosis followed by vascular blush. The sham group underwent the same surgical procedures but with no coronary artery ligation.
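Returning briefly to the cytokine ELISAs above: sample concentrations are read off the standard curve, and with an approximately monotonic OD-concentration relationship between adjacent standards, simple interpolation suffices. All numbers below are invented illustrative values, not kit specifications.

```python
import numpy as np

# Standards span the working range; the OD values must be increasing
# for np.interp. Both arrays are made-up illustrative values.
std_conc = np.array([0.0, 12.5, 25.0, 50.0, 100.0, 200.0])   # pg/ml
std_od = np.array([0.05, 0.12, 0.21, 0.40, 0.78, 1.52])      # OD450

sample_od = np.array([0.33, 0.95])
sample_conc = np.interp(sample_od, std_od, std_conc)
print(sample_conc)  # interpolated cytokine concentrations in pg/ml
```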
Quantification of infarct size

At 24 h post-reperfusion, mice were reanesthetized and reintubated, and the LAD was reoccluded by ligating the suture in the same position as for the original infarction. The animals were then killed, and 1 ml of 1% Evans Blue dye (Sigma) was infused intravenously (i.v.) to delineate the area at risk (AAR, corresponding to the myocardium lacking blood flow). The left ventricle (LV) was isolated and cut into transverse slices (5-7 1-mm slices per LV), and both sides were imaged. For delineation of the infarcted (necrotic) myocardium, the slices were incubated in triphenyltetrazolium chloride (TTC, Sigma) at 37°C for 15 min. The slices were then rephotographed and weighed, and the regions negative for Evans Blue staining (AAR) and TTC (infarcted myocardium) were quantified with ImageJ. Percentage values for the AAR and infarcted myocardium were independently mass-corrected for each slice.

Echocardiography

Four weeks after surgery, transthoracic echocardiography was performed to evaluate cardiac function, as previously described [19]. Echocardiographic analysis using a Vevo2100 digital imaging system (Visual Sonics) was performed under 1% isoflurane, with midventricular M- and B-mode measurements acquired in the parasternal short-axis view at the level of the papillary muscles. Once the mice were acclimated to the procedures, images were stored in digital format on a magneto-optical disk for review and analysis. M-mode tracings were recorded through the anterior and posterior LV walls at the papillary muscle level in the same long-axis views to measure the left ventricular end-diastolic diameter (LVEDD) and LV end-systolic diameter (LVESD), as well as the interventricular septum (IVS) and posterior wall (PW) dimensions. The left ventricular ejection fraction (EF) was calculated by the cubic method, LVEF (%) = ((LVIDd)^3 − (LVIDs)^3)/(LVIDd)^3 × 100, and the left ventricular fractional shortening was calculated as LVFS (%) = (LVIDd − LVIDs)/LVIDd × 100. The data were averaged over five cardiac cycles.

Masson staining

The mice were killed at 4 weeks after ischemia-reperfusion (IR). The hearts were collected and fixed in 4% formaldehyde solution for 24-48 h. Then, the hearts were dehydrated and paraffin-embedded. Next, 5-µm-thick slices were cut for Masson's trichrome staining to visualize fibrosis. Images were captured by microscopy and analyzed with ImageJ.

Overexpression of lncRNA HCG15

For in vivo infection, pAAV9-CMV-ZsGreen-HCG15 (Oe-HCG15) or control (scramble) virus particles (1 × 10^11 viral genomes/ml) were administered by direct injection with a 30-gauge syringe needle into the free wall, apex, and sidewall of the LV (three sites, 8 μl/site) in 8-week-old mice. Four weeks later, the mice were subjected to IR or sham surgery.

Intervention with the NF-κB antagonist (SN50) or p38 MAPK antagonist (ralimetinib (LY2228820) dimesylate) in IR mice

Male C57BL/6J mice were injected via the tail vein with the Oe-HCG15 virus. At 25 days after infection, the mice were treated with daily intraperitoneal injections of SN50 (10 μg/kg per day), beginning 3 days before IR model establishment. The treatment lasted for 4 weeks. To block the p38 MAPK signaling pathway, Oe-HCG15 mice were given ralimetinib (LY2228820) dimesylate (20 mg/kg) dissolved in saline by oral administration, beginning 3 days before model establishment and lasting for 4 weeks.

Statistical analysis

SPSS Statistics 20.0 was used to analyze the data. The data are presented as the mean ± SD. Each assay was performed at least three times. Comparisons between groups were performed using Student's t test for two groups of data and one-way ANOVA followed by Tukey's multiple comparison test for multiple groups of data. p < 0.05 was considered statistically significant.
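The echocardiographic indices above reduce to the two formulas quoted in the text; a small helper keeps the cubic-method arithmetic explicit. The example diameters are hypothetical, not study data.

```python
def lv_function(lvidd, lvids):
    """LVEF and LVFS (%) from M-mode LV internal diameters, implementing the
    two formulas quoted in the echocardiography paragraph above."""
    lvef = (lvidd**3 - lvids**3) / lvidd**3 * 100.0
    lvfs = (lvidd - lvids) / lvidd * 100.0
    return lvef, lvfs

# Hypothetical diameters in mm:
print(lv_function(lvidd=4.0, lvids=2.4))  # (78.4, 40.0)
```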
RESULTS

Identification of differentially expressed exosomal lncRNAs from the serum of AMI patients

The expression profiles of exosomal lncRNAs from six AMI patients and six matched healthy controls were determined by RNA-Seq; their clinical characteristics are shown in Supplementary Table 1. First, exosomes were extracted and identified by TEM and dynamic light scattering; most of the exosomes were ~100 nm in diameter (Fig. S1A, B). Second, total RNA isolated from the exosomes was used for the preparation and sequencing of the cDNA library. As shown by the volcano plot (Fig. S1C), a total of 65 differentially expressed lncRNAs were identified between the AMI patients and the healthy controls based on the following criteria: log2(fold change) > 1 and padj < 0.05. Twenty-nine lncRNAs with upregulated expression and 36 with downregulated expression were identified in the AMI group, and the top ten lncRNAs with upregulated and downregulated expression are shown in Fig. 1A. To validate the RNA-Seq results, we assessed the top five lncRNAs with upregulated expression (STX18-AS1, HCG15, LINC00265, NPH3-AS1, and ENTPD1-AS1; Fig. 1A) by qRT-PCR; the clinical characteristics of this cohort are shown in Supplementary Table 3. There were obvious differences in sex, blood pressure, creatinine, LDL, and cTnT expression between the AMI patients and the healthy controls. All five lncRNAs showed significantly upregulated expression in the AMI group (Fig. 1B), which was consistent with the RNA-Seq data. Subsequently, we examined the correlation between the lncRNAs and cTnT. The results showed a positive correlation between the expression of these five lncRNAs and the cTnT concentration, and the expression of HCG15 had the highest positive correlation with cTnT (Fig. 1C). In addition, we performed receiver operating characteristic (ROC) analysis to assess the clinical diagnostic value of the five lncRNAs in MI. The area under the ROC curve (AUC) values of the five lncRNAs ranged from 0.849 to 0.952 (Fig. 1D), and the AUC value of lncRNA HCG15 was the highest. Thus, we speculated that HCG15 might play an important role in MI, and we focused on HCG15 in further studies.

LncRNA HCG15 induced cardiomyocyte apoptosis and the production of inflammatory cytokines

To investigate the biological role of HCG15 in cardiomyocytes, we transfected an HCG15 overexpression plasmid or control vector into AC16 cells. qRT-PCR confirmed that HCG15 expression was significantly elevated in the AC16 cells transfected with the HCG15-overexpressing plasmid (Fig. 2A). Proliferation was suppressed after HCG15 overexpression, as shown by MTT (5 mg/ml, Sigma) assays (Fig. 2B). TUNEL staining (Fig. 2C) and flow cytometry (Fig. 2D) showed that cell apoptosis was increased after HCG15 overexpression. In addition, the levels of inflammatory cytokines such as IL-6, IL-1β, and TNF-α were strongly increased when AC16 cells were transfected with the HCG15 overexpression vector (Fig. 2E).
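The lncRNA screening earlier in this section (the log2 fold-change/padj filter and the per-candidate ROC analysis) reduces to a few lines of standard tooling. The sketch below is illustrative only: the file name, column names, and expression values are hypothetical stand-ins, not the study's data.

```python
import numpy as np
import pandas as pd
from sklearn.metrics import roc_auc_score

# Step 1: the differential-expression filter (|log2 fold change| > 1,
# padj < 0.05), assuming DESeq2 results exported to a CSV with these
# (hypothetical) column names.
results = pd.read_csv("deseq2_exosomal_lncRNA.csv")
de = results[(results["log2FoldChange"].abs() > 1) & (results["padj"] < 0.05)]
print((de["log2FoldChange"] > 0).sum(), "up,", (de["log2FoldChange"] < 0).sum(), "down")

# Step 2: diagnostic value of a single candidate via ROC/AUC
# (1 = AMI, 0 = healthy control; expression values are made up).
labels = np.array([1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0])
expr = np.array([5.2, 4.8, 6.1, 3.9, 5.5, 4.4, 1.2, 2.1, 1.8, 4.1, 1.5, 3.1])
print("AUC =", round(roc_auc_score(labels, expr), 3))
```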
HCG15 was transferred to cardiomyocytes via exosomes after hypoxia

To explore how HCG15 functions and to simulate hypoxic conditions in vitro, we analyzed hypoxic AC16 cells. As shown in Fig. 3A, the level of HCG15 was significantly upregulated after hypoxia and peaked at 8 h. Exosomes were extracted from the culture media of AC16 cells under normal and hypoxic conditions and then identified by TEM (Fig. 3B). The exosomal markers CD63, CD9, and TSG101, but not calnexin, were detected by western blotting (Figs. 3C and S1D). Moreover, HCG15 was enriched in exosomes, as its level was over three times higher than that in the producer cells (Fig. 3D). To determine whether exosomes (exosomes derived from hypoxic cells were named Exo-H, and exosomes derived from normoxic cells were named Exo-N) were endocytosed by normoxic cells, we incubated AC16 cells with PKH67-labeled Exo-H for different durations. After 4 h, 12 h, and 24 h of incubation, green signals were detected in the normoxic AC16 cells (Fig. 3E). This finding indicated that hypoxic exosomes could be effectively taken up by AC16 cells. To investigate whether exosomes could transfer HCG15 to receptor cells and whether HCG15 was released directly in exosomes, we incubated AC16 cells with Exo-H or Exo-N. The expression of HCG15 in the AC16 cells incubated with Exo-H was significantly increased compared with that in the cells incubated with Exo-N (Fig. 3F). These findings suggested that exosome-mediated HCG15 transfer may be involved in cardiomyocyte apoptosis.

Fig. 2. LncRNA HCG15 induced cardiomyocyte apoptosis and the production of inflammatory cytokines. A qRT-PCR was performed to examine HCG15 expression in AC16 cells after transfection of the HCG15 overexpression plasmid. B MTT assays detected the proliferation of AC16 cells after transfection of the HCG15 overexpression plasmid. TUNEL staining (C) and flow cytometry (D) were performed to measure the apoptosis of AC16 cells after transfection with the HCG15 overexpression plasmid. E The production of inflammatory cytokines by AC16 cells transfected with the HCG15 overexpression plasmid was measured by ELISA. *p value < 0.05, **p value < 0.01, ***p value < 0.001. Each assay was performed in triplicate.

LncRNA HCG15 in Exo-H induced cardiomyocyte apoptosis and the production of inflammatory cytokines

To verify whether HCG15 in Exo-H was involved in myocardial cell injury, we transfected three different siRNAs targeting lncRNA HCG15 (named siRNA-1, -2, and -3) into AC16 cells. Only siRNA-2 and siRNA-3 effectively interfered with the expression of endogenous HCG15 (Fig. 4A), and siRNA-2 showed the best interference effect, so siRNA-2 was selected for the following assays. The expression of lncRNA HCG15 in the AC16 cells incubated with Exo-H increased significantly and was attenuated by HCG15 siRNA transfection (Fig. 4B). MTT analysis showed that cell proliferation was significantly inhibited after incubation with Exo-H, but this inhibition was significantly attenuated when HCG15 siRNA was transfected after incubation with Exo-H (Fig. 4C). In particular, when cells were incubated with Exo-H, cell apoptosis increased significantly, but after incubation with Exo-H and transfection with HCG15 siRNA, cell apoptosis was strongly alleviated, as shown by TUNEL staining (Fig. 4D) and flow cytometry (Fig. 4E). In addition, inflammatory factors such as IL-6, IL-1β, and TNF-α from AC16 cells were examined by ELISA. After incubation with Exo-H, the production of inflammatory cytokines was substantially increased, but their expression levels were reduced when the cells were transfected with HCG15 siRNA (Fig. 4F). These results suggested that reducing the level of exosomal lncRNA HCG15 released from cells under hypoxia may attenuate myocardial injury.
Mechanisms of lncRNA HCG15-induced cardiomyocyte injury via exosomes released by hypoxic cells

To elucidate the possible molecular mechanisms by which lncRNA HCG15 in exosomes released from hypoxic cells mediates cardiomyocyte injury, we assessed components of the NF-κB and MAPK signaling pathways, such as p38 MAPK, JNK1/2, and ERK1/2, by western blot analysis. The results showed that the expression of phosphorylated NF-κB p65 and p38 was upregulated when AC16 cells were incubated with Exo-H (Fig. 5A). This upregulation was antagonized when cells were transfected with HCG15 siRNA after incubation with Exo-H, while the other signaling molecules did not show obvious changes (Fig. 5A), indicating that the NF-κB/p65 and p38 pathways were activated during this process. Furthermore, we blocked the NF-κB/p65 and p38 pathways with the specific signaling repressors PDTC and PD169316, respectively, and found that the apoptosis and the inhibition of cell growth were antagonized (Fig. 5B-D). Moreover, after the NF-κB/p65 and p38 pathways were blocked, the elevated levels of inflammatory cytokines induced by Exo-H were strongly attenuated (Fig. 5E). Thus, lncRNA HCG15 in exosomes released from hypoxic cells may mediate cardiomyocyte injury through the NF-κB/p65 and p38 pathways.

Fig. 3. HCG15 carried by exosomes was transferred to cardiomyocytes after hypoxia. A qRT-PCR detected the expression of HCG15 in hypoxic AC16 cells. B Exosomes isolated from hypoxic AC16 cells are shown by TEM. C Western blotting was used to detect the expression of biomarkers in the exosome (Exo) and supernatant (Sup) fractions of the medium from AC16 cells after 8 h of hypoxia. D qRT-PCR was used to detect the level of HCG15 in exosomes and their producer cells (AC16 cells) after hypoxic treatment. E Confocal microscopy showed that PKH67-labeled exosomes from hypoxic AC16 cells were taken up by AC16 cells at different times. The nuclei were stained with DAPI; green: PKH67 (magnification × 400). F The expression of HCG15 in AC16 cells after incubation with Exo-N and Exo-H for different times was detected by qRT-PCR. *p value < 0.05 vs 0 h; #p value < 0.05 vs Exo-N. Each assay was performed in triplicate.

Overexpression of lncRNA HCG15 aggravated cardiac IR injury

To investigate the role of HCG15 in cardiac IR injury, we subjected male C57BL/6J mice to IR after transfection with oe-HCG15, oe-HCG15 together with the NF-κB-specific inhibitor, or oe-HCG15 together with the p38 pathway-specific inhibitor. ST segments were significantly elevated after LAD ligation for 45 min and returned to baseline after reperfusion (Fig. 6A), indicating successful modeling. Histological analyses were performed 4 weeks after surgery. Compared with that in the sham group, the ischemia size (IS) in the IR group was substantially increased. The IR-induced mice treated with oe-HCG15 showed a significantly wider IS than the control IR-induced mice. Furthermore, we found that inhibition of the NF-κB/p65 or p38 pathway partially reversed the effect of lncRNA HCG15 overexpression (Fig. 6B, C). Correspondingly, the oe-HCG15-treated IR-induced mice had a larger infarct scar size, while blocking the NF-κB/p65 or p38 pathway with the specific inhibitors decreased the infarct scar size in the oe-HCG15-treated IR-induced mice (Fig. 6D). Echocardiography showed that the oe-HCG15-treated IR-induced mice had a higher LVEDD and LVESD and a lower ejection fraction (LVEF) and fractional shortening (LVFS).
Compared with those of the oe-HCG15-treated IR-induced mice, the cardiac functions were preserved in the oe-HCG15-treated IR-induced mice that also received the NF-κB/p65 and p38 pathway inhibitors (Fig. 6E-I). These findings indicated that overexpression of HCG15 aggravates cardiac IR injury, which was alleviated by blocking the NF-κB and p38 pathways.

Fig. 4. LncRNA HCG15 in exosomes released from hypoxic cells contributed to cardiomyocyte apoptosis and the production of inflammatory cytokines. A qRT-PCR was performed to determine the expression of HCG15 in AC16 cells after transfection with HCG15 siRNAs. B qRT-PCR was performed to detect the expression of HCG15 in AC16 cells after transfection with HCG15 siRNA and subsequent incubation with Exo-H. C MTT assays detected the proliferation of AC16 cells after transfection with HCG15 siRNA and subsequent incubation with Exo-H. TUNEL staining (D) and flow cytometry (E) were performed to measure the apoptosis of AC16 cells after the indicated treatment. F ELISAs were used to measure the production of inflammatory cytokines in AC16 cells after the indicated treatment. *p value < 0.05, ***p value < 0.001. Each assay was performed in triplicate.

DISCUSSION

In this study, we identified lncRNA HCG15 as a novel diagnostic marker; it was one of the top five lncRNAs with upregulated expression between AMI patients and healthy controls and had the highest correlation with cTnT. We discovered that HCG15 was enriched in exosomes derived from hypoxic AC16 cells and that reducing lncRNA HCG15 levels could decrease cell apoptosis, reduce the release of inflammatory factors, and promote cell proliferation in cardiomyocytes. Overexpression of HCG15 aggravated cardiac IR injury in C57BL/6J mice. To the best of our knowledge, this is the first study to reveal the function of HCG15 in AMI.

Exosomes, which are small bilayer membrane vesicles, can be extracted from most body fluids, such as urine, saliva, and plasma [20]. Lipids, proteins, RNAs (microRNAs, mRNAs, and ncRNAs), and DNA are the biological cargos of exosomes secreted by most human cells [21]. Increasing evidence has demonstrated that exosomes carrying bioactive molecules can be transferred to receptor cells, affecting various biological processes, including inflammation, metabolism, autoimmunity, tumor development, and cardiovascular diseases [22]. Notably, exosomes play an important role in mediating intercellular communication. Since Valadi et al. first reported that exosomes mediate miRNA transfer [11], accumulating evidence has suggested that proteins and miRNAs in exosomes released by cardiomyocytes can be transferred to myocardial endothelial cells and affect their functions [23,24]. Exosomes secreted by hypoxic cardiomyocytes harboring TNF-α could aggravate the apoptosis of receptor cardiomyocytes [25]. According to Yang's report, miR-30a is enriched in exosomes derived from cardiomyocytes under hypoxic stimulation, and inhibiting the release of miR-30a or exosomes led to maintenance of autophagy and reduced apoptosis in cardiomyocytes after hypoxia [26]. Furthermore, exosomal miRNA-194 led to cardiac injury and mitochondrial dysfunction in obese mice [27]. In our research, we isolated exosomes from hypoxic cardiomyocytes and identified their morphology and biomarkers. Moreover, we found that exosomes mediate intercellular communication and that exosomes derived from hypoxic cardiomyocytes could be effectively engulfed by other cardiomyocytes.
However, the precise transfer mechanisms of exosomes are still unclear. In one possible mechanism, exosomes bind to the plasma membrane of the receptor cell through a specific pathway and release their effectors into the cytoplasm. In another possible mechanism, exosomes enter receptor cells through endocytosis and fuse with the inner membrane to release their effectors [26]. Further exploration of the detailed transfer mechanism of exosomes is required in future studies.

Fig. 5. Exosomal HCG15 released from hypoxic cells promoted cardiomyocyte apoptosis and the production of inflammatory cytokines via the NF-κB/p65 and p38 pathways. A Western blotting was used to analyze the expression of NF-κB/p65, JNK, ERK, and p38 signaling pathway molecules in AC16 cells after the indicated treatments. AC16 cells were incubated with Exo-H and subsequently treated with PDTC (0.1 mg/ml) or PD169316 (10 μmol/l), and MTT assays (B) were used to detect cell proliferation. TUNEL staining (C) and flow cytometry (D) were performed to measure cell apoptosis (magnification × 200). E ELISA was used to measure the levels of inflammatory cytokines. *p value < 0.05, ***p value < 0.001. Each assay was performed in triplicate.

LncRNAs have long been believed to mediate simple transcriptional interference [28], but recent research has confirmed that these molecules are involved in regulating transcription, posttranscriptional processes, epigenetic modifications, histone modification, and protein function [29]. Numerous studies have confirmed that lncRNAs participate in AMI and are regarded as a novel group of regulators of this condition [8-10]. Exosomal lncRNAs have been shown to be involved in the incidence of diseases such as cancer [30], rheumatoid arthritis [31], cholestatic liver injury [32], and atherosclerosis [33]. However, the function and regulation of exosomal lncRNAs in AMI have not been reported. Thus, we isolated exosomes from the serum of AMI patients and healthy controls, identified 65 differentially expressed lncRNAs between the AMI and normal groups using RNA-Seq, and identified HCG15 as a new biomarker. HCG15 is located at the 6p21 locus in humans, and there is little related literature on HCG15. Along with these findings, our study provides novel insight into the function of HCG15 and reveals a potential molecule for the early diagnosis of AMI.

Based on clinical observations and animal experiments, apoptosis and inflammation play a vital role in the process of AMI and heart failure [34]. Previous studies have shown that the mitogen-activated protein kinase (MAPK) signaling pathway is stimulated in pathological processes, such as oxidative stress, IR, and inflammation, and plays a crucial role in postinfarct remodeling and heart failure after AMI [35]. c-Jun N-terminal kinase (JNK), extracellular signal-regulated kinase-1/2 (ERK1/2), and p38 MAPK are subfamilies of the MAPK signaling pathways that are associated with a range of myocardial pathologies, such as inflammation, apoptosis, cardiac hypertrophy, and heart failure [36]. Activation of the JNK and p38 pathways induces cardiomyocyte apoptosis, dysfunction, and fibrosis [35], while ERK1/2 has a bidirectional role in apoptosis [2].

Fig. 6. Overexpression of HCG15 aggravated cardiac IR injury. A ECG of mice in response to cardiac IR. B Representative images of Evans blue and triphenyltetrazolium chloride (TTC) double-stained myocardial sections from mice at 4 weeks after cardiac IR; scale bar = 2 mm. C Myocardial IS in the indicated groups at 4 weeks after surgery; n = 5 per group, **p < 0.05. D Representative images of Masson staining of ventricular sections from mice at 4 weeks after cardiac IR and quantitative analyses of infarct size; scale bar = 1 mm; n = 5 per group, **p < 0.05. E-I Representative images of M-mode echocardiography (E) and the relative indices LVEDD (F), LVESD (G), LVEF (H), and LVFS (I) in mice at 4 weeks after cardiac IR; n = 5 per group, **p < 0.05. NF-κB antagonist: SN50; p38 MAPK antagonist (RA): ralimetinib (LY2228820) dimesylate.

In this research, we detected molecules of the JNK, ERK1/2, and p38 pathways by western blotting. The results showed that only the level of phosphorylated p38 was increased when AC16 cells were incubated with Exo-H, while the other molecules showed no significant changes. These results suggested that the p38 MAPK pathway was activated in cardiomyocytes treated with Exo-H. Moreover, this activation was weakened when HCG15 was silenced in AC16 cells. NF-κB is a nuclear transcription factor that regulates gene expression related to inflammation and apoptosis during various pathologies, including AMI [36]. Under normal conditions, NF-κB is mainly located in the cytoplasm in an inactivated state, as it is sequestered by IκB family proteins, including IκB-α. Once the cell is stimulated, IκB-α is phosphorylated by IKK and degraded, and the NF-κB subunits transfer from the cytoplasm to the nucleus and regulate the expression of inflammatory cytokines [37]. The NF-κB subunit p65 has been shown to be involved in MI [38-40]. We found that the NF-κB/p65 pathway was also activated in cardiomyocytes treated with Exo-H, and reducing HCG15 levels had an antagonistic effect. Moreover, blocking the NF-κB and p38 pathways with specific inhibitors attenuated cell apoptosis and inflammatory cytokine production. In addition, overexpression of lncRNA HCG15 aggravated cardiac IR injury, while blocking the NF-κB and p38 pathways alleviated cardiac IR injury. These results confirmed that lncRNA HCG15 in exosomes released from hypoxic cells mediated myocardial injury through the NF-κB/p65 and p38 pathways, although the detailed process should be further elucidated.

CONCLUSIONS

In summary, lncRNA HCG15 levels were significantly higher in exosomes isolated from AMI patients and hypoxic cells than in the controls. HCG15 released by hypoxic cells contributes to cardiomyocyte apoptosis and the production of inflammatory cytokines by activating the NF-κB/p65 and p38 pathways. This study not only helps elucidate the function of exosomal lncRNAs in the pathogenesis of AMI but also contributes to the development of novel therapeutic strategies.
A longitudinal comparison of charge-based weights with cost-based weights

The diagnosis-related group weights that determine prices for Medicare hospital stays are recalibrated annually using charge data. Using data from fiscal years 1985 through 1987, the authors show that differences between these charge-based weights and cost-based weights are increasing only slightly. Charge-based weights are available in a more timely manner and, based on temporal changes in the weights, we show that this is an important consideration. Charge-based weights provide higher payments than cost-based weights to hospitals with higher case-mix indexes, but have little effect on hospitals with low cost-to-charge ratios, high capital costs, or high teaching costs.

Introduction

The Health Care Financing Administration (HCFA) uses a prospective payment system (PPS) to pay for hospital stays of Medicare patients. The amount of the payment for a stay is proportional to the weight assigned to the diagnosis-related group (DRG) of the stay. The weight is intended to measure the average amount of resources required to treat a patient in a given DRG relative to the amount of resources required to treat patients in other DRGs. For example, patients in a DRG with weight 2 should cost twice as much to treat, on average, as patients in a DRG with weight 1. We compare DRG relative weights computed from operating costs, as was done in the first PPS year (fiscal year 1984), with those computed only from charges. Operating costs are computed using hospital-specific cost report data to transform case-level data on charges and length of stay into an estimate of the cost of each case. Since fiscal year (FY) 1986, weights have been recomputed annually using the charges for each case.

Criteria for choice between bases

It is impossible to determine theoretically whether cost-based weights or charge-based weights are more accurate measures of the relative operating costs of DRGs. Cost-based weights may be more accurate for several reasons: They capture variation among hospitals in cost-to-charge ratios; they capture variation among departments within a hospital in cost-to-charge ratios; and they remove the costs of capital and the direct cost of medical education, which are paid on a pass-through basis and which therefore should not be counted in the relative weights. However, some of the variation among hospitals and among departments in cost-to-charge ratios is the result of variations in accounting methodology. Also, there exists variation among services in the same department in the cost-to-charge ratio, and this variation is not captured in the cost estimate. Nevertheless, if other things were equal, then cost-based weights likely would be superior to charge-based weights as measures of relative resource use. Other things are not equal, however. Cost-based weights would be based on data 1-2 years older than the data on which charge-based weights could be calculated, because cost reports are not available for at least a year after the end of a fiscal year. If the relative intensity of DRGs changes rapidly, then the cost-based weights may be less accurate than more recent charge-based weights. Charge-based weights also have an advantage in that they are simpler to administer. Another advantage of charge-based weights is that they have been found empirically to have greater dispersion than cost-based weights (Cotterill, Bobula, and Connerton, 1986; Rogowski and Byrne, 1990; Price, 1989).
It is believed that, using the cost methodology, the resources needed by DRGs with high relative weights are underestimated and those with low relative weights are overestimated. Three reasons for this compression are usually given: (1) Each hospital is assigned just one per diem for routine costs and one per diem for special care costs, yet per diem nursing costs almost surely vary by DRG. (2) A single cost-to-charge ratio is used within each ancillary department. Many believe that prices are set so that low-cost services subsidize higher cost services, even within a particular ancillary department. To the extent that this is true, the cost-based weights of DRGs that use those low-cost services (which tend to be low-cost DRGs) will be overestimated, and the cost-based weights of high-weight DRGs that use the higher cost services will be underestimated. (3) Errors in classification of cases into DRGs will tend to make the weights more similar than they should be. Pettengill and Vertrees (1982) used simulation to show the effect of varying amounts of classification error on weight compression. The empirical finding that charge-based weights have more dispersion leads to the possibility that they are also more accurate measures of relative resource intensity. Most of the sources of compression for cost-based weights also affect charge-based weights, however. Thus it might be that the increased dispersion in charge-based weights occurs in the mid-range of the distribution and is the result of random error related to the variance across hospitals in the cost-to-charge ratios, rather than occurring at the extremes and being the result of compression. Rogowski and Byrne (1990), however, showed that the difference between cost- and charge-based weights calculated on fiscal year 1984 PPS data was not distributed randomly. Rather, it was concentrated at the extremes, with charge-based weights being higher than cost-based weights for patients in high-weight DRGs and lower for patients in low-weight DRGs.

Study questions

Our study describes differences between charge-based weights and cost-based weights and the effect of these differences on HCFA and hospitals. First, we examine how the form of the weights affects growth in the national case-mix index (CMI). The CMI for any set of cases is the average of the DRG weights; the national CMI is the case-weighted average of the weights for all cases. Under normal procedures for recalibration of the DRG weights, an increase in the CMI for the set of all Medicare cases will be translated into a proportional increase in the price that HCFA pays for the average case. Next, we look at the extent to which cost- and charge-based weights have diverged during the period 1985-87. Cotterill, Bobula, and Connerton (1986) first compared cost- and charge-based weights calculated on 1981 data and found them to be very similar. Rogowski and Byrne (1990) used data from the first year of PPS and found less similarity. Price (1989) used a 1986 bill file and found even more divergence. These studies use the same basic methodology but differ in methodological detail and in sample selection rules. Therefore, although the studies clearly suggest the direction of change, they do not provide the magnitude of the divergence. We also use a similar methodology, and we offer the first study examining this issue using a 3-year longitudinal data base and exactly the same methodology in each year. We then study the relative compression of cost- and charge-based weights, repeating the analyses of previous researchers using data from 1985 through 1987.
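Because every comparison in these study questions rests on the CMI, it helps to make the definition concrete: the CMI for any set of cases is simply the average of the DRG weights of those cases. The DRG codes and weights below are hypothetical, not values from the study.

```python
# CMI for a set of cases = average of the DRG weights of those cases;
# summing over all cases gives the case-weighted national CMI.
drg_weights = {"DRG_089": 1.18, "DRG_127": 1.03, "DRG_209": 2.31}

hospital_cases = ["DRG_127", "DRG_127", "DRG_089", "DRG_209"]
cmi = sum(drg_weights[drg] for drg in hospital_cases) / len(hospital_cases)
print(f"hospital CMI = {cmi:.4f}")  # 1.3875
```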
We repeat the analyses of previous researchers using data from 1985 through 1987. How the form of the weights affects the distribution of payments among hospitals is examined next. We study how charge- and cost-based weights affect the CMI of groups of hospitals and individual hospitals. We also examine the relationship between the difference of the two CMIs and the level of the hospitals' CMI and three indicators of high charges: cost-to-charge ratio, capital costs, and direct cost of medical education. To the extent that charge-based weights increase the CMI of hospitals with high values of the cost-based weight CMI, it can be viewed as decompression of the weights and a desirable feature of charge-based weights. A priori, we would expect that charge-based weights are positively biased for DRGs that are over-represented at hospitals that have high charges relative to costs. Therefore, to the extent that charge-based weights increase the CMI of hospitals with low values of the cost-to-charge ratio, high capital costs, or high teaching expenditures, it can be viewed as error in the weights and an undesirable feature of charge-based weights. Finally, we look at how each set of weights changes over time. Charge-based weights allow HCFA to use more recent data to calculate weights than would be possible with cost-based weights. If 1985 weights are very similar to 1987 weights, then the value of this advantage will be much less than if the 1985 weights differ greatly from the 1987 weights.

Data

We used a 20-percent random sample of Medicare inpatient stays in short-term hospitals for fiscal years 1985, 1986, and 1987. To increase comparability with Rogowski and Byrne (1990), the sample excludes cases at exempt hospitals and units in PPS States and also excludes cases in Maryland and New Jersey, which were not covered by PPS. (HCFA includes cases from hospitals in all States in computing DRG relative weights.) Although New York and Massachusetts did not join PPS until FY 1986, their bills are included in the sample in all 3 fiscal years. Because of difficulties in determining costs, we also had to exclude the approximately 1.5 percent of all cases that were from "all-inclusive" providers. To estimate costs, we used cost reports from PPS years 1, 2, 3, and 4. The cost report for PPS year 1 (PPS 1) covers the hospital's FY that began during Federal FY 1984, the first year of PPS. Later PPS years are defined similarly, with PPS 2 being related to Federal FY 1985, etc. The cost estimate was derived by determining the PPS year containing the day of discharge for each stay and using data from the cost report for that PPS year. However, in some cases, the appropriate cost report was not available. In these cases, we used the closest available preceding cost report. Missing cost reports were rare, except for FY 1987, in which the costs of less than 6 percent of the stays were based on an earlier cost report because the PPS 4 cost report was not available.

Estimating costs

The method used to estimate the cost of each case is similar to that used by HCFA in calculating the FY 1984 and FY 1985 DRG relative weights and is described in Newhouse, Cretin, and Witsberger (1989). Cost report data are used to generate ratios of operating costs to charges for each of 12 ancillary departments and to estimate the per diem cost of routine care and the per diem cost of care in a special care setting such as an intensive care unit or a coronary care unit.
(The ratios exclude capital costs and direct costs of medical education.) To estimate ancillary costs for a particular case, ancillary charges for that case in each of 12 departments are multiplied by the appropriate cost-to-charge ratio and then summed. Per diem costs are calculated as the number of days spent in routine care times the routine care per diem plus the number of days spent in special care units times the special care per diem. The total cost of the case is the sum of ancillary costs and per diem costs. Before calculating per diem cost, routine care and special care per diems were inflated (or deflated) according to the number of months from the center of the hospital's fiscal year until the month of admission. An annual rate of 6 percent was used for calendar year 1984 to be consistent with the work of Rogowski and Byrne (1990). Eight percent was used for calendar years 1985 through 1987; this is the annual rate of increase in routine care observed from PPS 2 to PPS 4.

Calculating weights

Costs and charges were standardized for differences in input prices, teaching, and disproportionate share. The standardized costs and charges were averaged within each DRG and fiscal year. We decided to use the same grouper for each fiscal year's data in order to increase comparability among the data for different years. The grouper used in FY 1988 (grouper 5) was chosen, as it was the latest grouper available to us at the time. We omitted from the cost-based weight calculation cases that were outside three standard deviations of the distribution of the log of costs for the FY and DRG. Similarly, we omitted from the charge-based weight calculation cases that were outside three standard deviations of the distribution of the log of charges for the FY and DRG. Thus, different cases were used for each weight for each year. We also omitted all DRGs that did not have at least 10 sample cases in each of the 3 years. Our analyses use a total of 417 DRGs. Table 1 shows the sample sizes that we used. Although the trimming procedure eliminated almost exactly the same number of cases from the charge-based weights and from the cost-based weights, the identity of many of the eliminated cases differed for the two weights. The total column of Table 1 gives the number of cases that were used to compute hospital CMIs.

The final step in calculating weights is normalization. We used two procedures to normalize the weights. In the first procedure, we divided the DRG-fiscal year average of costs (charges) by the case-weighted average of costs (charges) for the same fiscal year. Thus the case-weighted CMI equals 1 for each year and type of weight, and each year's weights are independent of other years' weights and are directly comparable. We call these weights the "relative cost-based weights" and the "relative charge-based weights." The second procedure used for normalization is designed to test whether the actual normalization procedure that HCFA uses affects the comparability of cost- and charge-based weights and whether the use of charge- rather than cost-based weights affects the growth in the CMI. In this normalization procedure, we applied the 1985 cost-based weights to the 1986 file to compute a 1986 CMI and normalized the 1986 cost-based weights to that CMI; the 1986 weights were in turn applied to the 1987 file, and we then normalized the 1987 cost-based weights to the resulting CMI. An analogous procedure was used for the charge-based weights. We call these weights the "CMI-adjusted" cost- and charge-based weights.
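The per-case cost estimate just described is a simple linear combination, as the following minimal Python sketch shows; the cost-to-charge ratios and per diems are hypothetical, and only 3 of the 12 ancillary departments are shown.

```python
# Hypothetical cost-report figures for one hospital (illustrative values).
ccr = {"lab": 0.60, "radiology": 0.55, "pharmacy": 0.45}  # 3 of 12 depts
routine_per_diem, special_per_diem = 310.0, 820.0
annual_inflation = 0.08   # the 8% rate used for calendar years 1985-87

def case_cost(ancillary_charges, routine_days, special_days,
              months_from_fy_center):
    # Ancillary costs: departmental charges times cost-to-charge ratio.
    ancillary = sum(chg * ccr[dept]
                    for dept, chg in ancillary_charges.items())
    # Per diems inflated (or deflated) from the center of the hospital's
    # fiscal year to the month of admission.
    factor = (1 + annual_inflation) ** (months_from_fy_center / 12)
    per_diem = (routine_days * routine_per_diem
                + special_days * special_per_diem) * factor
    return ancillary + per_diem

print(case_cost({"lab": 900.0, "radiology": 1200.0, "pharmacy": 400.0},
                routine_days=5, special_days=2, months_from_fy_center=4))
```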
This procedure is not exactly the same as the one in use at HCFA because the file HCFA uses to calibrate each new set of weights is always more than 1 year old. However, it seemed the best we could do with our 3-year time series.

Comparison with earlier studies

In reporting our analyses, we compare our statistics with earlier studies. 3 In several cases, we chose elements of our methodology to match those of Rogowski and Byrne (1990) in order to increase comparability. Nevertheless, there are methodological differences that make it incorrect to interpret differences between their study of 1984 discharges and ours as being entirely the result of temporal factors. Two methodological choices have large, but opposite, effects on measures of the similarity of cost and charge weights. We trimmed separately for the cost distribution and for the charge distribution, but Rogowski and Byrne excluded a case from the analysis if it was beyond three standard deviations in either distribution. 4 We believe our method is a more accurate representation of how either system would be implemented. Using the trimming method from the earlier study increases the percentage of cases for which the charge-based weight is within 5 percent of the cost-based weight by approximately 8 percentage points. The second methodological difference concerns sample selection. We include cases from New York and Massachusetts. These States were not covered by PPS in 1984 and thus were excluded from the Rogowski and Byrne study. These cases have much greater than average congruence between charges and costs. We repeated the 1985 calculations dropping these two States, and the percentage of cases with weights within 5 percent declined from 70.4 percent to 65.7 percent. Thus, the inclusion of New York and Massachusetts increases the similarity between the cost and charge weights. There were three other methodological differences that had smaller effects on the comparison of costs and charges. We defined our sample of DRGs as those with at least 10 cases per year, but Rogowski and Byrne used a more complicated rule. We used the 1988 grouper; they used the 1984 grouper. Because 1984 was the first year of PPS, when they limited their sample to PPS cases, they used only partial-year data for most hospitals. Other studies probably have even more differences from ours than Rogowski and Byrne. Thus, although we report comparative data, these data should not be interpreted as measuring trends.

3 Rogowski and Byrne (1990) reported the results of several different computations. We compare our findings with the analysis that they labeled "current," which corresponds most closely to the methodology that we use here.

Analysis methods

In examining the divergence of cost- and charge-based weights and the amount of dispersion in the weights, we weighted the data by the number of cases in the cost-based weight calculation. Almost all calculations were also performed using charge-based weights, but the findings were virtually indistinguishable. To compare cost- and charge-based weights, we examined the distribution of cases in DRG categories defined by the difference between the cost- and charge-based weights expressed as a percentage of the cost-based weight. Similar distributions were used by all previously published studies in this area. 5 Although the entire distribution is of some interest, it is also desirable to have a summary measure.
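The two trimming rules discussed above differ in a way a short sketch makes explicit; the lognormal cost and charge distributions below are invented for illustration.

```python
import numpy as np

def trim_3sd(values):
    """Keep cases within 3 standard deviations of the log distribution."""
    logs = np.log(values)
    lo, hi = logs.mean() - 3 * logs.std(), logs.mean() + 3 * logs.std()
    return (logs >= lo) & (logs <= hi)

rng = np.random.default_rng(1)
costs = rng.lognormal(8.0, 0.5, 5000)
charges = costs * rng.lognormal(0.4, 0.2, 5000)   # charges exceed costs

keep_cost, keep_charge = trim_3sd(costs), trim_3sd(charges)

# This study: trim each distribution separately, so different case
# sets feed the cost-based and the charge-based weights.
cost_sample, charge_sample = costs[keep_cost], charges[keep_charge]

# Rogowski and Byrne: drop a case that is an outlier in EITHER one.
both_sample = costs[keep_cost & keep_charge]
print(len(cost_sample), len(charge_sample), len(both_sample))
```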
Picking a single range from this distribution (e.g., the percentage of cases with cost- and charge-based weights more than 5 percent apart) appears to us to be somewhat arbitrary. Thus, we prefer a continuous measure and, therefore, report the case-weighted average of the absolute value of the difference between cost- and charge-based weights. Because payments are roughly proportional to the DRG weight, this measure is roughly proportional to the fraction of dollars that would be redistributed across cases if one moved from one system of weights to the other. To examine how cost- and charge-based weights affect hospital CMIs, we use both case-weighted and hospital-weighted analyses. To examine how important the weight methodology is to individual hospitals, we report the mean value of the absolute value of the percent difference between cost- and charge-based CMIs. When this statistic is calculated after weighting by the number of cases at each hospital, it is roughly proportional to the fraction of dollars that would be redistributed among hospitals if one moved from one system of weights to the other.

Growth in the national index

Table 2 shows how growth in the national CMI (i.e., the case-weighted average DRG weight) is affected by the choice of charge-based weights rather than cost-based weights. This table presents the CMIs for the cost-based and charge-based weights, along with their case-weighted standard deviations, for both relative and CMI-adjusted weights. (The relative weights average 1.000 for all years.) Using the charge-based weights, the CMI increased by 4.273 percent from 1985 to 1986 and by about 2.5 percent from 1986 to 1987. The CMI of the charge-based weights increases somewhat more than the CMI of the cost-based weights. By 1987, the CMI based on charge-based weights exceeded the CMI based on cost-based weights by three-tenths of 1 percent. 6

5 Cotterill, Bobula, and Connerton (1986), Rogowski and Byrne (1990), and Price (1989). Price expressed the difference between the two weights as a percentage of the charge-based weight.

6 These CMIs differ from those used for payment purposes because they are based on the FY 1988 grouper and were standardized on different year files. In addition, our 1985 cases include New York and Massachusetts, whose rate of increase in the CMI from 1985 to 1986 was substantially higher than average. The 4.3 percent rate of increase shown here for 1985 to 1986 is higher than the actual rate of increase in the paid CMI (3.0 percent according to Carter, Newhouse, and Relles, 1990). The rate of increase from 1986 to 1987 is more similar (2.5 percent here versus 2.4 percent in the paid CMI).

Compression of weights

Previous studies found that cost-based weights were compressed relative to charge-based weights. We observe the same phenomenon in this study, as shown by the standard deviations of the weights presented in Table 2. In all 3 years, the standard deviations of the cost-based weights are smaller than those of the charge-based weights. The argument that cost weights are compressed is based on the assumption that high-weight DRGs are undervalued and low-weight DRGs are overvalued. The larger standard deviation for charge-based weights than for cost-based weights might be in large part the result of increased variance in the middle range rather than the desired decompression.
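Both the redistribution percentage and the compression comparison reduce to one-line computations over per-case weights, as the following minimal sketch shows; the seven hypothetical cases are chosen so that the redistribution works out to 4.29 percent, mirroring the 1985 figure, and they are not study data.

```python
import numpy as np

# Hypothetical per-case weights under the two systems (one entry per
# case, so the means below are automatically case-weighted).
cost_w   = np.array([0.55, 0.55, 1.00, 1.00, 1.00, 2.10, 2.10])
charge_w = np.array([0.52, 0.52, 1.00, 1.01, 1.01, 2.21, 2.21])

# Redistribution percentage: case-weighted mean absolute difference,
# roughly the fraction of dollars moved by switching weight systems.
redistribution_pct = 100 * np.abs(charge_w - cost_w).mean()

# Compression check: charge-based weights show greater dispersion.
print(round(redistribution_pct, 2), cost_w.std(), charge_w.std())
```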
However, as shown in Table 3, the charge-based weights tend to be higher than cost-based weights for large weights and to be lower for smaller weights, yielding mean relative differences that are positive for the larger weights and negative for the smaller weights. This is the same pattern of differences found by Rogowski and Byrne (1990). This reveals clearly the relative decompression of charge-based weights. Because surgical DRGs tend to have higher weights than medical DRGs, earlier studies found that surgical DRGs have higher charge-based weights. Although not reported in detail here, we also found that the mean charge-based weights for surgical DRGs were higher than mean cost-based weights, and were lower than the mean cost-based weights for medical DRGs. For example, in 1985, the charge-based weights for three-quarters of the medical cases were lower than their cost-based weights, but only 40 percent of surgical cases had lower charge-based weights than cost-based weights.

The evidence on whether decompression is occurring over time is mixed. Because the standard deviations of either cost- or charge-based relative weights do not change over the years, these measures indicate no decompression in relative weights over time. The increases in the standard deviations of the CMI-adjusted weights are primarily the result of differences in case mix rather than changes in the relative weights of similar cases. All the weights in each set of CMI-adjusted weights are multiplied by the same CMI, and this increases the magnitude of the standard deviation of the set of weights. On the other hand, the magnitudes of differences between charge- and cost-based weights increase from 1985 to 1987, increasing faster for the top and bottom 25-percent weights than for the mid-range 50 percent. This indicates a trend of slowly increasing differences between the weights, with a good portion of the increases occurring in the larger and smaller weights. This suggests a slight decompression of the charge weights over time.

Trends

Correlations between the cost- and charge-based weights are very high. Within each year, the correlation between the cost- and charge-based weights exceeds 0.997. These correlations are consistent with the results of previous studies (Cotterill, Bobula, and Connerton, 1986; Rogowski and Byrne, 1990; Price, 1989). This strong linear relationship between the weights does not reveal the degree of dispersion among the sets of weights. Despite the high correlation, substantial differences between the cost- and charge-based weights exist for some DRGs in each year. Table 4 presents the distribution of cases among DRG categories defined by the relative difference between the cost- and charge-based weights. The relative difference for each DRG is calculated as the DRG charge-based weight minus the cost-based weight, expressed as a percentage of the cost-based weight. Distributions are presented for both relative weights (i.e., weights with an average value of 1 in each year) and CMI-adjusted weights (i.e., weights with an average value that reflects each year's increase in the frequency of higher weighted cases). (It is not necessary to distinguish between the two sets of weights in examining correlations because the CMI-adjusted weight is a multiple of the same year's relative weight.)
The results from the two sets of weights are quite similar and indicate only a slight increase in the divergence of cost- and charge-based weights over this 3-year period. The redistribution percentages are reported in the last line of Table 4. For each relative weight calculation, this value is merely the case-weighted average of the absolute value of the difference between the cost- and charge-based weights, multiplied by 100. For the CMI-adjusted case, the same average absolute value is then divided by the average cost-based weight. In either case, the redistribution percentage represents the fraction of DRG weights that are redistributed when one moves from one weight system to another. The conclusion to be drawn from this statistic is similar to the conclusion from studying the entire distribution: The difference between cost- and charge-based weights increased only slightly from 1985 through 1987. For relative weights there is a 4.29 percent redistribution in 1985, increasing to 4.47 percent in 1986 and to 4.54 percent in 1987. Again, the magnitude of the redistribution is similar for each year's relative weights as for the same year's CMI-adjusted weights.

A summary of the relative difference between cost- and charge-based relative weights found in this study is given in Table 5, along with the findings of previous studies. (Table 5, "Comparison of differences in weights with results of other studies: 1985-87," reports the number and percent of cases with absolute differences less than or equal to 5 and 10 percent for each of 1985, 1986, and 1987 and for the earlier studies of Cotterill, Bobula, and Connerton (1981 data), Rogowski and Byrne (1984 data), and Price (1986 data); the table values are not reproduced here.) Although, as discussed earlier, there are some methodological differences among the studies, the results of all the PPS studies are roughly similar and consistent with the very small trend evident in our data. Our 1985 results are numerically similar to those of Rogowski and Byrne (1990), who analyzed 1984 weights. Price (1986 weights) found only slightly more divergence than we did for fiscal year 1986, and this difference is likely explainable by differences in methodology and case selection. 7 All the PPS results show much more divergence between cost- and charge-based weights than did the analysis of 1981 data.

7 Price based his analysis on all Medicare Provider Analysis and Review (MEDPAR) PPS cases plus Puerto Rico; we used only a 20-percent sample but omitted Puerto Rico because it was not on PPS during our time period. Price used only PPS 2 cost reports; we used whichever cost report corresponded to the day of discharge. Price used a 10-percent rate of inflation compared with our 8 percent, and different imputation methods were used for cases with out-of-range cost-to-charge ratios.

Table 6 gives the distributions of hospitals according to categories defined by the difference between the hospital's charge- and cost-based CMIs, expressed as a percentage of the cost-based CMI. (Table 6, "Distribution of hospitals by difference between cost- and charge-based hospital case-mix index (CMI): 1985-87," categorizes hospitals from "more than 6 percent less" through "equal" to "more than 6 percent more"; the values are not reproduced here. NOTES: The redistribution percentage is the mean absolute difference between the charge- and cost-based CMIs expressed as a percentage of the cost-weight CMI. This statistic is hospital-weighted; see Table 7 for case-weighted values. SOURCE: Carter, G.M., and Farley, D.O., RAND, Santa Monica, California, 1991.)
The first three columns of the table give the indexes based on the relative weights, and we discuss these findings first. Roughly one-third of hospitals would have their CMIs change by less than one-half of 1 percent by changing the basis for calculating DRG weights. More than 90 percent of hospitals would have their CMIs change by 2 percent or less. The redistribution percentage, presented at the bottom of Table 6, is the percentage by which a hospital's revenue would change in going from cost-based weights to charge-based weights. Under relative weights, the typical hospital would have seen its revenues change by 1.04 percent in 1985 and 1.18 percent in 1987. This slight trend toward increased dispersion over time of cost- and charge-based weights is also visible in the whole distribution and is consistent with the findings of the DRG-level analysis.

It is clear from Table 6 that using charge-based weights rather than cost-based weights causes more hospitals to lose money than it causes to gain money. Because the CMI-adjusted DRG weights give higher average values to charge-based weights than to cost-based weights, all hospitals do relatively better with charge-based weights than cost-based weights under the CMI-adjusted weights than under the relative weights. The effect of adding the CMI adjustment to the relative weight distribution, where most hospitals had negative values, is to make the cost- and charge-based weights more similar than under the relative weights. This is most visible by comparing the redistribution percentages, which, for 1986, are 1.06 for relative weights and 0.97 for the CMI-adjusted weights; for 1987, the corresponding figures are 1.18 and 1.06. This is also visible in the whole distribution of hospitals. For example, in 1987, 91.6 percent of hospitals have cost-based and charge-based relative weight CMIs that differ by 2 percent or less, while 93.2 percent have CMI-adjusted CMIs that differ by 2 percent or less. Because the CMI-adjusted weights are a closer approximation to the calculations used by HCFA, they probably represent the cumulative effect over a 3-year period of the use of charge-based weights rather than cost-based weights more accurately than relative weights. On the other hand, the relative weights show what would happen in any one year in which HCFA changed the basis of the DRG-weight calculation.

As we show directly later, the asymmetry in the number of gaining and losing hospitals is the result of the fact that small hospitals tend to be worse off using charge-based weights and large hospitals tend to be worse off using cost-based weights. Consequently, if one examines Table 7, which gives the distribution of cases (rather than the distribution of hospitals shown in Table 6), one sees much greater symmetry in gainers and losers. About 98 percent of the cases go to hospitals whose charge-based CMI differs from its cost-based CMI by 2 percent or less. Virtually all cases go to hospitals with CMIs within 4 percent. This result holds for all 3 years and for both relative and CMI-adjusted indexes. The adjusted indexes have slightly different distributions that show more cases going to hospitals for which charge-based weights increase the CMI. The case-weighted redistribution percentage shows that the typical case goes to a hospital whose 1985 CMI would change by about three-quarters of 1 percent by changing weight bases.
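The hospital-level CMIs and the two weighting schemes for the redistribution statistic can be sketched in a few lines; hospitals A, B, and C and their weights below are invented for illustration.

```python
import pandas as pd

# Hypothetical cases: hospital id plus the DRG weight of each stay
# under the two weight systems.
df = pd.DataFrame({
    "hospital": ["A", "A", "A", "B", "B", "C"],
    "cost_w":   [0.80, 1.20, 1.00, 2.00, 1.80, 0.90],
    "charge_w": [0.78, 1.19, 0.99, 2.08, 1.86, 0.87],
})

# A hospital's CMI is the average DRG weight over its cases.
cmi = df.groupby("hospital").agg(cost_cmi=("cost_w", "mean"),
                                 charge_cmi=("charge_w", "mean"),
                                 n=("cost_w", "size"))
cmi["pct_diff"] = 100 * (cmi.charge_cmi - cmi.cost_cmi) / cmi.cost_cmi

hospital_weighted = cmi.pct_diff.abs().mean()
case_weighted = (cmi.pct_diff.abs() * cmi.n).sum() / cmi.n.sum()
print(cmi, hospital_weighted, case_weighted, sep="\n")
```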
Again, there is a slight trend toward divergence of the cost-based and charge-based weights. (Table 7, "Distributions of cases by difference between cost- and charge-based hospital case-mix index (CMI), by Federal fiscal year: 1985-87," uses the same categories as Table 6, from "more than 6 percent less" to "more than 6 percent more"; the values are not reproduced here.)

Hospital characteristics

The differences between charge- and cost-based CMIs for rural and urban hospitals by bed size, hospital teaching status, and disproportionate-share status are presented in Table 8. Positive differences indicate that hospitals have higher CMIs using charge-based weights; negative differences indicate higher CMIs using cost-based weights. Although mean differences by hospital characteristics in general are fairly small, usually less than 1 percent, clear patterns are observed across the characteristics. In all years and for both types of CMIs, the charge-based CMI is smaller than the cost-based CMI for rural hospitals and larger for urban hospitals. For example, the 1985 charge-based CMI was 0.77 percent less than the same cost-based CMI for rural hospitals, but it was 0.22 percent greater for urban hospitals. There also appear to be additive effects of hospital bed size and urban or rural location. The percent difference between charge- and cost-based CMIs increases with bed size for both rural and urban hospitals. For equivalent-size hospitals, however, the cost-based indexes for the rural hospitals are larger relative to the charge-based indexes than for urban hospitals.

Teaching status and disproportionate-share status also have substantial effects on differences in the CMIs. Hospitals with no graduate medical education program have lower charge-based CMIs than cost-based CMIs. Minor teaching hospitals, defined as those with intern- or resident-to-bed ratios of less than 0.25, have an average charge-based CMI that exceeds their average cost-based CMI. For major teaching hospitals, those with ratios of 0.25 or higher, charge-based CMIs typically are more than 1 percent higher than cost-based CMIs. Disproportionate-share hospitals have higher indexes under charge-based CMIs than under cost-based CMIs. Over the 3 years, the absolute magnitude of the difference between charge- and cost-based indexes increases slightly for both negative and positive differences. The case-mix adjustment is roughly equivalent to adding, to the percent difference in each category, an amount equal to the percentage by which the charge-based CMI for all hospitals exceeds the cost-based CMI for all hospitals.

Index magnitude and high charges

Our last hospital-level analysis was motivated by a desire to compare the extent to which the decompression accomplished by using charge-based weights helped hospitals with high CMIs with the extent to which it helped hospitals with high charges relative to costs. To the extent that charge-based weights increase the CMI of hospitals with high values of the cost-based CMI, it can be viewed as the result of decompression of the weights and a desirable feature of charge-based weights. To the extent that charge-based weights increase the CMI of hospitals with unusually low cost-to-charge ratios (CCRs), it can be viewed as an undesirable feature of charge-based weights. Table 9 reports three regressions where the dependent variable is the difference between the charge-based CMI and the cost-based CMI.
The difference in the CMIs was calculated without the case-mix adjustment. The univariate models show that 37 percent of the variation among hospitals in the difference between the two CMIs is linearly related to the hospital's CMI, 8 compared with only 6 percent for the CCR. The multivariate model includes both variables. It shows that adding the CCR to the model including only the CMI causes only a tiny increase in explanatory power and only slightly decreases the coefficient on the CMI. On the other hand, the coefficient of the CCR is reduced to 15 percent of its previous value. We also investigated whether the charge-based CMI helped hospitals with two particular characteristics that we postulated might provide hospitals with an unfair advantage in PPS payments. The univariate correlation of the difference between the two CMIs with the fraction of total costs that are capital costs is not even statistically significant. The univariate R² of the difference between the two CMIs with the fraction of total costs that are direct medical education costs is only 7.7 percent. These data and Table 9 thus demonstrate that the charge-based weights are much more likely to benefit hospitals with high values of the CMI than to benefit hospitals with low values of the CCR, high capital costs, or high teaching costs. The data in Table 9 concern only the 1987 results. We did, however, perform similar analyses for the other 2 years, with similar results, which we do not report in detail here.

Changes over time

The final subject of our empirical analysis is how weights change over time. One would expect weights to change as new technology and cost-saving measures have a differential effect on various DRGs, and as coding improves. Charge-based weights allow HCFA to use more recent data to calculate weights than would be possible with cost-based weights. If 1985 weights are similar to 1987 weights, then the value of this advantage will be much less than if the 1985 weights differ from the 1987 weights. We will compare the 1985 charge-based weights to the 1987 CMI-adjusted charge-based weights and also compare the 1985 cost-based weights to the 1987 CMI-adjusted cost-based weights. We chose the CMI-adjusted weights rather than relative weights for 1987 because the relative weights are affected by both the relative intensity of each DRG and the distribution of cases across DRGs. 9 The normalization factor is the way that the recalibration calculation adjusts for temporal change in the distribution of cases. Roughly speaking, the 1985 weights are as different from the 1987 CMI-adjusted weights as cost-based weights are from charge-based weights in the same year. The first column of Table 10 compares 1985 charge-based weights with 1987 CMI-adjusted charge-based weights. The second column provides the same data for cost-based weights. From 1985 through 1987, charge-based weights diverged so that only 66.0 percent of 1987 cases were in DRGs where the 1985 weight was within 5 percent of the 1987 weight. This is almost identical to the 66.3 percent of 1987 cases in which the 1987 charge-based weight was within 5 percent of the 1987 cost-based weight. The redistribution percentage shows that using charge-based weights calculated on the 1985 file would have changed the payment of the typical case by 4.55 percent, compared with using the 1987 charge-based weights. The cost-based weights diverged about the same amount as the charge-based weights during this period.
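The Table 9 regressions discussed above are ordinary least squares fits of the CMI difference on the CMI level and the CCR. A minimal sketch follows; the simulated hospitals and coefficients are invented for illustration and are not the study's estimates.

```python
import numpy as np

def ols_r2(y, X_cols):
    """R-squared from an ordinary least squares fit with an intercept."""
    X = np.column_stack([np.ones(len(y))] + list(X_cols))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

rng = np.random.default_rng(2)
n = 500
cmi = rng.normal(1.0, 0.15, n)    # hospital cost-based CMI level
ccr = rng.normal(0.6, 0.08, n)    # hospital cost-to-charge ratio
# Hypothetical CMI difference driven mainly by the CMI level.
diff = 0.04 * (cmi - 1) - 0.005 * (ccr - 0.6) + rng.normal(0, 0.01, n)

print(ols_r2(diff, [cmi]))        # univariate: CMI level
print(ols_r2(diff, [ccr]))        # univariate: cost-to-charge ratio
print(ols_r2(diff, [cmi, ccr]))   # multivariate: both regressors
```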
8 We use the cost-based CMI without a case-mix adjustment as our measure of the CMI level. Because differences among hospitals in the CMI are much greater than differences between cost- and charge-based CMIs for the same hospital, we would get similar results if we used the charge-based CMI as the explanatory variable in these analyses. For example, in 1985, the case-weighted interquartile range of the cost-based CMI was .846 to 1.003. The hospital-weighted interquartile range is even larger.

9 As is to be expected because of the change in case mix, the 1987 relative weights are almost all smaller than the corresponding 1985 weights. For both cost- and charge-based weights, more than 85 percent of 1987 cases went to DRGs where the 1987 relative weight was smaller than the corresponding 1985 weight.

Conclusions

The purpose of DRG weights is to measure the operating costs for cases in each DRG relative to the average operating cost for all cases. Since 1986, HCFA has been using charges to calibrate the DRG weights, which offer the advantages of timely access to charge data and computational simplicity. However, it is not possible to determine theoretically whether cost-based weights or charge-based weights are more accurate measures of the relative operating costs of DRGs. Lacking a theoretical foundation, empirical comparisons of differences in weights calculated using the two methods, and of their impacts on payment amounts and distributions, can provide information for PPS payment policy decisions. Of particular interest is information that may provide insight regarding the relative accuracy of the two methods. Findings that raise sufficient concern about bias in the charge-based weights might lend support to the use of costs rather than charges to calculate the DRG weights. Such a change should be considered in the context of other biases that might be introduced if cost-based weights were used instead of the charge-based weights.

We found that the weight methodology used affected the total amount of payment. From 1985 through 1987, the national CMI measured by charge-based weights grew 0.3 percent more than the national CMI measured by cost-based weights. Assuming annual PPS hospital payments of approximately $40 billion, this translates into an expenditure from the Federal budget of roughly $120 million. Because we cannot be sure theoretically which index is a more accurate measure of resource intensity, this study cannot aid in a judgment about the desirability of this transfer. Within a given year, the two sets of weights distributed payments somewhat differently among cases and among hospitals. In FY 1987, the use of charge-based weights rather than cost-based weights resulted in a redistribution across cases of approximately 4.5 percent of DRG weight. It resulted in a change of 1.18 percent in the CMI of the average hospital and, therefore, in its payment. Using charge-based weights rather than cost-based weights results in a decrease in the CMI for small hospitals and for rural hospitals and, therefore, in decreased payments to these hospitals. It results in an increased CMI and increased payments for teaching hospitals. We found only a small trend toward increasing divergence of the charge- and cost-based weights during the period 1985-87. The use of charge-based weights rather than cost-based weights resulted in a redistribution across cases of approximately 4.29 percent of DRG weight in 1985, compared with 4.54 percent in 1987.
The slightly increased divergence of the two sets of weights caused a slight change in the effect of the weight basis on most hospital groups. We measured the amount of this effect in two ways: using relative weights (i.e., weights with an average value of 1 in each year) and CMI-adjusted weights (i.e., weights with an average value that reflects each year's increase in the frequency of higher weighted cases). Because the CMI-adjusted weights are a closer approximation to the calculations used by HCFA, they probably represent the cumulative effect over a period of years of the use of charge-based weights rather than cost-based weights more accurately than relative weights do. On the other hand, the relative weights show what would happen in any one year in which HCFA changed the basis of the DRG-weight calculation. Using CMI-adjusted weights multiplies the charge-based weight by a larger number than it multiplies the cost-based weight. Hence, it mitigates the effect of the use of charge-based weights for hospitals that do worse under charge-based weights than under cost-based weights, and it enhances the effect for hospitals that do better under charge-based weights than under cost-based weights. For example, the use of charge-based weights rather than cost-based weights caused an increase in the relative-weight CMI for major teaching hospitals of 0.95 percent in 1985 and 1.10 percent in 1987. If charge-based weights had been used for all 3 years, the 1987 CMI would be 1.40 percent higher than if cost-based weights had been used for all 3 years. Another example is rural hospitals with fewer than 50 beds, which have lower CMIs under charge-based weights than under cost-based weights; the CMI-adjusted weights mitigated the reduction in their CMI under charge-based weights.

The differences between cost-based and charge-based weights that we measured are very similar to those of other studies from the PPS era. We demonstrated the sensitivity of statistics to modest changes in methodology and case selection. Given this sensitivity, the other studies do not contradict our finding that differences between the two weights have been subject to only small trends. The effects of the weight basis found in all PPS studies are much larger than those found using the 1981 data base. We expect that the major reason that the 1981 findings differ is that there were substantial coding problems in the 1981 data base. The argument is analogous to Lave (1985), who showed that misclassification on a data file will lead to weight estimates that are too similar to each other. Price (1989) showed that the proportion of charges devoted to ancillary services is a significant explanatory variable for the difference between charge-based weights and cost-based weights. Thus, insofar as the 1981 data base misclassified cases from DRGs with substantial use of ancillary services into DRGs with less use, it caused the cost-based weights to be more similar to the charge-based weights than on a file with more accurate coding. This analysis of the reason for differential results between 1981 and PPS has implications for what one would expect to find when the grouper undergoes substantial changes. Fiscal year 1988 was the first large change in the grouper.
Insofar as the changes to the grouper resulted in grouping cases into DRGs with more similar resource use, and insofar as the data we have used do not reflect the coding of these cases that we would expect to see when the 1988 grouper is used for payment, we would expect to see a greater divergence of cost-based weights from charge-based weights in 1988 than would be expected based on continuation of the empirical trend from 1985 to 1987. Along with previous researchers, we found that charge-based weights are less compressed than cost-based weights. In addition, we found that the charge-based weights are much more likely to benefit hospitals with high values of the CMI than to benefit hospitals with low values of the cost-to-charge ratio. We find a very small trend toward increasing decompression of the charge-based weights. Again, because of the possibility of changes in coding practices, 1988 may not be a continuation of the trend. Finally, we found that DRG weights change over time. There is roughly the same amount of difference between cost-based weights calculated on the 1985 file and similar weights calculated on the 1987 file as there is between cost-based and charge-based weights calculated on the same file. Thus, the timeliness of charge-based weights is an important consideration in choosing the appropriate weight basis.
Deep residual inception encoder-decoder network for amyloid PET harmonization

Abstract

Introduction: Multiple positron emission tomography (PET) tracers are available for amyloid imaging, posing a significant challenge to consensus interpretation and quantitative analysis. We accordingly developed and validated a deep learning model as a harmonization strategy.

Method: A Residual Inception Encoder-Decoder Neural Network was developed to harmonize images between amyloid PET image pairs made with Pittsburgh Compound-B and florbetapir tracers. The model was trained using a dataset with 92 subjects with 10-fold cross validation, and its generalizability was further examined using an independent external dataset of 46 subjects.

Results: Significantly stronger between-tracer correlations (P < .001) were observed after harmonization for both global amyloid burden indices and voxel-wise measurements in the training cohort and the external testing cohort.

Discussion: We proposed and validated a novel encoder-decoder based deep model to harmonize amyloid PET imaging data from different tracers. Further investigation is ongoing to improve the model and apply it to additional tracers.

Amyloid PET imaging has been incorporated into many large research studies, including the Alzheimer's Disease Neuroimaging Initiative, 6 the Dominantly Inherited Alzheimer's Network, 7 and others. 8,9 It has been determined that amyloid plaques can be detected at least 15 years prior to AD symptom onset, 10 and the prevalence of amyloid positivity increases with age from approximately 10% at age 50 to 44% at age 90 in cognitively normal populations. 11 Imaging measurements of brain amyloid and tau pathology help to define AD in its preclinical stage and allow the investigation of the genesis and progression of AD. 12 Many clinical trials have been designed to include amyloid and tau PET imaging for the assessment of treatment efficacy and target engagement as surrogate biomarkers. 13-16

Human amyloid imaging started more than 15 years ago with the development of Pittsburgh Compound-B (PIB), 17 and has since been widely adopted by many research groups. 18,19 Because of its short half-life (20 min), the use of PIB is limited to large research centers with access to an onsite cyclotron and experienced radiochemistry teams. A number of F18-labeled amyloid tracers were later developed to address this limitation, including florbetapir (FBP), 20 florbetaben (FBB), 21 flutemetamol, 22 and NAV4694, 23 with the first three subsequently receiving FDA approval for amyloid imaging. With multiple PET tracers designed for the same target pathology, each tracer has its own target binding affinity, tracer kinetic behavior, non-specific binding, and tissue retention; hence the imaging data that are acquired display tracer-dependent characteristics. Recent cross-sectional comparison studies demonstrated that the global amyloid burden measures derived from PIB and FBP have a shared variance ranging from approximately 70% to 90%, depending on the quantification pipelines and cohorts. 24-26 These tracers also show different levels of variability in the amyloid burden measurements. 24-26 Intertracer variability leads to inconsistent amyloid positivity thresholds and poses challenges for multicenter studies.
A mean cortical FBP standard uptake value ratio (SUVR) cutoff of 1.17 was determined to detect moderate to frequent brain amyloid burden based on pathological assessment, 27 and this can be converted to a Centiloid (CL) cutoff of 37.1 CL using published equations 24; a recent study based on PIB imaging found a threshold of 20.1 CL to be optimal 28; and an FBB-based study determined a threshold of 19 CL. 29 The CL approach 30 was proposed to define a common numerical scale, hoping to unify the global amyloid measures derived from different tracers and analysis pipelines. However, the amyloid measurements still have the same level of correlation between tracers, and the inherent signal-to-noise property also remains the same. Differences in amyloid measurements across tracers also pose problems for longitudinal studies. The tracer difference results in different capabilities for tracking longitudinal amyloid accumulation, which is especially important in clinical trials. In our recent study, we estimated that the sample size needed to detect a 20% reduction in the rate of amyloid accumulation was 305 per arm when PIB is used as the amyloid tracer, while a sample size of 2156 is needed for FBP. 25 Furthermore, strategies enabling the detection of focused changes and investigating the spatial patterns of pathological changes, which require regional and voxel-level details, are currently lacking.

One viable solution may be the emerging artificial intelligence technology: deep learning (DL). Deep learning has been successfully implemented in computer vision domains for decades. Only recently has it become a technique applied broadly in medical imaging analysis; one such model 34 is an end-to-end framework for prediction and biomarker discovery. Most recently, one DL approach has been gaining traction to generate synthetic images of a missing modality based on input images of a related but different modality. 33,35 This approach was initially developed to address the missing data problem 33 and later adopted in estimating the attenuation map from magnetic resonance (MR) imaging data to allow accurate attenuation correction for PET/MR hybrid scanners. 35 We previously developed a new model termed Residual Inception Encoder-Decoder Neural Network (RIED-Net) 36 to render enhanced images from Contrast-Enhanced Digital Mammography and support breast cancer diagnosis. The success of RIED-Net motivated this research to explore its applicability to harmonizing PET imaging from different tracers, specifically, generating synthetic PIB images from FBP data, and to evaluate its performance using two independent datasets, one for training, validation, and testing, and one for external validation.

RESEARCH-IN-CONTEXT

1. Systematic review: To quantify beta-amyloid deposition in the brain, the use of multiple amyloid tracers with varied characteristics poses a major challenge to interpretation, to the ability to combine results from cross-center studies, and to efforts to define a common positivity threshold. We propose a deep learning (DL) model as a harmonization strategy to generate imputed amyloid PET images of one amyloid tracer from the images of another.

Participants

From the Open Access Series of Imaging Studies-3 dataset, 37 92 participants with paired PIB and FBP scans were included for training, validation, and testing; an independent external dataset of 46 participants (GAAIN) was used for validation of generalizability.

Imaging

For the OASIS dataset, dynamic PIB PET was acquired on a Siemens Biograph 40 PET/CT or a Siemens/CTI EXACT HR+ scanner for 60 minutes after tracer administration and reconstructed using standard iterative methods with attenuation and scatter correction. Dynamic FBP PET was acquired on a Siemens Biograph mMR scanner for 70 minutes after FBP administration and reconstructed using an OSEM algorithm, with attenuation/scatter correction using a separately acquired low-dose CT scan. For each participant, a T1-weighted MR scan was also acquired using a 3T MR scanner. All imaging was conducted at Washington University in St. Louis, and individual scans for each participant were completed within 3 months. The imaging acquisition information for the GAAIN dataset has been previously described. 24 Briefly, PIB PET was acquired between 50 and 70 minutes post-injection, and FBP PET was acquired 50 to 60 minutes post-injection. The imaging pair was obtained on average 18 days apart, and a 3T T1 MR image was obtained for each subject within 6 months of PET acquisition. One participant in the GAAIN cohort was excluded from further analysis due to poor quality of the T1 MR scan. The T1-weighted MR data were analyzed using FreeSurfer (Martinos Center for Biomedical Imaging, Charlestown, Massachusetts, USA) to define anatomical regions. Amyloid PET imaging quantification was then performed using our standard protocols, which included scanner harmonization, motion correction, target registration, and regional value extraction 38,39 using a PET unified pipeline. The output included an SUVR image using cerebellar cortex as the reference region and a mean cortical SUVR (MCSUVR) as the global index of brain amyloid burden. 38
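The SUVR image and MCSUVR output by this pipeline are simple ratio quantities. A minimal Python sketch follows; the array shapes and region masks are placeholders rather than actual FreeSurfer output.

```python
import numpy as np

def suvr_image(pet, ref_mask):
    """Scale a PET volume by mean uptake in the reference region."""
    return pet / pet[ref_mask].mean()

def mean_cortical_suvr(suvr, cortical_masks):
    """Average SUVR over a set of cortical regions (MCSUVR)."""
    return np.mean([suvr[m].mean() for m in cortical_masks])

# Hypothetical 3D volume and boolean masks standing in for
# FreeSurfer-derived cerebellar and cortical labels.
pet = np.random.rand(91, 109, 91) + 0.5
cereb = np.zeros(pet.shape, bool); cereb[40:50, 20:30, 10:20] = True
ctx_a = np.zeros(pet.shape, bool); ctx_a[30:40, 60:70, 50:60] = True
ctx_b = np.zeros(pet.shape, bool); ctx_b[55:65, 60:70, 50:60] = True

suvr = suvr_image(pet, cereb)
print(mean_cortical_suvr(suvr, [ctx_a, ctx_b]))
```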
Deep learning model for PET harmonization

RIED-Net was designed to estimate the voxel-wise non-linear mapping between input (FBP) and output (PIB) images. In this mapping, letting the FBP image be I ∈ R^(m×n) and the PIB image be O ∈ R^(m×n), the relationship between the two can be defined as O = S(I), where S : R^(m×n) → R^(m×n) denotes the nonlinear mapping between the FBP-PIB pair. The image synthesis problem is to estimate the function S from the training pairs. The overall architecture of RIED-Net is shown in Figure 1. It consists of nine residual blocks, where the encoding path consists of five blocks and the decoding path of the remaining four blocks, using an architecture similar to U-Net 41 and with the addition of a residual inception shortcut path that has been shown to improve training efficiency. 42 Each block has a conventional convolution/deconvolution path with two 3 × 3 convolutional layers and, in parallel, a 1 × 1 convolution path; the output matrices from these two parallel paths are summed together and down/up sampled by a factor of 2 to serve as the input to the next block. Additional technical details are provided in the Supplementary Material.

We used 10-fold cross validation in the training. Specifically, for the OASIS dataset, we shuffled the dataset randomly and created 10 different groups of the dataset; for an even split, we decided to use 90 out of 92 total samples (excluding the last two participants according to alphabetical order) and created 10 different folds of size 81:9 (total 90), where 81 were used for training and validation, and nine were used for testing. These folds were generated such that there is no overlap among the training and testing samples and the test dataset in each fold is unique.
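As a concrete illustration of one encoding block described above, here is a minimal PyTorch sketch; the channel counts are illustrative assumptions, and the published model stacks five such encoding blocks and four decoding blocks with skip connections.

```python
import torch
import torch.nn as nn

class ResidualInceptionBlock(nn.Module):
    """One encoding block: two 3x3 convolutions in the main path, a
    parallel 1x1 convolution as the inception shortcut, summed, then
    downsampled by a factor of 2 via a strided 3x3 convolution."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.main = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(),
        )
        self.shortcut = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 1), nn.ReLU())
        self.down = nn.Sequential(
            nn.Conv2d(out_ch, out_ch, 3, stride=2, padding=1), nn.ReLU())

    def forward(self, x):
        summed = self.main(x) + self.shortcut(x)
        # The full-resolution map is kept for the decoder skip connection.
        return self.down(summed), summed

block = ResidualInceptionBlock(1, 32)
fbp_slice = torch.randn(1, 1, 256, 256)   # one 2D FBP slice
down, skip = block(fbp_slice)
print(down.shape, skip.shape)   # (1, 32, 128, 128) and (1, 32, 256, 256)
```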
Figure 1. The RIED-Net model in this work adopted a U-Net-like architecture with the addition of a residual inception short-cut path, which has been shown to improve the efficiency of model training. The overall model has five encoding blocks and four decoding blocks. Each blue rectangle represents a data matrix generated from the convolution operations (arrows) within the encoding blocks, and the number below each rectangle indicates the number of channels within each matrix. The leftmost thin blue rectangle represents the input data, a 2D slice from the florbetapir image (256 × 256 × 1 matrix, i.e., one channel). Similarly, each green rectangle represents a data matrix generated from the convolution/deconvolution operations within the decoding blocks, and again the number below indicates the number of channels within the corresponding matrix. Notice that, as the input of each decoding block, a blue rectangle from the matching encoding block is appended to a green rectangle from the output of the previous block. Each brown arrow represents a multi-channel convolutional operation with a 3 × 3 kernel and a rectified linear unit (ReLU) as the activation function (Conv 3×3, ReLU). Each orange arrow denotes a 1×1 convolutional operation with a ReLU as the activation function (Conv 1×1). The single purple arrow (representing the last step in this network) denotes a 1×1 convolutional operation that generates the output, a synthetic 2D Pittsburgh Compound-B slice. Each black dotted arrow denotes a copying operation. Each red arrow denotes a 3×3 convolutional operation (stride = 2, with a ReLU as the activation function), and each green arrow denotes a 3×3 deconvolutional operation (stride = 2, with a ReLU as the activation function).

Statistical analysis

We assessed the performance of RIED-Net in harmonizing the PIB and FBP measurements.

Discussion

The RIED-Net model is readily applicable to new FBP scans, provided that the FBP scans are fully processed following the same procedure described in our Methods section. Therefore, our proposed DL technique is a promising approach for the harmonization of PET imaging data obtained from different tracers targeting the same underlying pathophysiology. Compared to DL models from the literature, RIED-Net has two major advantages. First, existing methods use patch-based approaches to alleviate computational burden, but this sacrifices synthesis performance at the voxel level. RIED-Net was designed focusing on voxel mapping, with its performance proven to be satisfactory using two separate datasets for validation and testing in this study. The second advantage comes from the residual inception block, which makes RIED-Net computationally affordable; thus it has the potential to perform the PET harmonization task in 3D, which is one ongoing effort. Recent studies 45-47 have also proposed to use Generative Adversarial Networks (GANs) for image-to-image translation and generating highly realistic images. GAN models estimate complex non-linear relationships by learning an estimation of the joint probability distribution of the paired images at the whole-image scale, but we contend the performance of GAN models on voxel-to-voxel level translation may be questionable. In our current implementation, a 2D RIED-Net model was adopted to work with the limited tracer comparison data that are currently available to train a DL model, which typically requires thousands of images.
Operating in 2D mode, a moderate-sized dataset like the OASIS cohort we used in this study, with 92 subjects, provided more than 20 thousand slices. In this work, SUVR images were first transformed into template space before being fed to the RIED-Net model, and only slices within a common field of view were used, so that the model was less affected by the variability of patient orientation and field-of-view coverage. When native-space images were used to train the RIED-Net model, similar performance was achieved within the cross-validation process, but the model failed to generalize to the independent testing data. This suggests spatial normalization is a necessary preprocessing step to obtain favorable results, at least with the moderate amount of data available. Further investigation is warranted to determine whether rigid transformation or a full nonlinear spatial normalization procedure would further improve the model. In summary, we demonstrated for the first time that a DL approach can be used to harmonize amyloid PET imaging data from two different tracers to provide highly interchangeable amyloid measurements. This approach may also become invaluable for addressing similar problems, such as the harmonization of tau PET imaging data from different tracers.

ACKNOWLEDGMENTS

The research is supported in part by R01AG031581, R01AG069453,
Epidemiological Characteristics of Risk Factors of Pre-Urolithiasis and Urolithiasis in a Farmers' Population

A farmer population (n = 2551) was isolated by absolute selection in the climatic conditions of the Fergana Valley. Of these, 2478 (men, 1270; women, 1208) were fully examined, for a participation rate of 96.6 percent. The prevalence of common risk factors in the farmer population, assessed using epidemiological, survey, biochemical, and instrumental methods, is characterized by high rates with gender- and age-related features, and risk factor prevalence changes sharply and increases with age. Unfavorable epidemiological conditions predisposing to pre-urolithiasis and urolithiasis have developed among farmers; hence their correction leads to success in primary and secondary prevention.

INTRODUCTION

Pre-urolithiasis (PUL), urolithiasis (UL), and urinary stone diseases in general are recorded as systemic non-communicable chronic diseases of metabolism with an increasing trend in all countries of the world [1,2,3,4]. Researchers are conducting scientific studies on the prevention of urolithiasis. Improving the regional metaphylaxis system and disease prevention is an urgent issue [5,6,7,3,8]. In Uzbekistan, risk factors for non-communicable diseases (NCDs) are highly prevalent (89.0%) and pose a significant risk to public health (WHO, 2014). The main threat of these risk factors is that they lead to metabolic shifts and remain the direct causative agents of NCDs, including PUL and UL or urinary stone diseases [9; WHO, 2020]. Addressing risk factors will be a priority in the formation of a healthy lifestyle; according to experts, the share of a healthy lifestyle in maintaining health is not less than 55.0% [10,11]. Based on this, the epidemiology of risk factors for PUL and UL in the population of farmers operating in the Fergana Valley of Uzbekistan was studied and evaluated for the first time.

MATERIALS AND METHODS

The object of the study was the unorganised population of the Andijan region of the Fergana Valley, engaged in farming in the climatic conditions of Pakhtaabad. In its organization and implementation, the criteria developed and recommended in the international scientific community were used [CINDI, 1991; WHO, 2018, 2020]. Residents of 11 areas of the study area engaged in farming, 2551 people in the ≤17-year-old, 18-70-year-old, and elderly age groups, were selected using the absolute selection method. The population is mainly engaged in agricultural work on farms in vegetable growing, horticulture, viticulture, and rice. Of those included in the one-time epidemiological survey list, 2478 (males, 1270 (51.2 percent); females, 1208 (48.8 percent)) participated in the survey, and population participation was 96.6 percent (73 people did not participate in the study for various reasons, despite up to three repeated invitations). The examination was carried out at a screening centre using epidemiological, survey, biochemical, and instrumental methods. A questionnaire was used to determine the risk factors for major chronic non-communicable diseases (U.K. Kayumov, 2020). For epidemiological diagnosis, ECG, EchoCG, ultrasound (US), ultrasound scanning (UTS), and anthropometric measurements were used. Biochemical indicators (cholesterol, triglycerides, uric acid, micro- and macronutrients, and protein metabolism indicators) were determined using the capabilities of clinical laboratories of regional medical institutions and evaluated according to international criteria.
Urinary tract radiography, urography and renal tomography were performed under special indications. The risk factors studied include alimentary factors, hypodynamia, chemotherapy, urostasis, hereditary predisposition, dysmetabolic disorders (hypercalcemia/-uria, hyperphosphatemia/-uria, hypomagnesemia, hyperoxaluria, hypercylindruria, socially significant diseases) and demographic factors. The recommendations and criteria of the World Health Organization were applied.

RESULTS AND DISCUSSION

Table 1 presents an analysis of the results of the study of the epidemiological characteristics of the malnutrition factor (MNF) in the farming population. The analysis of the table shows that the alimentary factor, measured as the frequency of MNF, occurs at high levels in the farming population and their family members, in able-bodied men and women alike.

Diagnostic criteria of MNF (a small worked sketch applying these criteria appears at the end of this section):
• insufficient consumption of fruits and vegetables (less than 400 g, or 4-6 servings, per day);
• intake of more than 5 g of table salt per day (adding salt to cooked food; habitual frequent consumption of salty, canned and sausage products);
• excessive consumption of food energy, fat and carbohydrates (body mass index, BMI, >25 kg/m²);
• nutritional imbalances.

In the 18- to 70-year-old farming population and their family members, MNF is recorded with a prevalence of 31.6 percent (men – 54.9 percent, women – 45.0 percent; P>0.005). Differences between men and women in different age groups are shown in Table 1.

According to the survey aimed at detecting hypodynamia (Table 2), this risk factor is observed in 53.0% of the farming population over 18 years of age: 39.0% at 18-30 years, 56.0% at 31-49 years (P<0.05), 4.4% at 50-69 years (a decrease of almost 10-fold, P<0.01), and 0.0% at ≥70 years. It can also be concluded that almost 68.0% of the population engaged in farming is not protected from iatrogenic risk, that is, the risk of taking inappropriate drugs.

One of the main directions of primary prevention of urinary stone disease is to study alcohol consumption and to limit it to 30 ml of ethanol per day for men and 15 ml per day for women; total weekly alcohol consumption should not exceed 140 g in men. Alcohol consumption differs at different ages. Its highest frequency is detected at 31-49 years (59.6%, P<0.01); very low consumption is observed at ≥70 years (0.7%; P<0.001), and relatively low consumption at 18-30 years and 50-69 years (17.7% and 22.0%, respectively; P>0.05). Depending on the age of male and female farmers, the incidence of alcohol consumption is characterized as follows: 18-30 years – 68.

Further analysis assessed the prevalence of hereditary predisposition to urinary stone disease as a risk factor (Table 5). As shown in Table 5, hereditary predisposition in the farming population has a prevalence of 64.1%: 58.2% in male farmers and 41.8% in female farmers (P<0.005). The age-related difference in the frequency of this risk factor reaches 50.8% (P<0.001). By sex, the prevalence of hereditary predisposition in men and women is 60.6% and 39.4% (P<0.005) at 18-30 years, 54.1% and 45.9% (P>0.05) at 31-49 years, 67.0% and 32.9% (P<0.005) at 50-69 years, and 50.0% and 50.0% (P>0.05) at ≥70 years. This information will be of great help in planning preventive measures.
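To make the MNF screening criteria listed above concrete, the sketch below applies them to a single respondent. This is purely illustrative: the function name, the field units, and the decision to flag a respondent when any single criterion is met are assumptions on my part, and the unquantified "nutritional imbalances" criterion is omitted because the paper gives no threshold for it.

```python
# Hypothetical sketch of the MNF criteria above; the survey's actual coding
# scheme is not published here, so names and thresholds follow the text only.

def has_malnutrition_factor(fruit_veg_g_per_day: float,
                            salt_g_per_day: float,
                            weight_kg: float,
                            height_m: float) -> bool:
    """Flag a respondent as MNF-positive if any listed criterion is met."""
    bmi = weight_kg / height_m ** 2           # body mass index, kg/m^2
    return (fruit_veg_g_per_day < 400         # insufficient fruit/vegetable intake
            or salt_g_per_day > 5             # excess table salt
            or bmi > 25)                      # excess food energy (overweight)

# Example: 300 g fruit/veg, 7 g salt, BMI = 80 / 1.70**2 ≈ 27.7  ->  True
print(has_malnutrition_factor(300, 7, 80, 1.70))
```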
Epidemiological descriptions of dysmetabolic disorders were also studied and evaluated, since these are recognized as risk factors involved in the origin of urinary stone diseases [12,13,14,15,16].

Table 6 shows the results of the analyses devoted to estimating the prevalence of hypercalcemia in the farming population. Hypercalcemia is observed with a prevalence of 62.3% in the general population of farmers (Table 6): 58.4% in male farmers and 41.6% in female farmers (P<0.005). Hypercalcemia occurs with a frequency of 1.6% in ≥70-year-olds, i.e., 24.5% less frequently than in 18-30-year-olds (P<0.001); in 31-49-year-olds its frequency is 50.2% (P<0.05), and in 50-69-year-olds it is established with a prevalence of 22.1% (P>0.05). Hypercalcemia is noted in all age groups with a significantly higher prevalence in men than in women:
• 18-30 years – 62.3% and 37.7%, a difference of 24.6% (P<0.005);
• 31-49 years – 54.0% and 46.0%, a difference of 18.0% (P>0.05);
• 50-69 years – 63.6% and 36.4%, a difference of 17.2% (P<0.005);
• ≥70 years – 62.5% and 37.5%, or 26.1% (P<0.005).

Similar epidemiological patterns are found in the studied population with respect to hyperphosphatemia (Table 7). Hyperphosphatemia is detected at a frequency of 50.7% in the farming population: 61.7% in male farmers and 38.3% in female farmers (P<0.005). Hyperphosphatemia is not detected in women or men over 70 years of age; it occurs in 39.5% of the general population aged 18-30 years, in 56.1% at 31-49 years (P<0.05), and with a low prevalence of up to 4.4% at 50-69 years (P<0.001).

The epidemiological description of hyperuricemia was also studied and evaluated as a risk factor; the data obtained are summarized in Table 8. The analysis shows that hyperuricemia is recorded with a prevalence of 52.8% in the general 18- to 70-year-old farming population: 40.8% at 18-30 years, 55.2% at 31-49 years (P<0.05), and 4.0% at 50-69 years (P<0.005). No cases were registered over 70 years of age (0.0%). Hyperuricemia occurs with a higher prevalence in men than in women at 18-30 years (64.5% versus 35.5%; P<0.005), 31-49 years (58.8% versus 41.2%; P<0.005) and 50-69 years (70.6 versus 29.4 percent; P<0.005). Depending on age, hyperuricemia in women is characterized by an increase of 11.8% (P<0.05), and by the same percentage in men (11.8%; P<0.05). Compared with the literature, this figure is significantly higher, and we expect that future prospective studies will clarify this conclusion. Another finding was that hyperuricemia was recorded in male farmers with a frequency 1.6 times higher (61.6 percent) than in women (38.4 percent) (P<0.005).

Table 9 shows the prevalence of hypomagnesemia in the farming population.

The results of the study showed that hypercylindruria had a high prevalence among the farming population (Table 10). In the total farmer population (18-70 years), hypercylindruria is recorded with a prevalence of 66.9 percent: 26.7% at 18-30 years, 50.1% at 31-49 years (P<0.05), and 21.9% at 50-69 years (P>0.05), with a sharp decrease to 1.3% at ≥70 years (P<0.001) compared with the first age group.

Common epidemiological risk factors play a "decisive role" in the development of non-communicable diseases.
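The male-female prevalence differences above are reported with significance levels, but the paper does not state which statistical test produced its P values. As one plausible reconstruction, a standard two-proportion z-test can be run on the underlying counts; both the choice of test and the counts below are assumptions, invented purely for illustration.

```python
# Hypothetical two-proportion z-test for a male-vs-female prevalence difference.
# The paper's actual test and raw counts are not given; everything here is
# illustrative only.
from math import sqrt
from statistics import NormalDist

def two_proportion_z(x1: int, n1: int, x2: int, n2: int):
    """Return (z, two-sided p) for H0: the two proportions are equal."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                       # pooled proportion
    se = sqrt(p * (1 - p) * (1 / n1 + 1 / n2))      # pooled standard error
    z = (p1 - p2) / se
    return z, 2 * (1 - NormalDist().cdf(abs(z)))

# e.g. a factor flagged in 740 of 1,270 men vs 464 of 1,208 women (invented counts)
z, p = two_proportion_z(740, 1270, 464, 1208)
print(f"z = {z:.2f}, p = {p:.4g}")
```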
Therefore, the prevalence of overweight, one of the leading factors among them, was studied and evaluated in the farming population; these analyses are described in Table 11. The analysis of the table shows that overweight, an alimentary-related risk factor, occurs with a prevalence of 58.3% in the general examined population of farmers aged 18-70 years (63.5% in males and 36.45% in females, P>0.005). The highest prevalence is at 31-49 years (54.5%, P<0.05), with lower prevalences at 50-69 years (23.5%, P<0.05) and 18-30 years (15.9%, P<0.05), and a significantly lower prevalence (1.1%; P<0.001) at ≥70 years. As a result of these and the other common risk factors mentioned above, unfavourable epidemiological conditions have developed that predispose to urolithiasis, as to other urinary stone diseases; their correction therefore leads to success in primary and secondary prevention.
Features of application of fundamental knowledge in innovative space: ontological aspect

An analysis of the innovative space, with its complex structure and configuration, is conducted, showing that its space-time frame is uncertain owing to its high dynamics. Results are obtained from the analysis of the virtuum as a kind of multidimensional reality that forms a new image of society and, at the same time, generates and combines new knowledge. Knowledge and innovation are shown to be the fundamental dominants of the current model of society; as a result, this model combines the characteristics of an informational, cybercultural, technogenic, virtual, and innovative society. On this basis, the main purpose of the research was to establish the correlation between innovative creations and the new epistemes that form the foundation of the innovative space. The ontological approach, which became the theoretical basis of this research, shows that the innovative space is understood as one multidimensional sphere of social existence as such, including all actors of innovation. The results of the research confirm that fundamental knowledge continually renews the epistemological resource; at the same time, the mechanisms and methodology of research are constantly changing and technical and technological tools are being modernized, and all of these factors influence the innovative space. In addition, the innovative space, as an artificial environment, corrects social and natural phenomena.

Introduction

The innovative space is a relatively new sphere of social existence which nevertheless binds together all the other spheres that exist within it: social, economic, scientific, educational, cultural, managerial, industrial, intellectual, virtual, and spiritual, as well as the spheres of business, communications, and others. It should be noted that the global world is becoming more technologically connected and interdependent, and its strategies, tactics, rules and requirements are constantly changing. The field of innovation is therefore presented as a systemic form of organization of all these spheres, treated as objects of innovation. It creates not only an individual model of socio-economic relations but also defines the commercial interests of global players on economic, political, cultural and other platforms. In the broad ontological sense of the word, the key goal of innovation is the innovative transformation of current, changing reality, the complete modernization of the holistic architecture of everyday life; in the narrow, utilitarian-consumer sense, innovation is aimed at the production and sale of new ideas. The innovative space has a complex structure and configuration, but its space-time frame is uncertain because of its high dynamics. At the same time, it is presented as a kind of virtuum: a multidimensional, multi-level formation that forms a new image of society and, at the same time, generates and combines new knowledge. In today's society a "novelty effect" operates: there is a search for qualitatively new ideas and solutions, and a conflict between old and new ideologies, generations, and styles of thought. Knowledge and innovation become the fundamental dominants of life in the model of society being formed at the present moment. This model combines the characteristics of a new type of society: informational, cybercultural, technogenic, virtual, and innovative.
On this basis, it is necessary to correlate innovative creations with the new epistemes that form the foundation of the innovative space. The problem of the transformation of society into a virtuum of cultures, logics, mentalities, and styles of thought, assessed through the role of fundamental knowledge in combining ethical factors, is also actualized.

Materials and methods

In the context of the ontological approach, the innovative space can be understood as one of the many dimensions of existence, or as social existence as such, including all subjects of innovation. Its content is revealed in such categories as "something new" and "nothing new", "possible new" and "impossible new", "certain perspective" and "uncertain perspective", "quantity of knowledge" and "measure of application of the new", "quality of knowledge", "application of knowledge" and "knowledge as truth", as well as in the concepts of "space of innovation", "time for innovation", "movement of innovation processes", "knowledge as a form of life", "the becoming of a new idea", "the origin of a new idea", and "the transition from the old to the new". Ontological analysis of the phenomenon of fundamental knowledge, which influences innovative processes, is aimed at revealing the objective status of the ideal objects and theoretical constructs it creates. The ontological approach facilitates the modelling of a conceptual scheme of society as a virtuum, whose structure contains the set of planes forming this virtuum (see Figure 1, whose caption summarizes the result: innovation is a transformative factor that transforms an entire society). The ontological approach is also applied as a fundamental philosophical methodological principle that encompasses "knowledge" in the broadest sense of the word, as an object "as such". In this context, innovation serves as a projection of innovative reality, and the innovative space appears as a system-structural integrity. Other components of the social virtuum are objectified and interpreted within the new ontological architecture of the innovative space. The diversity of forms of the social virtuum is ontologically due to the variety of innovations that continuously change the segments of the social matrix. The ontological approach is therefore designed to grasp the essence of innovative processes, to study the emergence of innovations and technologies, to understand the structure of the innovation space, and to analyse social existence as a complex system.

Results

The word "innovation" (from the Latin "novatio", "renewal" or "change"; "in", "in the direction of"; hence "innovatio", "in the direction of change") is not new. It was first mentioned in scientific research in the nineteenth century. At the beginning of the twentieth century, J. Schumpeter defined innovation as a new combination of production factors developed by the spirit of entrepreneurship [1]. As a result of the analysis of "innovative combinations", changes are implemented in the development of local economic systems. A high-quality economic system determines the level of development of the global economy as a whole and changes the image of the economy as a sphere of social existence. Today, the "market" penetrates the sphere of knowledge, fundamentally changing its true spiritual purpose. Knowledge is no longer perceived as a sacrament in the philosophical meaning of the word, but as capital. The theory of "market defects" shows the contradictory nature of the "knowledge market" [2].
That is, knowledge is identified not with meanings or creative ideas but with the possibilities of their application and transformation for gain. There is, therefore, a contradiction in understanding the essence of innovation, outlined in the differing approaches of philosophers and marketers. Innovation as such seriously improves the efficiency of the current system. For example, innovation today is associated with the production of high-quality new products, services and offers, the creation of new markets and the introduction to these markets of new types of products with modernized consumer properties, and the maximal increase in the efficiency of production systems and production processes. The innovative space consolidates innovation and intensifies the intellectual work of the scientific community, establishes dialogue between various scientific organizations and schools, enriches the creative process, helps to expand the horizons of consciousness and imagination, and promotes the realization of discoveries and inventions. Innovation is the result of the application of fundamental research, which historically has been aimed at discovering the fundamental laws of nature. Such laws significantly expand fundamental knowledge about the world and humanity and contribute to the emergence of new theories explaining not only natural phenomena but also the essence of social and spiritual processes. Fundamental research is a generator of deep scientific ideas which, as they materialize and take form, later affect everyday life and change approaches to it. Throughout the history of science, fundamental research has renewed the epistemological resource. At the same time, the mechanisms and methodology of research constantly change and technical and technological tools are modernized. Recognition of modern science as a complex, expanding process has led to a striking variety in the efforts to understand that process [3]. The modern realities of living in global markets are transforming science itself, which in most cases now solves applied problems and creates the basis for applied research. Increasingly, the results of scientific research are considered commercially, and this leads to the loss of the moral and humanitarian component of science as such. On the one hand, discoveries in applied medicine, genetics, nanotechnology and artificial intelligence are necessary and in demand; on the other hand, such categories as "spirituality", "morality" and "the good", in their philosophical, transcendental sense, are practically levelled. Philosophy, as a universal meaning-forming system and the highest scientific-theoretical form of cognition of the world, going beyond fundamental and applied knowledge, is able to explain metaphysically the deep nature of thinking, creativity, scientific search, discovery, and invention. It was philosophy that defined the importance of the role of fundamental knowledge in the life of humankind. There is, of course, a contradiction between empirical and metaphysical approaches. In one form or another, the problems of metaphysical skepticism and inductive skepticism have been central to theories of knowledge and philosophies of science throughout the history of philosophy, and the same remains true of the philosophy of science today. The two problems are alike in calling into question our ability to have scientific knowledge by considering circumstances in which alternative hypotheses are consistent with the evidence we might have. The problems differ in detail.
Inductive skepticism trades on the fact that we are finite beings who only have a finite amount of data available at any time. Metaphysical skepticism trades on the assumption of a separation between the kinds of things we have evidence about and the kinds of things we wish to know about; even if we had infinite knowledge of things of the first kind, it would not suffice to determine the truth about things of the second kind. These problems lie at the bottom of many disputes over the nature of scientific claims, the structure of scientific theories, and the appropriateness of belief in the claims of modern science [4]. However, the reality we study is becoming less obvious; it can be constantly changed and updated. The concepts of "image", "idea", "discovery" and "innovation" are considered by philosophical science in a certain sequence; they point to how modern science was formed. Humans continuously generate new ideas, but not all ideas can rise to the level of fundamental knowledge. Only a researcher who pursues lifelong education is able to gain new knowledge [5]. Thus, fundamental research expands the horizons of consciousness and projects knowledge into a complex world. Fundamental research is aimed at strengthening the intellectual potential of modern society and at developing new mechanisms that contribute to the modernization of education at its primary, secondary and higher levels. Innovation in education provides the opportunity to create new professions and new jobs through the learning of new knowledge and the introduction of new technologies, establishing continuity of the educational process [6]. Applied research is the result of the modernization of science itself and the development of its potential; it is directly involved in the development of innovation and in providing innovation as the basis of the socio-economic development of modern civilization. Knowledge derived from applied research is focused on direct use in narrow areas of professional activity (cybernetics, genetics, space, economics, and social management). New knowledge becomes the main capital in modern society; it is not openly available to the general consumer but, on the contrary, is encoded in various ways. It is eventually transformed into new technologies and thus generates competition between countries, corporations and enterprises. Technology is born out of a symbiosis of fundamental and applied knowledge; it is the result of a high degree of innovation in applied knowledge. Fundamental research is today almost inseparable from applied research, although their socio-cultural codes, as well as the ways in which knowledge is organized and transmitted, are different. Fundamental research universalizes knowledge, while applied research segments it (see Figure 2). The formation of fundamental knowledge is a consequence of the emergence of philosophical methods of understanding reality. Fundamental knowledge is a fairly broad concept; it draws a line between knowledge and ignorance, the knowable and the unknowable, the possible and the impossible, the obvious and the probable. Fundamental knowledge is often presented as "exact" knowledge. Mathematics, astronomy, biology, physics, chemistry and the other "exact" sciences study the various properties of nature and its laws. For example, nanotechnology includes such fundamental disciplines as quantum physics, molecular biology, computer science, and chemistry; that is, a convergence of knowledge takes place [7].
Consequently, the process of the development of science is represented in all its dialectical complexity and diversity. The scientific pictures of the world modelled in the process of generalizing the results of fundamental research can thus change more often in the innovative space than in historical space. Moreover, today there is not one dominant scientific picture of the world but many such pictures. Summarizing the results of this study yields the features shown in Figures 3-7: features of the application of fundamental knowledge and their influence on the innovative space as an ontological phenomenon.

Discussion

The innovative process is the introduction of new intelligent solutions in various areas of life (architecture, fashion, design, transport, or the military industry), as well as the development and distribution of new technologies. Moreover, the development of technology requires special training and the learning of new knowledge; it also requires a special organization of knowledge and training material. L. Drotianko and S. Yahodzinskyi showed that in a globalized society everybody faces the need to process and analyse large volumes of data in the daily routine. Undoubtedly, the possibility of free and quick access to diverse information, the implementation of innovative technologies and machines, and the development of print and electronic libraries not only stimulate social processes and make them more dynamic, but also give a wide audience the chance to express opinions, participate in discussing topical issues, and influence decisions [8]. Technological innovations involve new or more efficient production of existing products and machinery and new or improved technical processes. Thus, the innovation process requires investment, new developments, new solutions, and qualitative improvement of the modernized area. Fundamental knowledge can influence the innovative process in two ways: on the one hand, through radical changes, and on the other, through incremental ones. Periods of innovation stagnation can be shortened because the world has become global and the demand for improved quality of life is colossal. A. Gudmanian, L. Drotianko, S. Sydorenko, O. Zhuravliova and S. Yahodzinskyi underline that globalization processes currently penetrate all spheres of social life worldwide, above all through the steady and inevitable advance of information and communication technology [9]. The innovative process concentrates the creative power of new knowledge. The result of innovation is high economic efficiency in the production or consumption of a product and the intensification of new developments and inventions in the technical industries. In this case, a product with qualitatively new properties is created. The innovative approach is effectively transferred from the technical industry to the fields of intelligent design and intelligent engineering. Consequently, the concept of "innovation" has a broad context, and the innovative space covers the practical, spiritual, intellectual and creative planes of social life. O. Sidorkina, O. Skyba, N. Sukhova and T. Poda confirm that modern technology and the consumer attitude towards nature, which at one time aimed at human happiness, have become a threat to the existence of humanity as such [10]. The innovative space creates the most acceptable conditions for researchers and their inventions. Thus, T. Pinch speaks of the innovative production of knowledge [11].
In this case, the invention may be a new concept, device, or other artefact that facilitates activity; it does not count as an innovation until its organizer derives a benefit from it and it produces a positive effect. P. Drucker rightly argues that we can never know in advance which idea will survive [12]. Innovation strains the global management system, implements new or significantly improved marketing methods, uses new approaches in the presentation of services and their promotion to markets, and forms new marketing and management strategies [13]. Scientific creativity and a penchant for collaboration are variables, and they vary over time [14].

Conclusion

The "human-knowledge" interaction is the model of a new type of society called "the knowledge society". In this society, the mind is the key instrument of the human being in the process of learning about and developing the world. Knowledge is a product of intelligent and creative human activity, the highest value a human possesses as a historical being, and the quintessence of human life. Social space is formed as a result of the expansion and use of knowledge. "Social space" here means what is shared by and unites all individuals; it is also a collection of certain goals, rights and responsibilities that affect the nature and quality of social relations. Social reality integrates individuals into the community in the process of their common life. Social space as a virtuum models the specific conditions in which individuals construct their relationships, search for their place in society, and produce vital meanings. The social virtuum also produces numerous social phenomena and processes and regulates them in space and time; it is manifested in the correlation of human actions and deeds. In its content, the social virtuum is a reflection of the organization and life of society as a subject of the historical process. It is a comprehensive, fluctuating formation that accumulates knowledge, experience, traditions and cultural codes and also determines the level of development of society and its key elements. The social virtuum expresses the way fundamental knowledge and the intellectual capacity of society are used to develop universal criteria for the optimal future development of the entire social organism. The human being, as an intellectual person, is on the one hand a social being: possessing mind and consciousness, a subject of history, culture and civilization, who learns about and constructs the surrounding world, changes its architectonics, innovates it, deepens culture and models an individual history. The essence of the human being, human origin and purpose, and humanity's place in nature and society are the main problems of philosophy, science, religion and art. On the other hand, the human being is spiritual, mysterious and creative, a substance of cosmic scale that aims to obtain absolute knowledge. Modern fundamental knowledge is the result of collective cognitive activity. Such knowledge is represented not only as knowledge based on logic or facts, providing for empirical or practical verification of valuable data, but also as the totality of human irrational intentions. This means that any variant of the reflection of reality in human thinking is valuable. Modern science and the methodology of science are aimed at gaining fundamental knowledge about the structure of substance and new laws of nature, as well as at understanding the inner mental dimension of spiritual reality.
To a great extent, knowledge is the spiritual component of the internal microcosm. The knowledge of an individual (or group of individuals) is information. Information is a complex phenomenon, an abstraction containing an infinite number of variables; possessing information, a person can solve any practical problem. Knowledge is ambivalent toward faith. In philosophy, in the broadest sense of the word, knowledge is an image of reality in the mind of the subject, recorded in the form of concepts and representations (see Figure 8, whose caption summarizes two results: (1) the philosophical sciences do not create a commercial market as a service sector but form the worldview of the human being and society, creating the prerequisites for future innovations in all areas of scientific knowledge; and (2) innovation is considered as an ideal image, a category of philosophy in all its senses). Typically, fundamental knowledge is objective and fixed, expressed in a language or some other sign system. However, such knowledge can also be recorded in sensual images obtained by direct perception of reality. Cognition is not limited to the field of science, and fundamental knowledge in one form or another also exists beyond science. Science, philosophy, mythology, politics and religion, as forms of public consciousness, correspond to specific forms of knowledge. These, too, are fundamental: forms of knowledge with a conceptual, symbolic or artistic basis. All these forms are likewise intertwined in the social virtuum. Fundamental knowledge was formed historically as different kinds of knowledge: proto-scientific, scientific, unscientific, routine-practical, intuitive, and religious. Proto-scientific knowledge existed in the early stages of human history and represented primary information about nature and the visible cosmos. Everyday knowledge is a landmark in a complex, unfamiliar world, a basis for modelling the future and a resource for its foresight, although it can often be contradictory. Nevertheless, everyday knowledge influences the configuration of social matrices, and the viewpoint of each person is taken into account in the process of life. Fundamental scientific knowledge is based on rationality and is characterized by objectivity and universality; it forms and innovates the social virtuum. The virtuum is the result of the innovation of the whole social sphere and of the use of objective knowledge in the fields of cybernetics, mathematics, technology and cosmonautics. The main task of fundamental scientific research is the description, explanation and foresight of processes and phenomena in the innovative sphere of the virtuum as a symbiosis of reality and simulation, as well as knowledge management. Knowledge management is a separate branch of innovation science. The human mind seeks to understand how innovative knowledge is produced. Innovative knowledge may be externalized, collected, accumulated, saved, predicted, used or disseminated; it may be declarative, procedural, meta-knowledge, vague, empirical, theoretical, practical, spiritual-practical, theorized, pre-scientific, scientific, methodological, subjective, objective, social, paradigmatic, or trans-subjective. Many of these models of knowledge are connected to one another and affect the image of reality, constantly introducing adjustments. The management of innovative knowledge interprets knowledge as a form of information filled with context based on experience, continuous and incomplete. Information is the set of all possible data about the world, the sum of all future alternatives.
Data may be a subject of observation, but they are constantly replenished and reinterpreted. In this case, innovative knowledge is information that is continuously analysed, processed, transmitted, and removed. This approach corresponds to the model of knowledge as a pyramid of increasing usefulness, which translates innovative knowledge into the plane of virtual reality, creating virtual knowledge. "Virtual reality", or "artificial reality", is a world created by technical means; it is transmitted to the human being through the senses: sight, hearing, smell and touch. Virtual reality is a purely innovative sphere; it mimics various influences on human consciousness, as well as numerous reactions to those influences. Through the computer synthesis of the properties and reactions of virtual reality, a convincing complex of sensations of reality is created for human consciousness, all reproduced in real time. Virtual-reality objects usually behave similarly to objects of material reality, and the user can influence these objects according to real physical laws. Often, however, for entertainment purposes, users of virtual worlds are allowed more than is possible in real life, and in these possibilities the sense of innovation resides. If virtual reality is a computer world, a simulator, the "virtuum" is the real matrix of social being [15], built as a result of the development and deepening of innovative knowledge.
Chemical Management of Senecio madagascariensis (Fireweed)

Fireweed (Senecio madagascariensis Poir.) is a herbaceous weed producing pyrrolizidine alkaloids that are poisonous to livestock. To investigate the efficacy of chemical management on fireweed and its soil seed bank density, a field experiment was conducted in Beechmont, Queensland, in 2018 within a pasture community. A total of four herbicides (bromoxynil, fluroxypyr/aminopyralid, metsulfuron-methyl and triclopyr/picloram/aminopyralid) were applied either singly or with a repeat application after 3 months to a mixed-age population of fireweed. The initial fireweed plant density at the field site was high (10 to 18 plants m⁻²). However, after the first herbicide application, the fireweed plant density declined significantly (to ca. 0 to 4 plants m⁻²), with further reductions following the second treatment. Prior to herbicide application, fireweed seeds in both the upper (0 to 2 cm) and lower (2 to 10 cm) soil seed bank layers averaged 8804 and 3593 seeds m⁻², respectively. Post-herbicide application, the seed density was significantly reduced in both the upper (970 seeds m⁻²) and lower (689 seeds m⁻²) seed bank layers. Based on the prevailing environmental conditions and nil-grazing strategy of the current study, a single application of either fluroxypyr/aminopyralid, metsulfuron-methyl or triclopyr/picloram/aminopyralid would be sufficient to achieve effective control, whilst a second follow-up application is required with bromoxynil.

Introduction

Madagascar ragwort (Senecio madagascariensis Poir.), commonly known as fireweed in Australia, is a short-lived perennial herb that belongs to the Asteraceae family. It is native to southern Africa and has been introduced to several countries, including Argentina, Brazil, Colombia, Uruguay, Japan, Hawaii, and Australia [1]. Fireweed plants produce pyrrolizidine alkaloids (PAs) that are poisonous to livestock, particularly cattle (Bos taurus L.) and horses (Equus ferus caballus L.). Several general management approaches, including cultural, physical, chemical, biological, or a combination of these, have been used to manage fireweed. It is known to be susceptible to the action of several selective herbicides [2] spanning several mode-of-action groups [3]. For example, fluroxypyr/aminopyralid (HotShot™, Corteva Agriscience Australia, 67 Albert Avenue, Chatswood, New South Wales, Australia, 2067) and triclopyr/picloram/aminopyralid (Grazon™ Extra, Corteva Agriscience Australia, 67 Albert Avenue, Chatswood, New South Wales, Australia, 2067) are combinations of synthetic auxin herbicides (Group 4) [4]. These disrupt plant cell growth in newly forming stems and leaves and negatively affect protein synthesis and normal cell division, leading to malformed growth and tissue tumours [4]. Bromoxynil (Bromicide® 200, Nufarm Australia, 103-105 Pipe Road, Laverton North, Victoria, Australia, 3026) acts as a Photosystem II photosynthetic inhibitor (Group 5), while metsulfuron-methyl (Brush-Off®, Bayer CropScience Australia, 391-393 Tooronga Road, Hawthorn East, Victoria, Australia, 3123), a member of the sulfonylurea group of herbicides (Group 2), impedes the normal function of acetolactate synthase (ALS), a key enzyme in the biosynthesis pathway of the branched-chain amino acids isoleucine, leucine, and valine [4].
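Because the choice of herbicide later in the paper repeatedly turns on these mode-of-action groupings, it can help to keep them in one place. The lookup below simply restates the groupings given in the paragraph above; the dictionary layout itself is only an illustrative way of organising that information, not anything prescribed by the authors.

```python
# Mode-of-action groups as described in the text above.
HERBICIDE_MOA = {
    "fluroxypyr/aminopyralid":         ("Group 4", "synthetic auxin"),
    "triclopyr/picloram/aminopyralid": ("Group 4", "synthetic auxin"),
    "bromoxynil":                      ("Group 5", "Photosystem II inhibitor (contact)"),
    "metsulfuron-methyl":              ("Group 2", "ALS (acetolactate synthase) inhibitor"),
}

for product, (group, mechanism) in HERBICIDE_MOA.items():
    print(f"{product}: {group} - {mechanism}")
```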
A commonplace recommendation for herbicide control of fireweed in Australia is that plants should be sprayed during the early flowering stage of growth, with a follow-up treatment often essential 6 months later [2,5]. According to Sindel and Coleman [2], such herbicide applications are best made to control fireweed populations during April (Autumn) in Australia. In Australia, 2,4-D amine (3.2 kg ha⁻¹) and 2,4-D sodium salt (2 to 4 kg ha⁻¹) have been reported to give good fireweed control without harming proximate pasture species such as blue couch (Digitaria didactyla Wild), blady grass (Imperata cylindrica (L.) Beauv) and white clover (Trifolium repens L.) [8]. Similarly, Motooka et al. [6] suggested that in Hawaiian pastures where forage legumes are mixed with grasses, the amine salt formulation of 2,4-D is preferable because of its mild impact on legumes. In contrast, metsulfuron-methyl at 40 to 80 g ha⁻¹ provided effective control of fireweed in an Australian study, but it severely damaged legumes (such as T. repens) present within the treated pasture [9]. The seed bank is defined as a collection of viable, non-germinated seeds [9] and is an important component of grazed pastures. To develop a suitable, long-term chemical management strategy for any grazed pasture, it is important to have information on the dynamics of the weed and pasture species' seed banks [10]. The soil seed bank significantly contributes to the regeneration ability and future composition of that pasture community [11]. During a typical Australian Autumn (March-May) with average rainfall (94.3 mm in 2019 [12]), most fireweed seeds will germinate in the first 3 months after dispersal from the parent plant, and only a small percentage will remain viable and ungerminated in the seed bank after a year [2]. However, in a relatively drier season, a greater proportion of the seed produced will enter the seed bank and is predicted to maintain its viability there for up to 10 years [2]. In one study, freshly collected fireweed seeds buried 3 cm deep in the soil lost only a small percentage of their viability (from 63 to 54%) over the following 15 months [13]. When Radford [13] assessed the size of the fireweed soil seed bank, they found over 12,000 seeds m⁻² at a heavily infested site, with most seeds found below 1 cm depth in the soil profile. Rapidly reducing or eradicating weed seed banks should be relatively easy if seed production and seed placement into the seed bank can be prevented [14]. In addition, determining the seed bank size and structure of grazed pastures is helpful in devising an effective chemical management approach [14]. To eradicate an invasive weed species like fireweed, it is necessary not only to kill the emerging plants but also to deplete the seed bank. Even with high levels of plant mortality, the soil seed bank may still allow populations to reappear in future years [15]. In estimating the impact of any chemical management approach in an agroecosystem, knowledge of the germination behaviour of the weed species and its seed bank ecology is important [16]. By using ecological population indices, such as the Shannon-Wiener index [10], the effect of chemical management on species diversity within treated communities can be determined.
Thus, the objectives of this study were to: (1) evaluate the impact on fireweed density of several herbicides that have all previously been found efficacious on fireweed but have different modes of action, (2) compare a single dose to a repeated dose, and (3) determine their ability to rapidly deplete the fireweed seed bank while maintaining the species diversity of the pasture community.

Density of Fireweed Plants following Various Chemical Control Approaches

Before applying herbicide treatments, fireweed plant density was relatively high (ca. 10 to 18 plants m⁻²) and not significantly different (p = 0.56) between treatment plots (Figure 1). Following the first herbicide application, new seedling recruitment was detected in the bromoxynil-treated sub-plots but not in any of the other herbicide treatments. Two months after the first herbicide application, a significant reduction (p < 0.05) in fireweed plant density (ca. 0 to 4 plants m⁻²) was observed in all herbicide-treated sub-plots when compared to the control sub-plots (ca. 14 plants m⁻²; Figure 1).

Figure 1. Fireweed plant density (m⁻²) in two types of field plots, those sprayed once (blue line) and those sprayed twice (orange line), recorded over time following the application of metsulfuron-methyl, fluroxypyr/aminopyralid, triclopyr/picloram/aminopyralid, bromoxynil, and control. The data are the mean ± SEM from five replicate plots.

When comparing the single-application plots with the follow-up-application plots, the single application had been efficacious for all herbicides except bromoxynil (Figure 1). For bromoxynil, the follow-up application was required to reduce fireweed density to zero plants m⁻², while the single-application treatment averaged three plants m⁻² (Figure 1). Ten months after application, there was no significant difference (p = 0.10) between the single and the follow-up treatments on fireweed density (Figure 1). In addition, during this time, fireweed density had declined in all plots, including the control. At the end of the experiment (i.e., after 13 months), there was a significant difference (p < 0.05) in fireweed density in the herbicide-applied plots as compared to the control; however, there was not a significant difference (p = 0.15) between the numbers of applications (apart from the bromoxynil-treated plots) (Figure 1). Among the single-application sub-plots, metsulfuron-methyl, fluroxypyr/aminopyralid and triclopyr/picloram/aminopyralid significantly reduced the density of fireweed compared to the control sub-plots, while no significant difference was observed between bromoxynil and the control (Figure 1). However, following the second herbicide application, fireweed plant density dropped to zero in all herbicide-applied sub-plots (including the bromoxynil sub-plots) by the end of the experiment (13 months; Figure 1).

Effect of Chemical Control on Seed Bank Structure

Before the initial application of the herbicides, fireweed seeds were found in both the upper (0 to 2 cm) and lower (2 to 10 cm) soil layers. The fireweed seed density in the upper layer (8804 seeds m⁻²) was significantly (p < 0.05) greater than the seed density in the lower layer (3593 seeds m⁻²) (Table 1). The germinable fireweed seed density across the site varied from 7711 to 10,736 germinable seeds m⁻² in the upper layer and from 3198 to 3972 m⁻² in the lower layer (Table 1).
Fireweed was the most abundant species in the seed bank and accounted for 90% of the total seed bank in both soil layers (Table 1).

Table 1. Germinable soil seed bank species seed density of a fireweed-dominated kikuyu pasture at Beechmont, Queensland, before the application of metsulfuron-methyl, fluroxypyr/aminopyralid, triclopyr/picloram/aminopyralid or bromoxynil, compared with a non-treated control, at two soil depths: upper (0 to 2 cm deep) and lower (2 to 10 cm deep). Life form details give longevity (A = annual or biennial, P = perennial) and life form (F = forb, G = graminoid, S = shrub), weed status (W = weed) and * introduced species.

About 15 species were recorded from the pre-treatment seed bank (Table 1). Fireweed, followed by kikuyu (Pennisetum clandestinum Hochst. ex Chiov.), had the highest seed density (which ranged from 1314 to 4034 seeds m⁻²), consistent with the study site being a kikuyu-sown pasture. The total seed density of all species, including fireweed, in both layers varied from 18,591 to 26,698 seeds m⁻². The pre-treatment Shannon-Wiener index in the top layer (from −0.47 to 1.37) was lower than in the bottom layer (−0.29 to 1.40) (Table 2).

Following two rounds of herbicide application, the germinable seed bank of fireweed was significantly (p < 0.05) reduced (Table 3) due to the absence of any remaining reproductively active plants. Before application, fireweed seeds were found in both the upper and lower soil layers, with a significantly (p < 0.05) greater portion (7710 to 10,736 seeds m⁻²) in the upper layer (Table 1). When assessed 5 months after the second herbicide application, germinable fireweed seeds in the upper layer varied from only 601 (fluroxypyr/aminopyralid) to 1263 m⁻² (control), while in the lower layer they varied from 519 (fluroxypyr/aminopyralid) to 866 m⁻² (bromoxynil) (Table 3). Fireweed seed density had dropped by 28% not only in the herbicide-treated sub-plots but also in the control sub-plots, indicating that a loss of seed viability was occurring. Kikuyu was the dominant species in the seed bank, with only 11 other species observed post-herbicide application (Table 3). In addition, nutgrass (Cyperus rotundus L.) had increased following herbicide application except in the triclopyr/picloram/aminopyralid-treated sub-plots (Table 3). New plant species also appeared (little hogweed, Portulaca oleracea L.; burclover, Medicago polymorpha L.; Carolina bristle mallow, Modiola caroliniana (L.) G. Don), possibly due to seed dispersal into the site, whilst some species that had been present in the first seed bank assessment were lost from the second (e.g., white clover, Trifolium repens L.; marsh parsley, Cyclospermum leptophyllum (Pers.) Sprague ex Britton and P. Wilson; black nightshade, Solanum nigrum L.; vervain, Verbena spp.; pennywort, Hydrocotyle sp.; and American pokeweed, Phytolacca americana L.), possibly due to short-term seed longevity in the seed bank.

Table 3. Species density and change in seed density of germinable soil seed bank species found in a kikuyu pasture at Beechmont, Queensland, after spraying twice with the herbicides metsulfuron-methyl, fluroxypyr/aminopyralid, triclopyr/picloram/aminopyralid or bromoxynil, compared with a non-treated control, at two soil depths: upper (0 to 2 cm deep) and lower (2 to 10 cm deep). Life form entails longevity (A = annual or biennial, P = perennial) and life form (F = forb, G = graminoid, S = shrub), weed status (W = weed) and * introduced species.
The percentage reduction (−) or increase (+) of each species compared to before spraying is shown in parentheses. The total seed density of the seed bank varied from 4562 to 10,491 seeds m⁻² (Table 3).

Effect of Chemical Control on Seed Bank Vertical Distribution

There was a significant difference (p < 0.05) in the density of the germinable seed bank before versus after herbicide application and between the two sampling depths (0 to 2 and 2 to 10 cm), and a significant interaction of these two factors (herbicide application with soil depth) on fireweed seed density in the seed bank. However, there was no significant difference (p = 0.12) between the herbicide treatments, nor in the interaction of seed bank timing with soil depth or with treatment (p = 0.67) (Table 4). The post-treatment Shannon-Wiener index in the top layer (from 0.85 to 1.64) was higher than in the bottom layer (−0.59 to 1.17) (Table 2). There was no significant difference in fireweed seed density within the upper or lower layers prior to herbicide application across all sub-plots, but the seed density was higher in the upper layer (0 to 2 cm) than in the lower layer (2 to 10 cm). However, there was a significant reduction in fireweed seed density after herbicide application in every treatment as well as in the control sub-plots (Table 4). Compared to the control, the greatest reduction in fireweed seed numbers (−18%) was observed in the upper layer of the bromoxynil-treated plots (Table 4).

Table 4. Germinable seed density of fireweed from a kikuyu pasture at Beechmont, Queensland, before and after herbicide treatments, compared with total seed density, the percent of total seed density, the percent reduction of fireweed seed number after herbicide treatment, and the reduction in seed density compared with the non-treated control plot. Measurements were taken before and after herbicide treatment and determined for two soil layers (0 to 2 cm and 2 to 10 cm depth).

Discussion

In the present study, single applications of all the tested herbicides, with their different modes of action, were successful in controlling fireweed growth, except bromoxynil, which required a second follow-up application. The bromoxynil treatment was not as effective in a pasture setting because it is a contact PSII-inhibitor herbicide, whereas the other herbicides used were translocatable and thus more effective in a dense grass community. Bromoxynil is most active on smaller fireweed plants, whereas the efficacy of the other, translocatable herbicide treatments is less limited by plant size and age. Additionally, bromoxynil appears to be active on a wider range of species, resulting in reduced pasture diversity. Initially, the fireweed plant density at the Beechmont field site was relatively high (10 to 18 plants m⁻²). However, following the implementation of treatments, all four herbicides rapidly reduced the density of fireweed. The density remained low thereafter, except for the bromoxynil treatment, which showed some evidence of plant regrowth and seedling recruitment. Consequently, implementing a follow-up application of bromoxynil was advantageous, reducing the fireweed density to zero plants m⁻², whereas the single-application plots contained three plants m⁻² 5 months after the first bromoxynil application. Watson et al. [5] suggested that a follow-up herbicide treatment is often necessary for effective fireweed management.
Similarly, Sindel and Coleman [2] recommended following an initial herbicide treatment with spot spraying in Spring with one of the registered herbicides, such as triclopyr/picloram/aminopyralid or fluroxypyr/aminopyralid. The results from the current study suggest that a second application is not always necessary, but if applied, the timing between the two treatments could vary greatly depending on which herbicide is used. For example, follow-up control using bromoxynil may need to be undertaken much sooner than with fluroxypyr/aminopyralid, metsulfuron-methyl, or triclopyr/picloram/aminopyralid (Figure 1). These or similar herbicides have been shown to give some residual control of seedling recruitment for other herbaceous Asteraceae weeds such as florestina (Florestina tripteris auth.) [17]. In future, we recommend performing herbicide residue studies on the pasture feed to determine the active substances present in the feed. Since all the tested herbicides are selective in their mode of control, they are considered not to damage the pasture grass species in the field. However, other components of the pasture community, including the broad-leaved pasture legumes, may be damaged (Table 3); this aspect of fireweed management with selective herbicides needs further study. According to Anderson and Panetta [8], clover (Trifolium sp.) was severely damaged by metsulfuron-methyl, clopyralid, triclopyr, and triclopyr plus picloram. Although the 2,4-D formulations damaged neither blue couch (Digitaria didactyla Willd.) nor clover, atrazine plus 2,4-D caused severe damage to both species. In addition, the four tested herbicides have different withholding periods, which may also influence which herbicide is selected for use. Bromoxynil has an 8-week withholding period [2]; therefore, grazing or cutting for stock feed should be avoided during that period (APVMA, 2022). Triclopyr/picloram/aminopyralid [18] and fluroxypyr/aminopyralid [19] have a 7-day withholding period; moreover, many plants remain toxic even after death and can become more palatable, so stock should not graze treated areas for 7 days [20,21]. Metsulfuron-methyl, however, has no withholding period [22]. According to the first seed bank analysis, fireweed seeds can be present in the soil in moderately large numbers (3000 to 10,000 seeds m⁻²; Table 1). Ragweed parthenium (Parthenium hysterophorus L.), another invasive Asteraceae weed, can form seed banks as large as ca. 45,000 seeds m⁻² in a habitat similar to that studied here in Southeast Queensland (SEQ) [11]. The results for the fireweed seed bank in this study were similar to those of Sindel et al. [23], who had undertaken studies in two different locations in New South Wales. A recent study undertaken by Karem [24] indicated that fireweed seeds collected from the same Beechmont site in SEQ had an indicatively short life of <1 year in the seed bank. Since a single fireweed plant can produce up to ca. 30,000 seeds [25] that are effectively dispersed by wind, the invasive strategy of this weed must be seen as a balance between high seed production with rapid dispersal and the production of a medium-sized seed bank of short-lived seeds. Land managers should therefore focus more on preventing seed set and dispersal and, to a lesser extent, on preventing the formation of seed banks. The present study has shown that most of the fireweed seed is found in the upper soil layer (0 to 2 cm), while a significant decrease occurs in both layers over time after herbicide application.
In the present trial, undertaken in the absence of grazing livestock, this could be due to the use of management practices that do not disturb the soil surface. The field trial site was not grazed for ca. 10 months following the herbicide treatments; with no cattle to disturb the soil surface, the movement of fireweed seeds deeper into the seed bank of a healthy kikuyu pasture was negligible (Table 3). According to the Shannon-Wiener index, there was no reduction in the seed community biodiversity in the topsoil layer after herbicide application; however, there was an index reduction in the bottom layers of the bromoxynil-treated sub-plots and the control sub-plots. This indicates that, except for bromoxynil, the herbicides did not reduce the biodiversity of the pasture seed community. Bromoxynil, being a contact herbicide that is effective on seedlings [8], may have reduced biodiversity more than the other herbicides. With a contact herbicide, only those parts of the plant that come into direct contact with the herbicide are killed, and the plant will often regrow from the unaffected parts. Significant fireweed seedling recruitment after spraying is often observed [7]. Anderson and Panetta [8] reported that bromoxynil (3 L ha⁻¹) was unsuccessful in controlling mature fireweed plants, with substantial regrowth occurring five months after spraying. Through natural seed decline and increased competition from the kikuyu pasture, a rapid decline in the soil seed bank is expected to follow. The key to this rapid decline will be the elimination of reproductive plants, a critical component of an effective management strategy for fireweed. Compared to the control, the reduction in fireweed seed densities in the herbicide-treated sub-plots was high (Table 3). Among the tested herbicides, the highest seed density reduction percentage was observed in the bromoxynil-treated sub-plots (−18%) (Table 3). Kikuyu was the dominant seed bank species before the herbicides were applied (Table 1), and there was a dramatic increase in kikuyu seed in the soil seed bank afterwards, indicating that the herbicides did not affect kikuyu seed populations or those of any other grasses or sedges (Table 3). Interestingly, even after herbicide application, some new species (P. oleracea, M. polymorpha and M. caroliniana) appeared in the seed bank while some species declined (T. repens, C. leptophyllum, S. nigrum, Verbena sp., Hydrocotyle sp., P. americana) (Table 3). This may be due to seasonal variation in the seed bank. In the present study, when herbicides were used to control mature fireweed plants, the established kikuyu pasture was able to significantly suppress the recruitment of new fireweed plants from the seed bank. However, even without herbicide application (as seen in the control plots; Table 3), a dominant kikuyu pasture can reduce fireweed recruitment and seed input. Perennial pasture species [viz. setaria (Setaria sphacelata Schum.), kikuyu (Pennisetum clandestinum Hochst. ex Chiov.), paspalum (Paspalum dilatatum Poir.), and Rhodes grass (Chloris gayana Kunth.)] that are competitive through late Summer and Autumn will help to prevent the establishment of fireweed seedlings in the Autumn and Winter months [22]. Therefore, in a field situation, a well-established pasture community, given time, may well prevent further fireweed seedlings from emerging if stocking rates are sensibly managed [26].
Study Site
A field site at Beechmont (28°5′32.61″ S; 153°13′11.14″ E) in Southeast Queensland (SEQ), containing a dense infestation of fireweed (more than 90% of plants were adult flowering plants; density of ca. 10 to 18 plants m−2), was selected for this study. The soil was a well-drained red ferrosol with a clay loam-to-clay texture, and the site was dominated by kikuyu grass (Pennisetum clandestinum Hochst. ex Chiov). Several other key species were observed (Table 2). The climate was warm temperate, with rainfall averaging 656.5 mm annually [12]. In the year of study, the mean annual maximum temperature was 22.5 °C and the mean annual minimum temperature was 14.0 °C. At the beginning of the experiment, monthly rainfall was 16.4 mm, and minimum and maximum monthly temperatures averaged 3.8 and 21.6 °C, respectively [12].
Experimental Design
The experiment was established in June 2018 using a split-plot design, with herbicide treatments allocated to main plots, the number of applications allocated to sub-plots, and each treatment replicated five times. Herbicide treatments comprised the application of either bromoxynil, fluroxypyr/aminopyralid, metsulfuron-methyl or triclopyr/picloram/aminopyralid, plus an untreated control (no herbicide). All herbicides were applied at their recommended rates for the management of fireweed (Table 5). Sub-plot treatments were either a one-off herbicide application (July 2018) or two applications, whereby the herbicide treatment was repeated 3 months after the first application (i.e., in October 2018). To set up the trial, five treatment blocks were established parallel to each other, with main plots 10 × 10 m in size, each divided into two 5 × 5 m sub-plots. The site was fenced, and livestock were excluded for the full duration of the study (ca. 10 months).
Herbicide Application
The herbicides were applied using a Makita EVH2000 24.5 cm³ four-stroke petrol backpack sprayer equipped with a 2.5 m swath hand-held boom containing four nozzles (spaced 50 cm apart) and delivering a carrier spray volume of 800 L ha−1 (2.0 L per 25 m² plot). A single pass of the boom was undertaken when spraying plots, with the height maintained at 0.5 m above soil level by attaching a weighted 50 cm vertical guide to the boom. A pressure gauge mounted on the handle of the boom confirmed an operating pressure of 120 kPa, and a portable metronome ensured that a constant walking speed of 4.0 km h−1 was maintained. The Turbo TwinJet flat spray nozzles (TTJ60-11002) used in this experiment were supplied by Spraying Systems (Wheaton, IL, USA). All spray solutions contained 2% (v/v) Pulse Penetrant® (1.0 kg L−1 organo-modified polydimethylsiloxane) obtained from Nufarm Australia Limited (Laverton North, Victoria, Australia).
Plant Density
To determine the effect of the herbicides on fireweed density over time, two quadrats (1.0 × 1.0 m) were placed randomly within each sub-plot, and the density of fireweed plants was determined at 0, 2, 3, 5, 7, 10, and 13 months after herbicide application. To identify dead plants, the outer epidermal layer of the stem was scraped away to reveal the inner tissues. Live stems had a green cambium layer immediately beneath the epidermal layer and green or white tissue inside, whereas dead tissues appeared a distinct brown colour.
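As a quick cross-check of the spray calibration described under Herbicide Application above, the stated figures (800 L ha−1, 2.5 m swath, 4.0 km h−1, four nozzles, 25 m² sub-plots) are mutually consistent; a minimal Python sketch, where the derived boom output is only implied by those stated values, not reported in the study:

# Sketch: verify the spray calibration figures quoted above.
CARRIER_L_PER_HA = 800.0
SWATH_M = 2.5
SPEED_KMH = 4.0
N_NOZZLES = 4

# Volume applied to one 5 m x 5 m (25 m^2) sub-plot.
litres_per_m2 = CARRIER_L_PER_HA / 10_000.0          # 1 ha = 10,000 m^2
print(f"Per-plot volume: {litres_per_m2 * 25.0:.1f} L")   # -> 2.0 L, as stated

# Implied total boom output at the stated speed and swath.
area_per_min_m2 = (SPEED_KMH * 1000.0 / 60.0) * SWATH_M   # m^2 covered per minute
boom_L_per_min = litres_per_m2 * area_per_min_m2
print(f"Boom output: {boom_L_per_min:.2f} L/min "
      f"({boom_L_per_min / N_NOZZLES:.2f} L/min per nozzle)")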
Seed Bank Density
Before the herbicides were applied to the trial site, soil seed bank samples were taken in July 2018 from each sub-plot designated to receive the follow-up treatment, as well as from the untreated control plots. To take soil samples, two 1 × 1 m quadrats were randomly placed within each of the designated sub-plots, and five cylindrical soil cores (5 cm in diameter and 10 cm deep) were extracted from each quadrat (one from each of the four corners and one from the centre) using a soil corer. Soil cores were then separated into two depths, 0 to 2 and 2 to 10 cm. The soil seed bank samples taken from the same depth in both quadrats were pooled into one sample per sub-plot. The two soil samples from the two quadrats were placed into separate plastic bags, sealed, and stored at ambient temperature for 2 to 3 days. They were then spread thinly over a 2 cm layer of Gatton media compost (Osmocote 8-9 M, Osmocote 3-4 M, Nutricote 7 M, coated iron, moisture aid, dolomite, and Osmoform) contained within shallow germination trays (20 × 25 × 6 cm; w/l/d). All trays were then distributed randomly on top of a greenhouse bench at the University of Queensland, Gatton, in July 2018. The temperature in the greenhouse was maintained close to the outside ambient temperature (mean annual maximum of 26.9 °C; mean annual minimum of 13.0 °C). Two control trays were placed among the experimental trays to check for compost or greenhouse seed contaminants. All trays were watered daily to maintain soil moisture at or close to field capacity. The trays were observed regularly for newly emerging seedlings; when observed, seedlings were marked with a cocktail stick and initially recorded as either 'fireweed' or 'other species'. Once seedlings were large enough to be identified, they were counted and removed. If they could not be identified, representative individuals were planted into pots and grown to maturity for further taxonomic identification using the appropriate literature [27,28]. When seedling emergence ceased, the soil in the trays was dried for 2 weeks before being stirred, rewetted, and inspected for any further seedling emergence over a further 9 months [11]. Soil sampling at the field site was undertaken again in March 2019, 5 months after the follow-up herbicide treatments had been applied in October 2018. The same procedure as described above was used to collect soil samples (from the follow-up sub-plots) and to monitor seedling emergence in the greenhouse. At this time, the temperature in the greenhouse was again maintained close to the outside ambient temperature (mean maximum during March 2019 of 30.0 °C; mean minimum of 17.5 °C). The species diversity of the soil seed bank was calculated using the Shannon–Wiener index, H′ = −Σ Pi ln Pi (summed over i = 1 to S), where S is the number of species and Pi is the proportion of the total of all species' individuals per quadrat represented by the ith species [10].
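A minimal Python sketch of this diversity calculation; the seedling counts and function name are illustrative, not data from this study:

import math

def shannon_wiener(counts):
    # H' = -sum(p_i * ln p_i) over species present in the sample
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

# Hypothetical seedling counts per species from one germination tray.
counts = [120, 35, 10, 5, 2]
print(round(shannon_wiener(counts), 3))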
Statistical Analysis
Plant density data were subjected to analysis of variance (ANOVA) to compare plant density among treatments, with means separated using the least significant difference (LSD) test (p = 0.05). An ANOVA was also performed to compare fireweed seed densities between the two seed bank samplings (before and after spraying) after the data had been log-transformed; treatment means were again compared by the LSD test (p = 0.05). All data were analyzed using R statistical software (Version 3.6.3).
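The analyses were run in R; the following Python sketch illustrates the same workflow (log transformation, one-way ANOVA, unadjusted LSD-style pairwise comparisons) on hypothetical seed-density values, which are placeholders rather than the study's data:

import numpy as np
from scipy import stats

# Hypothetical fireweed seed densities (seeds per sample) for three treatments.
groups = {
    "control":     np.array([410, 380, 455, 400, 390]),
    "bromoxynil":  np.array([330, 310, 350, 300, 340]),
    "metsulfuron": np.array([150, 170, 140, 160, 155]),
}

# Log-transform (as in the text) and run a one-way ANOVA.
logged = {k: np.log(v) for k, v in groups.items()}
f_stat, p_val = stats.f_oneway(*logged.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_val:.4f}")

# LSD-style pairwise comparisons: unadjusted t-tests at alpha = 0.05,
# run only if the overall ANOVA is significant.
if p_val < 0.05:
    names = list(logged)
    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            t, p = stats.ttest_ind(logged[names[i]], logged[names[j]])
            print(f"{names[i]} vs {names[j]}: p = {p:.4f}")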
Conclusions
The herbicides fluroxypyr/aminopyralid, bromoxynil, metsulfuron-methyl and triclopyr/picloram/aminopyralid are effective in controlling fireweed plants. Fireweed seeds dominated both the upper (0 to 2 cm) and lower (2 to 10 cm) soil layers, with the highest density of fireweed seeds observed in the upper layer. Even with a high seed load being produced at the Beechmont site, several of the herbicides tested were effective in controlling plants and reducing the density of fireweed seeds entering the soil seed bank. According to the Shannon–Wiener index, no herbicide other than bromoxynil reduced the biodiversity of the pasture seed community. For bromoxynil, a follow-up application was more effective than a single application, whereas for the other tested herbicides a single application was sufficient to control fireweed plants at all stages of development at the Beechmont site; this was, however, under a healthy pasture stand and a stocking rate of zero. Thus, for the effective management of fireweed, a single dose of one of three herbicides (fluroxypyr/aminopyralid, metsulfuron-methyl or triclopyr/picloram/aminopyralid), combined with excluding grazing for a minimum of 10 months, can be efficacious. However, given the presence of fireweed in the soil seed bank, subsequent follow-up herbicide control may still be necessary at some stage, depending on prevailing environmental conditions. Further research is now needed to evaluate the effect of pasture competition on fireweed establishment and to determine pasture density thresholds that can prevent recruitment. Further research is also needed to evaluate the impact of increased grazing pressure, the effect of different kinds of grazing stock (cattle, sheep, or goats) on established populations of fireweed, and how bare and disturbed ground affects fireweed establishment. In addition, besides selecting the most effective herbicide, cost considerations will also need to be included in the decision-making process.
Funding: We thank the University of Queensland and the Department of Agriculture and Fisheries, Brisbane (RM 2018000131) for funding part of this project.
Is the Co-administration of Metformin and Clomiphene Superior to Induce Ovulation in Infertile Patients With Poly Cystic Ovary Syndrome and Confirmed Insulin-Resistance: A Double Blind Randomized Clinical Trial Objective: This study aimed to compare the effects of clomiphene citrate (CC) combined with metformin or placebo in infertile patients with polycystic ovary syndrome (PCOS) and insulin resistance (IR). Materials and methods: In this prospective, double-blind, randomized, placebo-controlled trial, we included 151 infertile women with PCOS and IR at a university hospital between November 2015 and April 2022. Patients were randomized into two groups: group A received CC plus metformin (n = 76) and group B received CC plus placebo (n = 75). The ovulation rate was the main outcome measure; clinical pregnancy, ongoing pregnancy, live birth and abortion rates were secondary outcome measures. Results: There was no remarkable difference in ovulation rate between the two groups. Moreover, no significant differences were observed in clinical pregnancy, ongoing pregnancy, live birth or abortion rates between the two groups. A larger proportion of women in group A suffered from side effects of metformin (9.3% versus 1.4%; p = 0.064), although this difference was not significant. Conclusion: In insulin-resistant infertile women with PCOS, metformin pre-treatment did not increase the ovulation, clinical pregnancy or live birth rates in patients on clomiphene citrate. Polycystic ovary syndrome influences 12-21% of women of reproductive age (1). The most important features of this syndrome include chronic anovulation, hyperandrogenism, and sometimes obesity (2). Clomiphene citrate (CC) is still a widely chosen primary infertility treatment in women with polycystic ovary syndrome (3). Nevertheless, 20-25% of women with PCOS fail to ovulate and do not respond because of resistance to CC (4). Hyperinsulinemia and insulin resistance (IR), main factors in the pathophysiology of PCOS, have been shown to contribute to ovarian hyperandrogenism and may directly affect, and even prevent, ovulation (5). Levels of luteinizing hormone, insulin and ovarian androgens can be reduced by metformin, an insulin-sensitizing agent (6). Thus, several studies have assessed the possible benefit of metformin, alone or together with clomiphene, as primary therapy for infertile women with PCOS, but they have shown varying outcomes (7-11). A 2017 meta-analysis concluded that metformin alone could enhance the ovulation rate in women with PCOS compared with placebo, but that it is not recommended as first-line treatment for anovulation because oral ovulation induction agents such as CC or letrozole alone result in far better rates of ovulation, pregnancy, and live birth in women with PCOS. The same analysis showed that although metformin together with CC enhances ovulation and clinical pregnancy rates, it does not affect the live birth rate compared with CC alone, either in women with PCOS overall or in CC-resistant women with PCOS (12). A 2019 Cochrane review reached similar conclusions and suggested a potential advantage in ovulation and pregnancy rates, irrespective of BMI, when using metformin compared with placebo. Both reviews, however, recommended that more studies be performed to determine whether metformin produces better results in particular subgroups of women with different PCOS phenotypes, for example insulin-resistant or obese women, or women of different ethnic backgrounds (13). IR can be reduced by metformin (14).
Nevertheless, no single randomized clinical trial (RCT) has studied metformin specifically in insulin-resistant patients with PCOS. The present study therefore compares CC co-administered with metformin against CC co-administered with placebo for ovulation induction in infertile patients with PCOS and IR.
Materials and methods
The relevant institutional ethics committee (…23) supervised and approved this trial. The inclusion criteria were insulin-resistant women with PCOS aged 20-36 years who had a period of infertility of more than 1 year and normal serum prolactin (PRL) and thyroid-stimulating hormone (TSH; women with hypothyroidism entered the study after treatment). They had patent tubes documented by hysterosalpingography performed at least 6 months prior to entering the study, and the partner had a normal sperm analysis based on World Health Organization (WHO 1992) criteria (15). All the women had a homeostasis model assessment of insulin resistance (HOMA-IR) ≥ 2.3. The exclusion criteria were: diabetes; use of any medicine that could affect pituitary-gonadal function or carbohydrate metabolism within a minimum of 2 months prior to the study; hypertension or disturbed liver or renal function tests; and a history of ovarian drilling. Hirsutism was defined as a score ≥ 8 on the Ferriman-Gallwey scale (16). The diagnosis of PCOS was based on fulfillment of at least two of the three Rotterdam criteria (17). Hyperandrogenism was confirmed on the basis of hirsutism or an elevated testosterone level (18). A random block allocation method was used for randomization. The allocation order was concealed in sequentially numbered, sealed envelopes, prepared by a statistician from the Clinical Epidemiology Unit of the Research Center. A nurse who was unaware of the study opened the relevant envelope after the enrollment of each patient. Sample size: Based on the results of the study by Moll et al. (19), the required sample size was calculated as 388 using the G*Power 3.1.9.2 package, comparing two proportions with type I and type II error rates of 5% and 20%, respectively, equal group sizes, and a one-sided test. Unfortunately, due to the COVID-19 pandemic and the social disruption that followed it, we were unable to reach the estimated sample size and could enroll only 151 patients. Study design: All women underwent clinical examination and the results were recorded. A nurse who was blinded to the admission numbers of the patients measured their weight, height and BMI; BMI was calculated as weight divided by height squared (kg/m²). Following an overnight fast of 10 to 12 h, we obtained blood samples to determine insulin, FSH, LH, estradiol, total and free testosterone, androstenedione, DHEAS, PRL, TSH, fasting blood glucose (FBG), and 17-OH progesterone on the 3rd day of a regular menses or of progestin-induced withdrawal bleeding (medroxyprogesterone acetate, 5 mg per day for 10 days). All blood samples were centrifuged and stored at −20 °C so that they could be assayed together. Enzyme-linked immunosorbent assay (ELISA, Wernecce, 3200) (Ideal, Tehran, Iran) was used to measure serum levels of FSH, LH, estradiol, TSH and PRL, and an ELISA assay (Monobind, Inc., USA) was used to measure total testosterone, free testosterone, androstenedione, DHEAS and insulin. An enzymatic colorimetric method with glucose oxidase was used to measure fasting plasma glucose (FPG). Insulin resistance was assessed by the HOMA-IR method according to the formula: fasting insulin (mU/L) × fasting glucose (mmol/L) / 22.5. We defined IR as HOMA-IR ≥ 2.3, as reported by Hosseinpanah et al. in a study of Iranian women (20).
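A minimal Python sketch of this computation; the patient values are hypothetical, and the 2.3 cut-off is the enrolment threshold stated above:

def homa_ir(fasting_insulin_mU_L, fasting_glucose_mmol_L):
    # HOMA-IR = fasting insulin (mU/L) x fasting glucose (mmol/L) / 22.5
    return fasting_insulin_mU_L * fasting_glucose_mmol_L / 22.5

# Hypothetical patient: insulin 12 mU/L, glucose 5.2 mmol/L.
score = homa_ir(12.0, 5.2)
print(f"HOMA-IR = {score:.2f}; insulin resistant: {score >= 2.3}")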
After completion of the baseline studies, we divided the patients into two groups. Group A received metformin (Chemformin, Chemidarou, Tehran, Iran) at a dose of 500 mg three times a day for 8 weeks; to reduce side effects, the dose was escalated from one to three tablets per day over the first seven days. Group B patients received a placebo (identical in appearance to metformin and produced by the same manufacturer). We measured progesterone levels in all patients every other week and confirmed ovulation if the level was > 5 ng/mL (16 nmol per liter). In the event of pregnancy, we continued metformin for another 12 weeks. If patients failed to ovulate by the end of this period, we continued metformin or placebo and started CC at 100 mg daily for 5 days from day 3 of a regular menses or of progestin-induced withdrawal bleeding. A single sonographist measured ovarian follicular size every other day from day 10 of the cycle by transvaginal sonography. When at least one follicle reached ≥ 18 mm in diameter, we gave 5000 IU of HCG (Pregnyl; N.V. Organon, Oss, Netherlands) intramuscularly and advised timed intercourse (every other day for one week beginning after the HCG injection). If no follicle reached ≥ 12 mm by day 16 of a CC cycle, we considered the cycle anovulatory and stopped monitoring. We defined clinical pregnancy as the presence of a gestational sac detected by transvaginal sonography beginning one week after the missed period. Patients who ovulated with CC but failed to conceive were advised to undergo two further similar treatment cycles with 100 mg CC (groups A and B). If they did not ovulate with 100 mg CC, we increased the dose to 150 mg and used the same treatment protocol. Metformin in group A and placebo in group B were maintained during the CC cycles. Outcome measures: The ovulation rate was the primary outcome measure. Clinical pregnancy, ongoing pregnancy, live birth and abortion rates were secondary outcome measures. Statistical analysis: We used SPSS software (Version 11.5.0, © SPSS Inc.) to enter and analyze all the data. We used the mean, standard deviation (SD), count, percentage, median, and interquartile range (Q1, Q3) to describe the data as appropriate. We used the Chi-square test (or Fisher's exact test where necessary) for qualitative variables and the t-test (or Mann-Whitney test where needed) for quantitative variables. In all tests, p < 0.05 was regarded as significant. Results We assessed 820 women with PCOS for eligibility and excluded 669, leaving 151 women in this study: 76 were allocated to group A (metformin + CC) and 75 to group B (placebo + CC). After taking metformin or placebo alone, 10 women became pregnant (Figure 1). Finally, 144 women received CC. Baseline demographic and infertility histories were similar in both groups (Table 1). There were no remarkable differences in ovulation (p = 0.304) or clinical pregnancy (p = 0.79) rates between metformin and placebo. There were also no significant differences in ovulation (p = 0.308), clinical pregnancy (p = 0.957), ongoing pregnancy (p = 0.920), live birth (p = 0.687) or abortion (p = 0.938) rates between the two groups (Table 2). There were 2 ectopic pregnancies (1 in group A and 1 in group B); both were treated with a single dose of methotrexate. There was one twin pregnancy in group B, delivered at 27 weeks of gestation.
Seven patients complained of adverse effects of metformin (2 discontinued metformin; 5 decreased the dose from 1500 to 1000 mg or 500 mg). The most common side effects of metformin were, in order, nausea, tenesmus, diarrhea, dizziness and headache (Table 2). Discussion In our study, no significant differences were detected in ovulation rate, clinical and ongoing pregnancy rates, or live birth rate between metformin and placebo when combined with CC as ovulation induction treatment in insulin-resistant patients with PCOS. More women in the metformin group experienced adverse effects of metformin compared with placebo, although this difference was not significant. Several studies have addressed whether to use metformin alone or together with CC as primary therapy to induce ovulation in women with PCOS and a history of infertility; however, they have shown contradictory outcomes. Some favoured metformin together with CC (10,11), while others suggested that metformin and CC were not a useful combination as first-line ovulation induction in these women (7,9) and that CC alone should still be prescribed for this purpose (8,19). Moll et al., in a 2006 RCT in women with PCOS, demonstrated no remarkable differences in either the rate of ovulation (64% vs. 72%) or the rate of ongoing pregnancy (40% vs. 46%) between CC combined with metformin and CC combined with placebo. In their study, a significantly larger number of patients in the metformin group (16% vs. 5%) stopped taking metformin because of its adverse effects (7). The same group later concluded, in a 2007 meta-analysis (19), that CC alone, and not CC combined with metformin, should be recommended for these women because metformin caused more side effects than placebo. Legro et al., in a 2007 RCT in infertile women with PCOS, likewise showed that CC is more effective than metformin in achieving live birth (8). Several studies have also compared the effectiveness of metformin with CC as ovulation induction agents according to body mass index (BMI) in women with anovulatory PCOS, again with contradictory results (21,22). Nestler et al. in 1998 and Morin-Papunen et al. in 2012 showed, in two separate studies, that the ovulatory response to clomiphene can be improved by metformin in obese women with PCOS (6,22). In contrast, a 2011 meta-analysis by Johnson et al. in women with PCOS and a BMI ≤ 30-32 kg/m² did not demonstrate a significant difference in pregnancy and live birth rates between metformin and clomiphene citrate (21). IR occurs frequently in CC-resistant and obese women, and metformin treatment may prove more beneficial for this subgroup (13); however, there are also inconsistencies in the reported results of metformin use in CC-resistant women (23,24).
(Figure 1 flow details: one arm — received CC (n = 66); discontinued CC due to OHSS (n = 1) or cyst formation (n = 1, but remained in the intention-to-treat analysis); ovulation with CC (n = 50); pregnancy with CC (n = 20). Other arm — received CC (n = 68); discontinued CC due to OHSS (n = 1) or cyst formation (n = 1, but remained in the intention-to-treat analysis); ovulation with CC (n = 44); pregnancy with CC (n = 20).)
Finally, in 2017, the Practice Committee of the American Society for Reproductive Medicine, and in 2019 a high-quality Cochrane review, reported that metformin together with CC may improve ovulation and pregnancy rates compared with CC alone in women with CC-resistant PCOS, but that there were not enough data available to establish whether this improves the live birth rate in this group. They also suggested that larger, sufficiently powered randomized trials are required in closely targeted populations of women with different PCOS phenotypes to determine for which group the use of metformin may be most beneficial (12,13). These conflicting outcomes of previous studies may stem from large differences between the study populations, especially as regards body weight and insulin sensitivity. Insulin resistance is a metabolic disorder in which insulin fails to produce normal glucose uptake and utilization (25). Only a few randomized studies have examined insulin-sensitizing drugs in insulin-resistant women with PCOS (26). Sahin et al., in a 2003 RCT of 21 infertile women with PCOS, 13 of whom were insulin resistant, demonstrated that metformin combined with CC did not remarkably improve ovulation and pregnancy rates compared with CC alone, despite a reduction in hyperinsulinemia and hyperandrogenemia (9). Liu et al. in 2006 studied 146 women with PCOS (one third of whom had IR) and showed that, after treatment with metformin, the rates of ovulation and pregnancy were identical in women with normal and abnormal glucose-to-insulin ratios (27). We could not show any beneficial effect of metformin as an ovulation induction agent in the treatment of overweight, insulin-resistant women with PCOS and infertility. This may be due to several factors. First, the number of patients was smaller than the estimated sample size, which imposes some limitations; to the best of our knowledge, however, this is the first double-blind RCT in this field. Second, considering the high BMI in our study population, the dose of metformin may have been too low. Recently, Morgante et al., in a prospective non-randomized cohort study of 108 overweight and obese insulin-resistant women with PCOS, demonstrated a close connection between metformin dose, BMI and hyperandrogenism, suggesting that the higher the BMI, the higher the dose of metformin needed to achieve an effective reduction of IR in these patients (28). Third, some studies have shown that metformin reduces IR (6,9), but this does not always happen. Acbay and Gundogdu, in a single-blind study of 16 women with PCOS and IR, showed that metformin did not reduce IR and suggested that the cause of insulin resistance in PCOS differs from that in other common insulin-resistant conditions such as obesity and non-insulin-dependent diabetes mellitus (29). We did not measure HOMA-IR after metformin treatment, so we do not know whether it decreased. Conclusion In insulin-resistant infertile women with PCOS, metformin pre-treatment and co-administration with CC did not increase the ovulation, clinical pregnancy or live birth rates compared with CC and placebo. Further studies with larger sample sizes and greater power are needed to determine whether there are clinical, biological, or laboratory indicators that can help identify the best treatment choice for insulin-resistant women with PCOS.
Ultra-small cobalt nanoparticles from molecularly-defined Co–salen complexes for catalytic synthesis of amines† We report the synthesis of in situ generated cobalt nanoparticles from molecularly defined complexes as efficient and selective catalysts for reductive amination reactions. In the presence of ammonia and hydrogen, cobalt–salen complexes such as cobalt(ii)–N,N′-bis(salicylidene)-1,2-phenylenediamine produce ultra-small (2–4 nm) cobalt nanoparticles embedded in a carbon–nitrogen framework. The resulting materials constitute stable, reusable and magnetically separable catalysts, which enable the synthesis of linear and branched benzylic, heterocyclic and aliphatic primary amines from carbonyl compounds and ammonia. The isolated nanoparticles also represent excellent catalysts for the synthesis of primary, secondary and tertiary amines, including biologically relevant N-methyl amines.
Introduction
In recent years, 3d metal-based nanoparticles (NPs) have emerged as promising catalysts for the synthesis of functionalized and complex organic molecules for advanced applications in the life and material sciences.1 Traditionally, such syntheses are performed using homogeneous organometallic complexes,2 which are often sensitive and more difficult to recycle compared to heterogeneous materials.1,2 For the preparation of stable but at the same time active and selective NPs, the use of suitable precursors and optimal methods is crucial.1 Commonly, nanoparticles are prepared by chemical reduction processes, calcination or pyrolysis in the presence of suitable supports and metal precursors. The resulting materials are applied particularly in industrially relevant benchmark reactions of less functionalized molecules.3 In recent years, however, there has been increasing interest in using such catalysts for advanced organic synthesis, specifically for the preparation of life science products.1 In this respect, the preparation of specific NPs by immobilization and pyrolysis of organometallic complexes or metal-organic frameworks (MOFs) on heterogeneous supports has also attracted attention.1,4 These supported NPs show high activity and selectivity for the preparation of functionalized amines,1d-i nitriles,1k,4c carboxylic acid derivatives,1f,k and cycloaliphatic compounds.1j Although this preparation represents a highly useful tool for producing novel nano-structured catalysts on the lab scale, upscaling can be difficult and requires specialized equipment.1,4 Thus, alternative, more convenient methods are highly desired. One possibility is the practical in situ generation of active heterogeneous NPs.5 Based on this idea, herein we report a straightforward approach for the in situ generation of cobalt-based NPs from molecularly-defined metal complexes and their application in reductive amination reactions using ammonia and molecular hydrogen (Fig. 1). The resulting amines represent privileged molecules widely used in chemistry, medicine, biology, and material science.6 For their synthesis, catalytic reductive amination of carbonyl compounds using molecular hydrogen is widely applied as a cost-effective method.
Fig. 1 In situ generation of Co-NPs for reductive aminations.
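As background to the results that follow, reductive amination with ammonia proceeds by condensation of the carbonyl compound to an imine, which is then hydrogenated over the catalyst; a schematic rendering for an aldehyde R-CHO (our notation, not a scheme from the paper):

\[
\mathrm{R{-}CHO} + \mathrm{NH_3} \;\rightleftharpoons\; \mathrm{R{-}CH{=}NH} + \mathrm{H_2O},
\qquad
\mathrm{R{-}CH{=}NH} + \mathrm{H_2} \;\xrightarrow{\text{Co NPs}}\; \mathrm{R{-}CH_2NH_2}
\]

Selectivity for the primary amine requires suppressing further condensation of the product amine with the aldehyde, which is why catalyst choice and conditions matter.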
Results and discussion
In situ generation of Co-NPs and their activities
Following our concept, we initially investigated the reaction of cobalt-salen complexes to obtain NPs. For example, using cobalt-N,N′-bis(salicylidene)-1,2-phenylenediamine (complex I) in water-THF as solvent in the presence of ammonia and molecular hydrogen at 120 °C, a black precipitate of Co NPs is formed, which can be magnetically separated (Fig. 1 and S3†). To explore their reactivity, preliminary catalytic experiments were carried out for the reductive amination of 4-bromobenzaldehyde 1 to 4-bromobenzylamine 2 in the presence of ammonia and molecular hydrogen (Fig. 2). Indeed, using a mixture of cobalt(II) acetate and N,N′-bis(salicylidene)-1,2-phenylenediamine (L1) led to the formation of 15% of 2. In contrast, testing simple cobalt(II) acetate under the same conditions produced no desired product. Remarkably, the defined complex Co-L1 (complex I) exhibited excellent activity as well as selectivity in the benchmark reaction (98% of 4-bromobenzylamine). In addition, other molecularly-defined Co-salen complexes were also tested (Fig. S1†): complexes II-IV showed good activity (85-90% yield), while complex V resulted in a lower product yield (50%). In all cases of active complexes, the reaction mixtures turned black after some hours. Hence, we assumed that the in situ formed cobalt NPs are the "real" active species in the reductive amination reaction. To confirm this, we performed a standard mercury test in the presence of complex I; after addition of 15 mg Hg, the reductive amination reaction did not occur. Hot filtration of the NPs and testing of the filtrate showed that the Co NPs did not pass into solution as soluble species. Monitoring the benchmark reaction at different time intervals showed a prolonged catalyst preformation period: only after 10 h did 4-bromobenzylamine start to form (Fig. S3†). Apparently, complex I generates nanoparticles slowly, which then catalyze the desired amination process. For comparison, we also prepared cobalt nanoparticles separately by mixing complex I, ammonia and hydrogen (see S7a†). After isolation, they were tested under similar conditions and exhibited activity and selectivity comparable to those of the in situ generated particles. Due to their physical properties, the Co NPs could be magnetically separated and were conveniently re-used up to three times (Fig. 2); however, after the third cycle we observed a significant decrease in activity and selectivity. In addition, the stability of the catalyst system was confirmed by recycling the NPs after a reduced reaction time (Fig. S4†). Next, we compared the reactivity of these active NPs with related supported NPs. Addition of a carbon or silica support to the reaction led to completely inactive materials (Fig. 2). On the other hand, materials prepared by immobilization of complex I on carbon or silica and subsequent pyrolysis gave catalysts with moderate activity (Fig. 2; 40-50% yield of 2). In addition, specific cobalt nanoparticles were prepared by chemical reduction of cobalt salts11 and tested for their activity; however, none of these cobalt nanoparticles formed the desired product, 4-bromobenzylamine (Table S1,† entries 5-6). All these results reveal the superiority of the simply in situ generated Co NPs (Fig. 3). TEM analysis of the cobalt particles at different magnifications showed sheets and, in some places, thread-bundle-like morphology in which cobalt nanoparticles are embedded in a carbon-nitrogen framework (Fig. 4). Further detailed morphological investigations were performed by HRTEM-STEM analysis.
A close inspection of HRTEM images at 20 nm revealed the presence of ultra-small (2-4 nm) cobalt nanoparticles (Fig. 4 and S6†) supported on graphitic carbon. HAADF elemental mapping displayed a homogeneous distribution of the cobalt nanoparticles (Fig. 4). In the case of the recycled catalyst, we observed that these particles were still intact, with no noticeable changes in morphology (Fig. S8†). XRD patterns of the in situ generated and reused Co nanoparticles show no variation in phase composition (Fig. S9†). Two allotropes of metallic cobalt were identified, one with a face-centred cubic arrangement (Co-fcc, space group Fm-3m, PDF card 01-089-7093) and the other with hexagonal close packing (Co-hcp, space group P63/mmc, PDF card 01-089-7373). Elemental analysis of the bulk material showed 96.8 wt% Co, 0.15 wt% C and only 0.5 wt% N. The C 1s spectrum comprises several components (relative contributions of …09, 19.30, 8.59, 3.20 and 4.82%, respectively), consistent with the graphitic nature of the carbon material (Fig. 5a). The presence of a specific N 1s peak at 399.7 eV confirms pyrrolic nitrogen (Fig. 5b). The three peak components at 529.5, 531.4, and 532.9 eV in the O 1s spectrum originate from the presence of Co(OH)2 (6.99%), C=O (73.09%), and O-C (19.92%) on the surface of the cobalt, revealing partial oxidation at the surface of the optimal material (Fig. 5c). In agreement, the two main component peaks with binding energies at 780.7 eV (50.81%) and 782.5 eV (22.54%) confirm the presence of Co2+ (Co(OH)2) (Fig. 5d).12a Three small peaks with binding energies at 778.09 (2.45%), 781.09 (0.59%) and 783.09 (0.35%) eV indicate the presence of metallic cobalt.12b The HR-XPS of the reused catalyst revealed no shift in the binding energy of the Co 2p3/2 peak, although the ratio of metallic cobalt to cobalt hydroxide changed slightly (Fig. S11d†). On the other hand, no perceptible change in the binding energies of C 1s and N 1s was discerned, except for a slight shift of the pyrrolic nitrogen peak from 399.7 eV to 400.2 eV, reiterating that there is no apparent alteration in the chemical nature of the carbon shell of the catalyst (Fig. S11a and b†). It is interesting to note that these cobalt particles exhibit ferromagnetic behaviour with distinct values of coercive field and remanent magnetization (Fig. S12†). Ferromagnetic behaviour at room temperature is due to the stronger effect of the magnetic dipole interaction compared with thermal fluctuations. We do not observe any blocking temperature, suggesting a particle size above 10-15 nm (i.e., the system is not superparamagnetic at room temperature).
Synthesis of linear primary amines
Next, we tested the general applicability of our in situ generated nanoparticles for the synthesis of primary amines. As shown in Schemes 1 and 2, a variety of structurally diverse and functionalized benzylic, heterocyclic and aliphatic linear and branched primary amines can be prepared in good to excellent yields. Simple and substituted aldehydes underwent smooth reaction to give primary benzylic amines in up to 92% yield (Scheme 1, products 3-7). For example, fluoro-, chloro-, and bromo-substituted benzaldehydes produced the corresponding amines without significant dehalogenation in 86-92% yields (Scheme 1, entries 8-12).
Different functionalized benzylic amines containing methoxy, trifluoromethoxy, dimethylamino, and ester groups as well as C-C double bonds were synthesized in up to 95% yield (Scheme 1, products 14-23). In addition to benzylic amines, primary aliphatic amines were also prepared under similar conditions (Scheme 1, products 25-27). Interestingly, the natural product perillaldehyde was successfully aminated to the corresponding amine in 87% yield (product 27).
Synthesis of branched primary amines
Next, we tested the reductive amination of ketones (Scheme 2), which is more challenging than that of aldehydes. Nevertheless, at higher temperature (130 °C), nine aromatic and six aliphatic branched primary amines were prepared in up to 92% yield. In addition, Co NPs prepared separately from complex I gave yields of amines similar to those obtained with the in situ generated nanoparticles.
Synthesis of secondary and tertiary amines
Apart from the synthesis of primary amines, we explored the applicability of the Co NPs for the synthesis of secondary and tertiary amines. Interestingly, testing complex I, which generates the active NPs (vide supra), for the reaction of benzaldehyde and aniline at 120 °C in the presence of molecular hydrogen (40 bar) led to the formation of the imine (N-benzylideneaniline) as the sole product, and under these conditions no nanoparticles could be isolated after the reaction. Apparently, the presence of both ammonia and hydrogen is required for the generation of the active NPs. Indeed, using isolated Co NPs, which had been prepared from complex I, ammonia and hydrogen, led to excellent activity and selectivity in the synthesis of secondary and tertiary amines, including N-methyl amines (Scheme 3). As representative examples, different benzaldehydes were reacted with substituted anilines, and the corresponding N-benzylanilines were obtained in 87-98% yields (Scheme 3; products 41-45). Similarly, reactions of different benzaldehydes with benzylic and aliphatic amines produced the corresponding secondary and tertiary amines selectively (Scheme 3; products 46-55). In addition, aliphatic aldehydes and 4-fluoroaniline underwent reductive amination to give the corresponding secondary amines (Scheme 3, products 56-57). Finally, N,N-dimethylamines were also prepared from three different aldehydes and aqueous dimethylamine (Scheme 3, products 58-60).
Reaction upscaling
In order to demonstrate the synthetic utility of this novel reductive amination protocol, we performed the amination of five carbonyl compounds on a 5-10 g scale (Scheme 4). As expected, all the tested reactions could be successfully upscaled, and the yields of the corresponding primary amines (92-96%) were comparable to those of the small-scale (0.5 mmol) reactions.
General considerations
All substrates were obtained commercially from various chemical companies and their purity was checked before use. Cobalt(II) acetate tetrahydrate (cat. no. 208396-50G), salicylaldehyde, phenylenediamine and other ligand precursors were purchased from Sigma-Aldrich. Silica (silicon(IV) oxide) was likewise obtained commercially.
Scheme 2 Synthesis of branched primary amines from ketones using in situ generated Co-nanoparticles.a a Reaction conditions: 0.5 mmol ketone, 6 mol% complex I (22 mg), 5-7 bar NH3, 45 bar H2, 2.5 mL H2O, 130 °C, 24 h, isolated yields. b Same as 'a' in H2O-THF (1.5:1 ratio). c Using prepared and isolated Co-NPs from complex I (2 mg; 6.5 mol% Co).
X-ray diffraction patterns were recorded with an Empyrean (PANalytical, The Netherlands) diffractometer in the Bragg-Brentano geometry with Co-Kα radiation (40 kV, 30 mA, λ = 0.1789 nm), equipped with a PIXcel3D detector (1D mode) and programmable divergence and diffracted-beam anti-scatter slits. The measurement range was 2θ = 5-105°, with a step size of 0.026°. The identification of crystalline phases was performed using the HighScore Plus software (PANalytical), which includes the PDF-4+ database. TEM images were obtained using an HRTEM TITAN 60-300 with an X-FEG type emission gun, operating at 80 kV. This microscope is equipped with a Cs image corrector and a STEM high-angle annular dark-field (HAADF) detector. The point resolution is 0.06 nm in TEM mode. Elemental mappings were obtained by STEM-energy dispersive X-ray spectroscopy (EDS) with an acquisition time of 20 min. For HRTEM analysis, the powder samples were dispersed in ethanol and ultrasonicated for 5 min; one drop of this solution was placed on a copper grid with a holey carbon film. XPS surface investigation was performed on a PHI 5000 VersaProbe II XPS system (Physical Electronics) with a monochromatic Al-Kα source (15 kV, 50 W) and a photon energy of 1486.7 eV. Dual-beam charge compensation was used for all measurements. All spectra were measured in a vacuum of 1.3 × 10−7 Pa at a room temperature of 21 °C. The analyzed area on each sample was a spot of 200 μm in diameter. Survey spectra were measured with a pass energy of 187.850 eV and a step of 0.8 eV, while the high-resolution spectra used a pass energy of 23.500 eV and a step of 0.2 eV. The spectra were evaluated with the MultiPak (Ulvac-PHI, Inc.) software. All binding energy (BE) values were referenced to the carbon C 1s peak at 284.80 eV. The magnetic properties of the cobalt nanoparticles were analyzed using a Quantum Design Physical Properties Measurement System (PPMS DynaCool) with the vibrating sample magnetometer (VSM) option. The experimental data were corrected for diamagnetism and for the signal of the sample holder. The temperature dependence of the magnetization was recorded in a sweep mode of 1 K min−1 in the zero-field-cooled (ZFC) and field-cooled (FC) measuring regimes. To obtain the ZFC magnetization curve, the sample was first cooled from 300 to 5 K in zero magnetic field and the measurement was carried out on warming from 5 to 300 K under an external magnetic field of 1000 Oe. For the FC magnetization measurements, the sample was cooled from 300 to 5 K in an external magnetic field of 1000 Oe and the measurement was carried out on warming from 5 to 300 K at the same field. Hysteresis loops were measured at room temperature (300 K) and at low temperature (5 K). GC conversions and yields were determined by GC-FID on an HP 6890 with an FID detector (column HP-5, 30 m × 250 μm × 0.25 μm). 1H and 13C NMR data were recorded on Bruker ARX 300 and Bruker ARX 400 spectrometers using DMSO-d6 and CDCl3 as solvents. All catalytic reactions were carried out in 300 mL and 100 mL autoclaves (Parr Instrument Company). In order to avoid unspecific reactions, catalytic reactions were carried out either in glass vials placed inside the autoclave or in autoclaves fitted with glass or Teflon vessels.
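As a small illustration of how the recorded 2θ positions map to lattice spacings with the Co-Kα wavelength given above, a Python sketch of Bragg's law; the example peak positions are approximate values where fcc cobalt reflections would be expected at this wavelength, not peaks reported in this work:

import math

WAVELENGTH_NM = 0.1789  # Co-K-alpha, as stated above

def d_spacing_nm(two_theta_deg):
    # Bragg's law (n = 1): d = lambda / (2 sin(theta))
    theta = math.radians(two_theta_deg / 2.0)
    return WAVELENGTH_NM / (2.0 * math.sin(theta))

# Illustrative peak positions within the measured 5-105 degree range.
for two_theta in (51.8, 60.6, 91.2):
    print(f"2theta = {two_theta:5.1f} deg -> d = {d_spacing_nm(two_theta):.4f} nm")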
Preparation of Co-salen complexes (see Scheme S1†)
(a) Preparation of salen ligand (L1). Salicylaldehyde (4 mmol, in 15 mL ethanol) and 1,2-phenylenediamine (2 mmol, in 10 mL ethanol) were separately dissolved in ethanol, and the ethanolic solution of 1,2-phenylenediamine was then slowly added to the salicylaldehyde solution. The resulting reaction mixture was refluxed at 80 °C for 8 h to obtain a solid compound. The reaction mixture was cooled to room temperature and the product was isolated by filtration, washed twice with 30 mL of cold ethanol and dried in vacuo to give the corresponding salen ligand (L1) in 98% yield. The other salen ligands were prepared by a similar method.
(b) Preparation of Co-salen complex (complex I).13 Co(OAc)2·4H2O (1 g, 4 mmol, in 15 mL ethanol) and N,N′-bis(salicylidene)-1,2-phenylenediamine (ligand L1; 1.28 g, 4 mmol, in 20 mL ethanol) were separately dissolved in ethanol. The cobalt acetate solution was slowly added to the ligand solution, and the resulting red suspension was refluxed for 18 h at 80 °C to give a reddish-brown solid. The solid was filtered, washed with 10 mL of cold ethanol and dried in vacuo to give the corresponding cobalt-salen complex I in 94% yield. The same procedure was applied to prepare the other cobalt-salen complexes using the different salen ligands.
Procedure for reductive amination
(a) Procedure for the synthesis of primary amines. A magnetic stirring bar, 0.5 mmol of the carbonyl compound (aldehyde or ketone) and 22 mg of complex I (for in situ generated Co NPs) or 2 mg of prepared and isolated Co NPs were transferred to an 8 mL glass vial. Then, 2 mL of solvent (water or THF/H2O (1.5:1)) was added and the vial was fitted with a septum, cap and needle. The reaction vials (8 vials with different substrates at a time) were placed into a 300 mL autoclave. The autoclave was flushed twice with hydrogen at 40 bar and then pressurized with 5-7 bar ammonia and 45 bar hydrogen. The autoclave was placed into an aluminium block preheated to 135 °C for aldehydes or 145 °C for ketones, and the reactions were stirred for the required time. During the reaction, the temperature inside the autoclave was measured to be 120 °C for aldehydes and 130 °C for ketones, and this temperature is quoted as the reaction temperature. After completion of the reactions, the autoclave was cooled to room temperature, the remaining ammonia and hydrogen were discharged, and the vials containing the reaction products were removed from the autoclave. The reaction mixtures were filtered and washed thoroughly with ethanol, and the products were analyzed by GC-MS. The crude product was purified by column chromatography using ethyl acetate and n-heptane as the eluent. The corresponding primary amines were converted to their respective hydrochloride salts and characterized by NMR and GC-MS analysis. For conversion to the hydrochloride salt, 0.3-0.5 mL of 7 M HCl in dioxane or 1.5 M HCl in methanol was added to a dioxane solution of the respective amine and stirred at room temperature for 4-5 h; the solvent was then removed and the resulting hydrochloride salt was dried under high vacuum. Yields were determined by GC for selected amines: after completion of the reaction, n-hexadecane (100 μL) was added to the reaction vials as internal standard, and the reaction products were diluted with ethyl acetate, filtered through a plug of silica and analyzed by GC.
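A minimal sketch of the internal-standard quantitation implied by the GC procedure above; the peak areas, the relative response factor, and the helper name are hypothetical placeholders, not values from this work:

def gc_yield_percent(area_product, area_istd, mol_istd, rrf, mol_theoretical):
    # Internal-standard method: mol(product) = (A_product / A_istd) * mol(istd) / RRF
    mol_product = (area_product / area_istd) * mol_istd / rrf
    return 100.0 * mol_product / mol_theoretical

# Hypothetical run: 0.5 mmol substrate; 100 uL n-hexadecane standard
# (~0.34 mmol, from density 0.773 g/mL and MW 226.4 g/mol); assumed RRF 0.8.
print(f"Yield = {gc_yield_percent(1.1e6, 1.5e6, 0.34e-3, 0.8, 0.5e-3):.1f}%")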
(b) Procedure for the synthesis of secondary and tertiary amines. A magnetic stir bar, 0.5 mmol of amine and 0.6 mmol of aldehyde were transferred to an 8 mL glass vial. Then, 2 mg of Co NPs and 2 mL of water as solvent were added. The vial was fitted with a septum, cap and needle. The reaction vials (8 vials with different substrates at a time) were placed into a 300 mL autoclave. The autoclave was flushed twice with hydrogen at 40 bar and then pressurized with 45 bar hydrogen. The autoclave was placed into an aluminium block preheated to 145 °C and the reactions were stirred for the required time. During the reaction, the temperature inside the autoclave was measured to be 130 °C, and this temperature is quoted as the reaction temperature. After completion of the reactions, the autoclave was cooled to room temperature, the remaining hydrogen was discharged, and the vials containing the reaction products were removed from the autoclave. The reaction mixtures were filtered and washed thoroughly with ethanol, and the products were analyzed by GC-MS. The crude product was purified by column chromatography using ethyl acetate and n-heptane as the eluent. The corresponding amines were characterized by NMR and GC-MS analysis.
Isolation of in situ generated cobalt nanoparticles
After completion of the reductive amination of the carbonyl compound in the presence of ammonia and hydrogen, as described in Section S3a,† the in situ generated cobalt nanoparticles were separated from the product solution using the magnetic stir bar. They were then detached from the stir bar, washed with water and ethanol, dried under vacuum and stored in a glass vial.
Recycling of in situ generated cobalt nanoparticles
A magnetic stirring bar and 5 mmol of 4-bromobenzaldehyde were transferred to a glass-fitted 100 mL autoclave, and 20 mL of THF-water (1.5:1) was added, followed by 20 mg of isolated in situ generated Co NPs. The autoclave was flushed with 40 bar hydrogen and then pressurized with 5-7 bar ammonia and 45 bar hydrogen. The autoclave was placed into the heating system and the reaction was allowed to proceed at 120 °C (temperature inside the autoclave) with stirring for 24 h. After completion of the reaction, the autoclave was cooled and the remaining ammonia and hydrogen pressure was discharged. To the reaction products, 200 μL of n-hexadecane was added as internal standard. The catalyst was then separated by centrifugation, and the supernatant containing the reaction products was subjected to GC analysis to determine conversion and yield. The separated catalyst was washed with ethanol, dried under vacuum and used without further purification or reactivation in the next run.
Gram-scale reactions
A Teflon- or glass-fitted 300 mL autoclave was charged with a magnetic stirring bar, 5-10 g of carbonyl compound (aldehyde or ketone) and complex I (amount corresponding to 6 mol%). Then, 75-150 mL of solvent (THF/H2O (1.5:1) for aldehydes, H2O for ketones) was added and the autoclave was flushed twice with hydrogen at 40 bar. Afterwards, it was pressurized with 5-7 bar ammonia and 45 bar hydrogen. The autoclave was placed into an aluminium block preheated to 135 °C for aldehydes or 145 °C for ketones, and the reactions were stirred for the required time.
During the reaction, the temperature inside the autoclave was measured to be 120 °C for aldehydes and 130 °C for ketones, and this temperature is quoted as the reaction temperature. After completion of the reaction, the autoclave was cooled to room temperature and the remaining ammonia and hydrogen were discharged. The reaction mixtures containing the products were filtered and washed thoroughly with ethanol. The reaction products were analysed by GC-MS, and the crude primary amine was purified by column chromatography using ethyl acetate and n-heptane as the eluent.
Preparation of cobalt nanoparticles
(a) Preparation of Co NPs from complex I. A magnetic stir bar and 1.0 g of the cobalt-salen complex (complex I) were transferred to a glass-fitted 100 mL autoclave, and 20 mL of THF-water (1:1) was added. The autoclave was flushed with 40 bar hydrogen and then pressurized with 5-7 bar ammonia and 45 bar hydrogen. The autoclave was placed into the heating system and the reaction was allowed to proceed at 120 °C (temperature inside the autoclave) with stirring for 24 h. After 24 h, the autoclave was removed from the heating system and cooled to room temperature, and the remaining ammonia and hydrogen pressure was discharged. The cobalt nanoparticles formed were separated from the solution using the magnetic stir bar, detached from the stir bar, and washed with water and ethanol. The obtained nanoparticles were finally dried under vacuum and stored in a glass vial.
(b) Preparation of carbon- and silica-supported Co nanoparticles. In a 50 mL round-bottomed flask, the cobalt-salen complex (316.8 mg) and 25 mL of ethanol were refluxed at 80 °C for 15 minutes. To this, 700 mg of Vulcan XC 72R carbon powder or SiO2 was added, and the whole reaction mixture was refluxed at 80 °C for 4-5 h. The reaction mixture was cooled to room temperature and the ethanol was removed in vacuum. The solid sample obtained was dried under high vacuum and then ground to a fine powder. The ground powder was pyrolyzed at 800 °C for 2 hours under an argon atmosphere and cooled to room temperature.
(c) Preparation of other cobalt nanoparticles reported in the literature.11 Method I: 1.0 g of cobalt acetate and 1.5 mL of oleic acid were mixed in 40 mL of diphenyl ether (DPE) and the mixture was heated to 200 °C under an N2 atmosphere. Then, 1.0 mL of TOP (trioctylphosphine) was added and the mixture was heated to 250 °C. Subsequently, the reducing agent, 4.0 g of 1,2-dodecanediol dissolved in 10 mL of DPE at 80 °C, was injected into the reaction mixture, and the whole mixture was held at 250 °C for 30 min until completion of the reduction. The reaction products were cooled to room temperature and ethanol was added to precipitate the nanoparticles. The cobalt nanoparticles formed were separated by centrifugation, dried and stored in a glass vial. Method II: 2 mmol of cobalt acetate and 0.4 mmol of oleic acid were mixed in 1 mL of ethanol. Then, 3 mmol of NaBH4 dissolved in 1 mL of ethanol was slowly added to the above mixture at room temperature with stirring, and the whole mixture was stirred at room temperature for 4 h. The cobalt nanoparticles formed were separated by centrifugation, dried and stored in a glass vial.
Conclusions
In conclusion, we have demonstrated that the in situ formation of cobalt nanoparticles from molecularly defined precursors is straightforward and convenient.
Preparation of cobalt nanoparticles

(a) Preparation of Co NPs from complex I. A magnetic stir bar and 1.0 g of cobalt–salen complex (complex I) were transferred to a glass-fitted 100 mL autoclave. Then, 20 mL of THF–water (1 : 1) was added. The autoclave was flushed with 40 bar hydrogen and then pressurized with 5–7 bar ammonia and 45 bar hydrogen. The autoclave was placed into the heating system and the reaction was allowed to proceed at 120 °C (temperature inside the autoclave) with stirring for 24 h. After 24 h of reaction time, the autoclave was removed from the heating system and cooled to room temperature. The remaining ammonia and hydrogen pressure was discharged. The cobalt nanoparticles formed were separated from the solution using the magnetic stir bar. The nanoparticles were then detached from the magnetic stir bar and washed with water and ethanol. Finally, the obtained nanoparticles were dried under vacuum and stored in a glass vial.

(b) Preparation of carbon- and silica-supported Co nanoparticles. In a 50 mL round-bottomed flask, cobalt–salen complex (316.8 mg) and 25 mL of ethanol were refluxed at 80 °C for 15 minutes. To this, 700 mg of Vulcan XC 72R carbon powder or SiO2 was added, and the whole reaction mixture was refluxed at 80 °C for 4–5 h. The reaction mixture was cooled to room temperature and the ethanol was removed in vacuum. The solid sample obtained was dried under high vacuum, after which it was ground to a fine powder. The ground powder was then pyrolyzed at 800 °C for 2 hours under an argon atmosphere and cooled to room temperature.

(c) Preparation of other cobalt nanoparticles reported in the literature.11 Method I: 1.0 g of cobalt acetate and 1.5 mL of oleic acid were mixed in 40 mL of diphenyl ether (DPE) and the reaction mixture was heated to 200 °C under an N2 atmosphere. Then, 1.0 mL of TOP (trioctylphosphine) was added and the mixture was heated further to 250 °C. Subsequently, the reducing agent, 4.0 g of 1,2-dodecanediol dissolved in 10 mL of DPE at 80 °C, was injected into the reaction mixture. The whole reaction mixture was then held at 250 °C for 30 min until completion of the reduction. The reaction products were cooled to room temperature and ethanol was added to precipitate the nanoparticles. The cobalt nanoparticles formed were separated by centrifugation and finally dried and stored in a glass vial. Method II: 2 mmol of cobalt acetate and 0.4 mmol of oleic acid were mixed in 1 mL of ethanol. Then, 3 mmol of NaBH4 dissolved in 1 mL of ethanol was slowly added to the above mixture at room temperature under stirring. The whole mixture was stirred at room temperature for 4 h. The cobalt nanoparticles formed were separated by centrifugation and finally dried and stored in a glass vial.

Conclusions

In conclusion, we demonstrated that the in situ formation of cobalt nanoparticles from molecularly defined precursors is straightforward and convenient. Such an approach can be used as a versatile tool to prepare selective and active heterogeneous catalysts. In our specific case, Co NPs are formed from cobalt–salen complexes (e.g. cobalt(II)-N,N′-bis(salicylidene)-1,2-phenylenediamine) in the presence of ammonia and hydrogen. Thereby, well-defined ultra-small metallic cobalt and cobalt hydroxide nanoparticles embedded in a cobalt–nitrogen framework are formed. The resulting NPs are stable in the presence of air and water and allow for the preparation of various functionalized and structurally diverse linear and branched benzylic, heterocyclic and aliphatic primary amines as well as secondary and tertiary amines. Moreover, they can be easily separated magnetically, enabling straightforward catalyst recycling and product purification.

Conflicts of interest

There are no conflicts to declare.
Identification In Missing Data Models Represented By Directed Acyclic Graphs

Missing data is a pervasive problem in data analyses, resulting in datasets that contain censored realizations of a target distribution. Many approaches to inference on the target distribution using censored observed data rely on missing data models represented as a factorization with respect to a directed acyclic graph. In this paper we consider the identifiability of the target distribution within this class of models, and show that the most general identification strategies proposed so far retain a significant gap, in that they fail to identify a wide class of identifiable distributions. To address this gap, we propose a new algorithm that significantly generalizes the types of manipulations used in the ID algorithm, developed in the context of causal inference, in order to obtain identification.

INTRODUCTION

Missing data is ubiquitous in applied data analyses, resulting in target distributions that are systematically censored by a missingness process. A common modeling approach assumes data entries are censored in a way that does not depend on the underlying missing data, known as the missing completely at random (MCAR) model, or only depends on observed values in the data, known as the missing at random (MAR) model. These simple models are insufficient, however, in problems where missingness status may depend on underlying values that are themselves censored. This type of missingness is known as missing not at random (MNAR) [9,10,17]. While the underlying target distribution is often not identified from observed data under MNAR, there exist identified MNAR models. These include the permutation model [9], the discrete choice model [15], the no self-censoring model [11,12], the block-sequential MAR model [18], and others. Restrictions defining many, but not all, of these models may be represented by a factorization of the full data law (consisting of both the target distribution and the missingness process) with respect to a directed acyclic graph (DAG). The problem of identification of the target distribution from the observed distribution in missing data DAG models bears many similarities to the problem of identification of interventional distributions from the observed distribution in causal DAG models with hidden variables. This observation prompted recent work [3,4,13] on adapting identification methods from causal inference to identifying target distributions in missing data models. In this paper we show that the most general currently known methods for identification in missing data DAG models retain a significant gap, in the sense that they fail to identify the target distribution in many models where it is identified. We show that methods used to obtain a complete characterization of identification of interventional distributions, via the ID algorithm [14,16], or their simple generalizations [3,4,13], are insufficient on their own for obtaining a similar characterization for missing data problems. We show, via a set of examples, that in order to be complete, an identification algorithm for missing data must recursively simplify the problem by removing sets of variables, rather than single variables, and these must be removed according to a partial order, rather than a total order. Furthermore, the algorithm must be able to handle subproblems where selection bias or hidden variables, or both, are present, even if these complications are missing in the original problem.
We develop a new general algorithm that exploits these observations and significantly narrows the identifiability gap in existing methods. Finally, we show that in certain classes of missing data DAG models, our algorithm takes on a particularly simple formulation to identify the target distribution. Our paper is organized as follows. In section 2, we introduce the necessary preliminaries from the graphical causal inference literature. In section 3 we introduce missing data models represented by DAGs. In section 4, we illustrate, via examples, that existing identification strategies based on simple generalizations of causal inference methods are not sufficient for identification in general, and describe the generalizations needed for identification in these examples. In section 5, we give a general identification algorithm which incorporates the techniques needed to obtain identification in the examples we describe. Section 6 contains our conclusions. We defer longer proofs to the supplement in the interest of space.

PRELIMINARIES

Many techniques useful for identification in missing data contexts were first derived in causal inference. Causal inference is concerned with expressing counterfactual distributions, obtained after the intervention operation, from the observed data distribution, using constraints embedded in a causal model, often represented by a DAG. A DAG is a graph G with a vertex set V connected by directed edges such that there are no directed cycles in the graph. A statistical model of a DAG G is the set of distributions that factorize as p(V) = ∏_{V∈V} p(V | pa_G(V)), where pa_G(V) is the set of parents of V in G. Causal models of a DAG are also sets of distributions, but on counterfactual random variables. Given Y ∈ V and A ⊆ V \ {Y}, a counterfactual variable, or potential outcome, written as Y(a), represents the value of Y in a hypothetical situation where A were set to values a by an intervention operation [6]. Given a set Y, define Y(a) ≡ {Y}(a) ≡ {Y(a) | Y ∈ Y}. The distribution p(Y(a)) is sometimes written as p(Y | do(a)) [6]. A causal parameter is said to be identified in a causal model if it is a function of the observed data distribution p(V). Otherwise the parameter is said to be non-identified. In all causal models of a DAG G that are typically used, all interventional distributions p({V \ A}(a)) are identified by the g-formula [8]:

$$p(\{\mathbf{V} \setminus \mathbf{A}\}(\mathbf{a})) = \prod_{V \in \mathbf{V} \setminus \mathbf{A}} p(V \mid \mathrm{pa}_G(V)) \Big|_{\mathbf{A} = \mathbf{a}}. \qquad (1)$$
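To make the g-formula (1) concrete, the following minimal sketch evaluates it on a toy discrete DAG A → M → Y with an additional edge A → Y; all probability tables are invented for illustration and are not part of the paper.

```python
# g-formula (1) on a toy DAG A -> M -> Y with A -> Y:
#   p(Y(a) = y) = sum_m p(m | a) * p(y | m, a)
# Probability tables below are made-up numbers for illustration.

p_m_given_a = {0: [0.7, 0.3], 1: [0.2, 0.8]}    # p(M = m | A = a)
p_y1_given_ma = {(0, 0): 0.1, (0, 1): 0.4,      # p(Y = 1 | M = m, A = a),
                 (1, 0): 0.5, (1, 1): 0.9}      # keyed by (m, a)

def g_formula(a):
    """p(Y(a) = 1) via the truncated factorization over the toy DAG."""
    return sum(p_m_given_a[a][m] * p_y1_given_ma[(m, a)] for m in (0, 1))

print(g_formula(0), g_formula(1))   # e.g. 0.22 and 0.80
```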
If a causal model contains hidden variables, only data on the observed marginal distribution is available. In this case, not every interventional distribution is identified, and identification theory becomes more complex. A general algorithm for identification of causal effects in this setting was given in [16], and proven complete in [14,1]. Here, we describe a simple reformulation of this algorithm as a truncated nested factorization analogous to the g-formula, phrased in terms of kernels and mixed graphs recursively defined via a fixing operator [7]. As we will see, many of the techniques developed for identification in the presence of hidden variables will need to be employed (and generalized) for missing data, even if no variables are completely hidden. We describe acyclic directed mixed graphs (ADMGs) obtained from a hidden variable DAG by a latent projection operation in section 2.1, and a nested factorization associated with these ADMGs in section 2.2. This factorization is formulated in terms of conditional ADMGs and kernels (described in section 2.2.1), via the fixing operator (described in section 2.2.2). The truncated nested factorization that yields all identifiable functionals for interventional distributions is described in section 2.3.

As a prelude to the rest of the paper, we introduce the following notation for some standard genealogic sets of a graph G with a set of vertices V: the parents pa_G(V) ≡ {W | W → V in G}, the children ch_G(V) ≡ {W | V → W in G}, the descendants de_G(V) ≡ {W | V → ··· → W in G} (with V ∈ de_G(V) by convention), and the non-descendants nd_G(V) ≡ V \ de_G(V). The district of V is defined as the maximal set of vertices containing V that are pairwise connected by a bidirected path (a path containing only ↔ edges). We denote the district of V as dis_G(V), and the set of all districts in G as D(G); by convention, V ∈ dis_G(V) for any V. The Markov blanket mb_G(V) ≡ (dis_G(V) ∪ pa_G(dis_G(V))) \ {V} is defined as the set that gives rise to the following independence relation through m-separation: V ⊥ V \ (mb_G(V) ∪ {V}) | mb_G(V) [7]. The above definitions apply disjunctively to sets of variables S ⊂ V; e.g. pa_G(S) = ∪_{S∈S} pa_G(S).

NESTED FACTORIZATION

The nested factorization of p(V) with respect to an ADMG G(V) is defined on kernel objects derived from p(V) and conditional ADMGs derived from G(V). The derivations are via a fixing operation, which can be causally interpreted as a single application of the g-formula on a single variable (to either a graph or a kernel) to obtain another graph or another kernel.

Conditional Graphs And Kernels

A conditional acyclic directed mixed graph (CADMG) G(V, W) is an ADMG in which the nodes are partitioned into W, representing fixed variables, and V, representing random variables. Only outgoing directed edges may be adjacent to variables in W. A kernel q_V(V | W) is a mapping from values in W to normalized densities over V [2]. In other words, kernels act like conditional distributions in the sense that ∑_{v∈V} q_V(v | w) = 1 for all w ∈ W. Conditioning and marginalization in kernels are defined in the usual way. For A ⊆ V, we define q(A | W) ≡ ∑_{V\A} q(V | W) and q(V \ A | A, W) ≡ q(V | W) / q(A | W).

Fixability And Fixing

A variable V ∈ V is fixable in a CADMG G(V, W) if de_G(V) ∩ dis_G(V) = {V}. Given V fixable in G, the fixing operator φ_V(G) yields a new CADMG G(V \ {V}, W ∪ {V}) where all edges with arrowheads into V are removed, and all other edges in G are kept. Similarly, given a CADMG G(V, W), a kernel q_V(V | W), and V ∈ V fixable in G, the fixing operator φ_V(q_V; G) yields a new kernel q_{V\{V}}(V \ {V} | W ∪ {V}) ≡ q_V(V | W) / q_V(V | mb_G(V), W). Fixing is a probabilistic operation in which we divide a kernel by a conditional kernel. In some cases this operates as a conditioning operation, in other cases as a marginalization operation, and in yet other cases as neither, depending on the structure of the kernel being divided. For a set S ⊆ V in a CADMG G, if all vertices in S can be ordered into a sequence σ_S = ⟨S_1, S_2, ...⟩ such that S_1 is fixable in G, S_2 in φ_{S_1}(G), etc., S is said to be fixable in G, V \ S is said to be reachable in G, and σ_S is said to be valid. A reachable set C is said to be intrinsic if G_C has a single district, where G_C is the induced subgraph in which we keep all vertices in C and edges whose endpoints are in C. We define φ_{σ_S}(G) and φ_{σ_S}(q_V; G) via the usual function composition to yield operators that fix all elements in S in the order given by σ_S. The distribution p(V) is said to obey the nested factorization for an ADMG G if there exists a set of kernels {q_C(C | pa_G(C)) | C is intrinsic in G} such that for every fixable S, and any valid σ_S, φ_{σ_S}(p(V); G) = ∏_{D∈D(φ_{σ_S}(G))} q_D(D | pa_{φ_{σ_S}(G)}(D)). All valid fixing sequences for S yield the same CADMG G(V \ S, S), and if p(V) obeys the nested factorization for G, all valid fixing sequences for S yield the same kernel. As a result, for any valid sequence σ for S, we will redefine the operator φ_σ, for both graphs and kernels, to be φ_S. In addition, it can be shown that the above kernel set is characterized as: {φ_{V\C}(p(V); G) | C is intrinsic in G}. Thus, we can re-express the above nested factorization as stating that for any fixable set S, we have φ_S(p(V); G) = ∏_{D∈D(φ_S(G))} φ_{V\D}(p(V); G). An important result in [7] states that if p(V ∪ H) obeys the factorization for a DAG G with vertex set V ∪ H, then p(V) obeys the nested factorization for the latent projection ADMG G(V).
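The fixability condition de_G(V) ∩ dis_G(V) = {V} is a purely graphical test. A minimal sketch of it (our own illustration, assuming a dict-of-sets encoding of the directed and bidirected parts of an ADMG, not code from the paper) is:

```python
# Fixability check on an ADMG: V is fixable iff its descendants and its
# district intersect only in {V}. Graphs are dicts mapping each vertex to
# a set of neighbours (directed successors / bidirected neighbours).

def descendants(di_edges, v):
    """All vertices reachable from v via directed edges (v included)."""
    seen, stack = {v}, [v]
    while stack:
        u = stack.pop()
        for w in di_edges.get(u, ()):
            if w not in seen:
                seen.add(w); stack.append(w)
    return seen

def district(bi_edges, v):
    """Maximal set containing v pairwise connected by bidirected paths."""
    seen, stack = {v}, [v]
    while stack:
        u = stack.pop()
        for w in bi_edges.get(u, ()):
            if w not in seen:
                seen.add(w); stack.append(w)
    return seen

def fixable(di_edges, bi_edges, v):
    return descendants(di_edges, v) & district(bi_edges, v) == {v}

# Example ADMG: A -> M -> Y with A <-> Y (bidirected edges stored both ways)
di = {"A": {"M"}, "M": {"Y"}}
bi = {"A": {"Y"}, "Y": {"A"}}
print(fixable(di, bi, "M"), fixable(di, bi, "Y"), fixable(di, bi, "A"))
# -> True True False
```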
IDENTIFICATION AS A TRUNCATED NESTED FACTORIZATION

For any disjoint subsets Y, A of V in a latent projection G(V), define Y* ≡ an_{G(V\A)}(Y), the ancestors of Y in the graph where vertices in A are removed. Then p(Y(a)) is identified if and only if every district D of the induced subgraph G_{Y*} is intrinsic in G(V). If identification holds, we have:

$$p(\mathbf{Y}(\mathbf{a})) = \sum_{\mathbf{Y}^* \setminus \mathbf{Y}} \; \prod_{D \in \mathcal{D}(G_{\mathbf{Y}^*})} \phi_{\mathbf{V} \setminus D}\big(p(\mathbf{V}); G\big)\Big|_{\mathbf{A} = \mathbf{a}}.$$

In other words, p(Y(a)) is identified if and only if it can be expressed as a factorization, where every piece corresponds to a kernel associated with a set intrinsic in G(V). Moreover, no term in this factorization contains elements of A as random variables, just as was the case in (1). The above provides a concise formulation of the ID algorithm [16,14] in terms of the nested Markov model which contains the causal model of the observed distribution. If Y = {Y} and A = pa_G(Y), then the above truncated factorization has a simpler form: p(Y(a)) = φ_{V\{Y}}(p(V); G)|_{pa_G(Y)=a}. In words, to identify the interventional distribution of Y where all parents (direct causes) A of Y are set to values a, we must find a total ordering on the variables other than Y (i.e., on V \ {Y}) that forms a valid fixing sequence. If such an ordering exists, the identifying functional is found from p(V) by applying the fixing operator to each variable in succession, in accordance with this ordering. Fig. 1 shows the identification of the functional p(Y(a)) following a total ordering of fixing M, B, A. Before generalizing these tools to the identification of missing data models, we first introduce the representation of these models using DAGs.

Figure 1: Identification of p(Y(a)) by following a total order of valid fixing operations.

MISSING DATA MODELS OF A DAG

Missing data models are sets of full data laws (distributions) p(X^(1), O, R) composed of the target laws p(X^(1), O) and the nuisance laws p(R | X^(1), O) defining the missingness processes. The target law is over a set X^(1) ≡ {X^(1)_1, ..., X^(1)_k} of random variables that are potentially missing, and a set O ≡ {O_1, ..., O_m} of random variables that are always observed. The nuisance law defines the behavior of the missingness indicators R ≡ {R_1, ..., R_k} given values of missing and observed variables. Each missing variable X_i is defined as X_i ≡ X^(1)_i if R_i = 1, and as X_i ≡ "?" if R_i = 0 (this is the missing data analogue of the consistency property in causal inference). As a result, the observed data law in missing data problems is p(R, O, X), while some function of the target law p(X^(1), O), as its name implies, is the target of inference. The goal in missing data problems is to estimate the latter from the former. By the chain rule of probability,

$$p(\mathbf{X}^{(1)}, \mathbf{O}) = \frac{p(\mathbf{R} = 1, \mathbf{X}, \mathbf{O})}{p(\mathbf{R} = 1 \mid \mathbf{X}^{(1)}, \mathbf{O})}. \qquad (2)$$

In other words, p(X^(1), O) is not identified from the observed data law unless sufficient restrictions are placed on the full data law defining the missing data model. Many popular missing data models may be represented as a factorization of the full data law with respect to a DAG [4]. These include the permutation model, the monotone MAR model, the block-sequential MAR model, and certain submodels of the no self-censoring model [9,12,18].
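Equation (2) is the usual inverse-probability-weighting identity; the following sketch demonstrates it numerically on synthetic MAR-style data, where the propensity p(R_1 = 1 | X_2) depends only on a fully observed variable. The data-generating process is invented for illustration and is not from the paper.

```python
# Recovering E[X1] by inverse probability weighting when p(R1 = 1 | X2)
# is identified (a MAR-style toy with synthetic data).
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
x2 = rng.normal(size=n)                    # always observed
x1 = x2 + rng.normal(size=n)               # subject to missingness
p_r1 = 1.0 / (1.0 + np.exp(-x2))           # missingness depends on X2 only
r1 = rng.random(n) < p_r1                  # R1 = 1 means X1 observed

# The propensity is known here; in practice it would be estimated.
ipw_mean = np.mean(np.where(r1, x1 / p_r1, 0.0))
print(ipw_mean, x1.mean())                 # both approximately 0
```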
Given a set of full data laws p(X^(1), O, R), a DAG G with the following properties may be used to represent a missing data model: G has a vertex set X^(1), O, R, X, and each proxy X_i ∈ X has exactly two parents in G, namely R_i and X^(1)_i, and no children (these edges encode the deterministic definition of X_i given above). Given a DAG G with the above properties, a missing data model associated with G is the set of distributions p(X^(1), O, R) that can be written as

$$p(\mathbf{X}^{(1)}, \mathbf{O}, \mathbf{R}, \mathbf{X}) = \prod_{V \in \mathbf{X}^{(1)} \cup \mathbf{O} \cup \mathbf{R}} p(V \mid \mathrm{pa}_G(V)) \prod_{X_i \in \mathbf{X}} p(X_i \mid R_i, X^{(1)}_i), \qquad (3)$$

where the factors of the form p(X_i | R_i, X^(1)_i) are deterministic, to remain consistent with the definition of X_i. Note that by standard results on DAG models, conditional independences in p(X^(1), O, R) may be read off from G by the d-separation criterion [5].

EXAMPLES OF IDENTIFIED MODELS

In this section, we describe a set of examples of missing data models that factorize as in (3). In these examples, we show how identification may be obtained by appropriately generalizing existing techniques. In these discussions, we concentrate on obtaining identification of the nuisance law p(R | X^(1), O) evaluated at R = 1, as this suffices to identify the target law p(X^(1), O) by (2). In the course of describing these examples, we will obtain intermediate graphs and kernels. In these graphs, a lower case letter (e.g. v) indicates that the variable V is evaluated at v (for R_i, at r_i = 1). A square vertex indicates that V has been fixed. Drawing the vertex normally with lower case indicates that V was conditioned on (creating selection bias in the subproblem). For brevity, we use 1_{R_i} to denote {R_i = 1}.

We first consider the block-sequential MAR model [18], shown in Fig. 2 for three variables. The target law is identified by applying the (valid) fixing sequence R_1, R_2, R_3 via the operator φ to G and p(R, X). We proceed as follows. Fixing R_1, i.e., dividing by p(R_1 | pa_G(R_1)), yields the CADMG G_1 shown in Fig. 2(b) and a corresponding kernel q_1(X, R_2, R_3 | 1_{R_1}). Thus, in the new subproblem represented by G_1 and q_1, R_2 is fixable; fixing it yields the CADMG shown in Fig. 2(c) and a kernel q_2(X, R_3 | 1_{R_1,R_2}), from which R_3 may be fixed in turn. The identifying functional for the target law only involves monotone cases (cases where R_i = 0 implies R_{i+1} = 0), just as would be the case under the monotone MAR model, although this model does not assume monotonicity and is not MAR.

Figure 2: (a), (b), (c) are intermediate graphs obtained in identification of a block-sequential model by fixing R_1, R_2, R_3; (d) is an MNAR model that is identifiable by fixing all Rs in parallel.

In this simple example, identification may be achieved purely by causal inference methods, by treating variables in R as treatments and finding a valid fixing sequence on them. In this example, each R_i in the sequence is fixable given that the previous variables are fixable, since all parents of each R_i become observed at the time it is fixed. Following a total order of fixing is not always sufficient to identify the target law, as noted in [4,3,13]. Consider the model represented by the DAG in Fig. 2(d). For any R_i in this model, say R_1, we have, by d-separation, that p(R_1 | pa_G(R_1)) = p(R_1 | X_2, X_3, 1_{R_2,R_3}), which is identified, since conditioning on R_2 = R_3 = 1 renders the counterfactual parents of R_1 observed. However, if we were to fix R_1 in p(X, R), we would obtain a kernel q_1(X, R_2, R_3 | 1_{R_1}) where selection bias on R_2 and R_3 is introduced. The fact that q_1 is not available at all levels of R_2 and R_3 prevents us from sequentially obtaining p(R_i | pa_G(R_i)), for R_i = R_2, R_3, due to our inability to sum out those variables from q_1. The model in Fig. 2(d) allows identification of the target law in another way, however. This follows from the fact that p(R_i | pa_G(R_i)) is identified for each R_i by exploiting conditional independences in p(X, R) displayed by Fig. 2(d). Since each such conditional is identified, the nuisance law is identified, which means the target law is also identified, as long as we fix R_1, R_2, R_3 in parallel (as in (2)) rather than sequentially.
In other words, the model is identified, but no total order on fixing operations suffices for identification. A general algorithm that aimed to fix indicators in R in parallel, while potentially exploiting causal inference fixing operations to identify each p(R_i | pa_G(R_i)), was proposed in [13]. Our subsequent examples show that this algorithm is insufficient to obtain identification of the target law in general, and thus is incomplete.

Consider the DAG in Fig. 3. Since R_2 is a child of R_3 and X^(1)_2, R_3 remains dependent on its counterfactual parent X^(1)_2 by d-separation in any kernel (including the original distribution) where R_2 is not fixed. Thus, any total order on fixing operations of elements in R must start with R_1 or R_2. Fixing either of these variables entails dividing p(X, R) by some factor p(R_i | pa_G(R_i)), which is identified only as a conditional evaluated at the observed level of another indicator (e.g. p(R_1 | X_1, 1_{R_2})). This division entails inducing selection bias, in the subsequent kernel q_1, on a variable not yet fixed (either R_3 or R_1). Thus, no total order on fixing operations works to identify the target law in this model. At the same time, attempting to fix all R variables in parallel would fail as well, since we cannot identify p(R_3 | X^(1)_2), either in the original distribution or in any kernel obtained by the standard causal inference operations described in [13]. In particular, in any such kernel or distribution, R_3 remains dependent on R_2 given X_2. However, the target law in this model is identified by following a partial order ≺ of fixing operations. In this partial order, R_1 is incomparable with R_2, and R_2 ≺ R_3. This results in an identification strategy where we fix each variable only given that variables earlier than it in the partial order are fixed. That is, the distributions p(R_1 | pa_G(R_1)) and p(R_2 | pa_G(R_2)) are obtained directly in the original distribution, without fixing anything. The distribution p(R_3 | pa_G(R_3)), on the other hand, is obtained in the kernel q_1(X_1, X_2, X_3, 1_{R_1}, R_3 | 1_{R_2}) = p(X, R) / p(R_2 | X_1, 1_{R_1}, R_3), after R_2 (the variable earlier than R_3 in the partial order) is fixed. The graph corresponding to this kernel is shown in Fig. 3(b). Note that in this graph X_2 is observed, and there is selection bias on R_1. However, it easily follows by d-separation that R_3 is independent of R_1 in this kernel. It can thus be shown that p(R_3 | pa_G(R_3)) is identified from q_1; since all the conditionals p(R_i | pa_G(R_i)) are identified, so is the target law in this model, by (2).

Next, we consider the model in Fig. 4. Here, p(R_1 | X^(1)_2) poses a problem. In order to identify this distribution, we either require that R_1 is conditionally independent of R_2, possibly after some fixing operations, or we must be able to render X^(1)_2 observable by fixing R_2 in some way. Neither seems to be possible in the problem as stated. In particular, fixing R_2 via dividing by p(R_2 | X_3, R_1) will necessarily induce selection bias on R_1, which will prevent identification of p(R_1 | X^(1)_2) in the resulting kernel. However, we can circumvent the difficulty by treating X^(1)_1 as an unobserved variable U_1, and attempting the problem in the resulting (hidden variable) DAG shown in Fig. 4(b), and its latent projection ADMG G shown in Fig. 4(c), where U_1 is "projected out." In the resulting problem, we can fix variables according to a partial order ≺ where R_2 and R_3 are incomparable, R_2 ≺ R_1, and R_3 ≺ R_1. Thus, we are able to fix R_2 and R_3 in parallel by dividing by p(R_2 | mb_G(R_2)) = p(R_2 | X_1, R_1, X_3, 1_{R_3}) and p(R_3 | R_1, X_2) = p(R_3 | R_1, X_2, 1_{R_2}), leading to a kernel q̃_1(X_1, X_3, R_1 | 1_{R_2,R_3}), and the graph φ_{≺R_1}(G) shown in Fig. 4(d),
where the notation φ_{≺R_1} means "fix all necessary elements that occur earlier than R_1 in the partial order, in a way consistent with that partial order." In this example, this means fixing R_2 and R_3 in parallel. We will describe how fixing operates under general fixing schedules given by a partial order later in the paper. In the kernel q̃_1, the parent of R_1 is rendered observed, meaning that p(R_1 | X_2) is identified as q̃_1(R_1 | X_2, 1_{R_2,R_3}). This implies the target law is identified in this model.

Figure 4: A DAG where selection bias on R_1 is avoidable by following a partial order fixing schedule on an ADMG induced by latent projecting out X^(1)_1.

In general, to identify p(R_i | pa_G(R_i)), we may need to use separate partial fixing orders on different sets of variables for different R_i ∈ R. In addition, the fact that fixing introduces selection bias sometimes results in having to divide by a kernel where a set of variables are random, something that was never necessary in causal inference problems. In general, for a given R_i, the goal of a fixing schedule is to arrive at a kernel where an independence exists allowing us to identify p(R_i | pa_G(R_i)), even if some elements of pa_G(R_i) are in X^(1) in the original problem. This fixing must be given by a partial order, and sometimes on sets of variables. In addition, some elements of X^(1) must be treated as hidden variables. These complications are necessary in general to avoid creating selection bias in subproblems, and ultimately to identify the nuisance law. The following example is a good illustration.

Consider the graph in Fig. 5(a). For R_1 and R_3, the fixing schedules are empty, and we immediately obtain their propensity distributions, evaluated at the observed levels, directly from p(X, R). For R_2, the partial order is R_3 ≺ R_1, in a graph where we treat X^(1)_2 as a hidden variable U_2. This yields p(R_2 | pa_G(R_2)) from the kernel obtained after fixing R_3 and then R_1. In order to obtain the propensity score for R_4, we must either render X^(1)_1 observable through fixing R_1, or perform valid fixing operations until we obtain a kernel in which R_4 is conditionally independent of R_1 given its parent X^(1)_1. However, there exists no valid partial order on singleton elements of R: all such partial orders induce selection bias on variables higher in the order, preventing the identification of the required distribution for R_4. For example, choosing a partial fixing order of R_1 ≺ R_3, where we treat X^(1)_2 and X^(1)_4 as hidden variables, results in selection bias on R_3 as soon as we fix R_1. Other partial orders fail similarly. However, the following approach is possible in the graph in which we treat X^(1)_2 and X^(1)_4 as hidden variables. R_1 and R_3 lie in the same district in the resulting latent projection ADMG, shown in Fig. 5(b). Moreover, the set {R_1, R_3} is closed under descendants in the district in Fig. 5(b). As a result, R_1 and R_3 can essentially be viewed as a single vertex from the point of view of fixing. Indeed, we may choose a partial order {R_1, R_3} ≺ R_2, where we fix R_1 and R_3 as a set. The fixing operation on the set is possible since the joint conditional of {R_1, R_3} given its Markov blanket (a function of X_3 and X_4) is itself a function of the observed data law p(X, R). Dividing by it yields the kernel q_1(X_3, X_4, R_2, R_4 | 1_{R_1,R_3}), from which we then obtain, by fixing R_2 via the conditional q_1(R_2 | X_3, X_4, R_4, 1_{R_1,R_3}), a kernel in which the propensity score for R_4 is identified.
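Operationally, executing a fixing schedule given by a partial order is ordinary topological scheduling. The sketch below (our own illustration, not the authors' code) uses Python's graphlib to emit batches of mutually incomparable fixings, mirroring the R_2 ≺ R_1, R_3 ≺ R_1 pattern of the Fig. 4 example:

```python
# Turning a partial order of fixing operations into a concrete schedule
# via topological sorting. The precedence edges are illustrative.
from graphlib import TopologicalSorter

precedence = {            # key comes after every vertex in its value set
    "R1": {"R2", "R3"},   # fix R2 and R3 (in parallel) before R1
    "R2": set(),
    "R3": set(),
}

ts = TopologicalSorter(precedence)
ts.prepare()
while ts.is_active():
    batch = tuple(ts.get_ready())   # a set of mutually incomparable fixings
    print("fix in parallel:", batch)
    ts.done(*batch)
# -> fix in parallel: ('R2', 'R3')
# -> fix in parallel: ('R1',)
```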
Our final example demonstrates that in order to identify the target law, we may potentially need to fix variables outside R, including variables in X^(1) that become observed after fixing or conditioning on some elements of R.

Figure 6: A DAG where variables besides Rs are required to be fixed.

Fig. 6(a) contains a generalization of the model considered in [13], where O_3 is fully observed. In this model, the distributions for R_4 and R_1 are identified immediately, while identification of R_2 requires a partial order beginning R_4 ≺ X, in which a variable outside R, rendered usable once R_4 is fixed, is itself fixed before R_2.

A NEW IDENTIFICATION ALGORITHM

In order to identify the target law in the examples discussed in the previous section, we had to consider situations where some variables were viewed as hidden and marginalized out, and others were conditioned on, introducing selection bias. In addition, fixing operations were performed according to a partial, rather than a total, order, as was the case in causal inference problems. Finally, we sometimes fixed sets of variables jointly, rather than individual variables. We now introduce relevant definitions that allow us to formulate a general identification algorithm that takes advantage of all these techniques.

Let V be a set of random variables (and corresponding vertices) consisting of observed variables O, R, X, missing variables X^(1), and selected variables S. Let W be a set of fixed observed variables. The following definitions apply to a latent projection in which some subset of X^(1) is treated as hidden, and to a corresponding kernel on the remaining variables. A set Z contained in a single district D ∈ D(G) is said to be fixable in G if (i) de_G(Z) ∩ D ⊆ Z; (ii) Z ∩ S = ∅; and (iii) Z ⊥ (S ∪ R_Z) \ mb_G(Z) | mb_G(Z), where R_Z are the missingness indicators of the counterfactual parents of Z. In words, these conditions apply to some Z that is a subset of its own district (which is trivial when the set Z is a singleton). The conditions, in the listed order, require that Z is closed under descendants in the district, should not contain any selected variables, and should be independent of both the selected variables S and the missingness indicators R_Z of the corresponding counterfactual parents given the Markov blanket of Z, respectively. Consider the graph in Fig. 5(b): the set {R_1, R_3} is closed under descendants in its district, and both S and R_Z are empty sets. A set Z spanning multiple elements in D(G) is said to be fixable if it can be partitioned into a collection of elements Z, such that each Z is a subset of a single district in D(G) and is fixable. Given an ordering ≺ on vertices V ∪ W topological in G and Z fixable in G, define φ_Z(q; G) as

$$\phi_{\mathbf{Z}}(q; G) \equiv \frac{q(\mathbf{V} \mid \mathbf{W})}{\prod_{Z \in \mathbf{Z}} q\big(Z \mid \operatorname{mb}_G(Z; \{\preceq Z\})\big)\Big|_{(\mathbf{R} \cap \mathbf{Z}) \cup \mathbf{R}_{\mathbf{Z}} = 1}}, \qquad (4)$$

where mb_G(V; S) ≡ mb_{G_S}(V) and {⪯Z} is the set of all elements earlier than Z in the order ≺ (this includes Z itself). Given a set Z ⊆ R ∪ O ∪ X^(1) and an equivalence relation ∼, let Z/∼ be the partition of Z into equivalence classes according to ∼. Define a fixing schedule for Z/∼ to be a partial order ⊳ on Z/∼. For each Z ∈ Z/∼, define {⊴Z} to be the set of elements in Z/∼ earlier than Z in the order ⊳ (including Z itself), and {⊳Z} ≡ {⊴Z} \ Z. Define ⊴_Z and ⊳_Z to be the restrictions of ⊳ to {⊴Z} and {⊳Z}, respectively. Both restrictions, ⊴_Z and ⊳_Z, are also partial orders. We inductively define a valid fixing schedule (a schedule where fixing operations can be successfully implemented), along with the fixing operator on valid schedules. The fixing operator will implement fixing as in (4) on Z within an intermediate problem represented by a CADMG where some X_Z ⊆ X^(1) will have become observed after fixing {⊳Z}, with X^(1) \ X_Z treated as latent variables, and a kernel associated with this CADMG defined on the observed subset of variables. We define the CADMG φ_{⊳Z}(G) to be the graph obtained from G by
• removing all edges with arrowheads into ∪_{Y∈{⊳Z}} Y, and
• treating elements of X^(1) \ X_Z as hidden variables.
We say the schedule ⊳ is valid if every Z ∈ Z/∼ is fixable in φ_{⊳Z}(G), where φ_{⊳Z}(q; G) ≡ q(V | W) / ∏_{Y∈{⊳Z}} q_Y, and the q_Y are defined inductively as the denominator of (4) for Y, φ_{⊳Y}(G) and φ_{⊳Y}(q; G). We have the following claims.
Proposition 1. Given a missing data DAG G(X^(1), O, R, X), the target law is identified if, for every R_i ∈ R, there exist (i) a set Z ⊆ R ∪ O ∪ X^(1) with R_i ∈ Z, (ii) an equivalence relation ∼ on Z, (iii) sets X_Z ⊆ X^(1) for each Z ∈ Z/∼, (iv) such that X^(1) ∩ pa_G(R_i) remains observable for the class {R_i}, and (v) a valid fixing schedule ⊳ for Z/∼ in G such that {R_i} is fixable in φ_{⊳{R_i}}(G). Moreover, p(R_i | pa_G(R_i))|_{pa_G(R_i)∩R=1} is equal to q_{{R_i}}, defined inductively as the denominator of (4) for {R_i}, φ_{⊳{R_i}}(G) and φ_{⊳{R_i}}(q; G).

Proposition 1 implies that p(R_i | pa_G(R_i)) is identified if we can find a set of variables that can be fixed according to a partial order (possibly through set fixing) within subproblems where certain variables are hidden. At the end of the fixing schedule, we require that R_i itself is fixable given its Markov blanket in the original DAG. We encourage the reader to view the example provided in Appendix B for a demonstration of valid fixing schedules that may be chosen by Proposition 1. In addition, in special classes of models, the full law, rather than just the target law, is identified.

Proposition 2. The full law is identified if, for every R_i ∈ R, all conditions in Proposition 1 (i–v) are met, and also, for each Z ∈ Z/∼, X_Z does not contain any elements in {X^(1)_j | R_j ∈ pa_G(R_i)}. Moreover, p(R_i | pa_G(R_i)) is equal to q_{{R_i}}, defined inductively as the denominator of (4).

Proof. Under conditions (i–v) in Proposition 1, we are guaranteed to identify the target law and obtain p(R_i | pa_G(R_i)) where some R_j ∈ pa_G(R_i) may be evaluated only at R_j = 1. Under the additional restriction stated above, all R_j ∈ pa_G(R_i) can be evaluated at all levels.

Proposition 2 always fails if a special collider structure X^(1)_j → R_i ← R_j, which we call the colluder, exists in G. The following Lemma implies that colluders always imply the full law is not identified.

Lemma 1. In a DAG G(X^(1), R, O, X), if there exists a colluder structure X^(1)_j → R_i ← R_j, then the full law p(X^(1), O, R) is not identified.

Proof. Follows by providing two different full laws that agree on the observed law on a DAG with 2 counterfactual random variables (Appendix C).

This result holds for an arbitrary DAG representing a missing data model that contains the colluder structure mentioned above.
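Lemma 1 gives a purely structural criterion for non-identifiability of the full law, so it can be checked mechanically. The sketch below is our own illustration, with an assumed naming convention in which X1_j denotes X^(1)_j; it simply scans a parent map for colluder triples:

```python
# Detect the colluder X^(1)_j -> R_i <- R_j in a missing data DAG.
# The graph is encoded as a dict mapping each vertex to its parent set.

def colluders(parents):
    """Yield (X^(1)_j, R_i, R_j) triples forming a colluder."""
    for ri, pa in parents.items():
        if not ri.startswith("R"):
            continue
        for p in pa:
            if p.startswith("X1_"):
                j = p.split("_")[1]
                rj = f"R{j}"
                if rj in pa and rj != ri:
                    yield (p, ri, rj)

# Example: R2 has parents X^(1)_1 and R1 -> colluder on R2
pa = {"R1": {"X1_2"}, "R2": {"X1_1", "R1"}}
print(list(colluders(pa)))   # [('X1_1', 'R2', 'R1')]
```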
Propositions 1 and 2 do not provide a computationally efficient search procedure for a valid fixing schedule ⊳ that permits identification of p(R_i | pa_G(R_i)) for a particular R_i ∈ R. Nevertheless, the following Lemma (restated and proved in the Appendix) shows how to easily obtain identification of the target law in a restricted class of missing data DAGs.

DISCUSSION AND CONCLUSION

In this paper we addressed the significant gap present in identification theory for missing data models representable as DAGs. We showed, by examples, that straightforward applications of the identification machinery from causal inference with hidden variables do not suffice for identification in missing data, and discussed the generalizations required to make it suitable for this task. These generalizations included fixing (possibly sets of) variables along a partial order and avoiding selection bias by introducing hidden variables into the problem, even though they were not present in the initial problem statement. Proposition 1 gives a characterization of how to utilize these generalized procedures to obtain identification of the target law, while Proposition 2 gives a similar characterization for the full law. While neither of these propositions alludes to a computationally efficient algorithm to obtain identification in general, Lemma 2 provides such a procedure for a special class of missing data models where the partial order of fixing operations required for each R is easy to determine. Providing a computationally efficient search procedure for identification in all DAG models of missing data, and questions regarding the completeness of our proposed algorithm, are left for future work.

APPENDIX A. Proofs

Proposition 1 (restated). Given a DAG G(X^(1), O, R, X) and, for each R_i ∈ R, sets and schedules satisfying conditions (i)–(v), the target law is identified, and p(R_i | pa_G(R_i))|_{pa_G(R_i)∩R=1} equals q_{{R_i}}, defined inductively as the denominator of (4) for {R_i}, φ_{⊳{R_i}}(G) and φ_{⊳{R_i}}(q; G).

Proof. We first outline the essential argument made in this proof. We will reformulate the process of fixing according to a partial order in a missing data problem as a problem of ordinary fixing based on a total order in a causal inference problem where previously missing variables are in fact observed. If we are able to show this, we can invoke results from [7] that guarantee that we obtain the desired conditional for each R_i. Consider Z ∈ Z/∼, and define X^(1)_{{⊴Z}} ≡ ∪_{Y∈{⊴Z}} X_Y, and similarly R_{{⊴Z}}. We first note that any total ordering ≺ on {⊳Z} consistent with ⊳ yields a valid fixing sequence on sets in the graph where the variables X^(1)_{{⊴Z}} are observed. The total ordering ≺ can be refined to operate on single variables, where each set Z is fixed as singletons following a topological total order in which variables with no children in Z are fixed first. Such a total order is also valid; this follows from the validity of ⊳ and the fact that at each step of the fixing operation in the total order, the Markov blanket of each Z contains only observed variables; hence no selection bias is induced on any singleton variables in {≻Z}. We now show, by induction on the structure of the partial order ⊳, that for a particular Z ∈ Z/∼, q_Z is equal to

$$\prod_{Z \in \mathbf{Z}} q\big(Z \mid \operatorname{mb}_G(Z; \operatorname{an}_G(D_Z) \cap \{\preceq_G Z\}), R_Z\big)\Big|_{(\mathbf{R}\cap\mathbf{Z}) \cup R_{\mathbf{Z}} = 1}, \qquad (6)$$

obtained from a kernel φ_{⊳Z}(p(O, X^(1)_{{⊴Z}}, R, X); G) and the CADMG φ_{⊳Z}(G). For any ⊳-smallest Z, Z is independent of R_{{⊴Z}} given its Markov blanket; therefore treating X^(1)_{{⊴Z}} as observed results in the same kernel as q_Z. We now show that the above is also true for any Z ∈ Z/∼. Assume the inductive hypothesis holds for all Y ∈ {⊳Z}. Since ⊳ is valid, we obtain q_Z by applying (4) to φ_{⊳Z}(q; G), (7) where the q_Y are defined by the inductive hypothesis, and φ_Z is defined via the denominator of (4) applied to the kernel q(V \ (X^(1) \ X_{{⊴Z}}) | W). (8) Consider the equivalent functional in the model where we observe X^(1)_{{⊴Z}}, with q†(V \ (X^(1) \ X_{{⊴Z}}) | W) ≡ p(O, X, X^(1)_{{⊴Z}}, R). (9) The only difference between (8) and (9), for the purposes of the denominator, is the variables in R_{{⊴Z}} \ R_Z. But the denominator is independent of these variables, by assumption. Thus, it follows that fixing on a valid partial order with missing data and fixing on a total order consistent with this partial order, as in causal inference, yield equivalent kernels. The conclusion follows by Lemma 55 in [7].

Lemma 2 (restated). Under the assumption stated in the Lemma, for every R_i ∈ R, the fixing schedule ⊳ for {{R_j} | R_j ∈ R ∩ de_G(R_i)}, given by the partial order induced by the ancestrality relation on G_{R∩de_G(R_i)}, is valid in G(X^(1), R, O, X), taking each X_{R_j} ≡ X^(1).

Proof. In order to prove that the target law is identified, we demonstrate that conditions (i–v) in Proposition 1 are satisfied for each R_i. Conditions (i) and (ii) are trivially satisfied, as we choose to fix Z ⊆ R and we choose no equivalence relation; thus Z/∼ consists of singleton sets of Rs. Condition (iii) is also trivial, as each X_Y ≡ X^(1) for Y earlier in the partial order. In the proposed order we never fix elements in X^(1), and we propose to keep elements in X^(1) ∩ pa_G(R_j) for every R_j ∈ Z; in particular, this also includes R_i, satisfying condition (iv). Finally, we show that the proposed schedule ⊳ is valid by showing that each Z ∈ Z/∼ is fixable. There are 3 conditions for an element Z to be fixable, as mentioned in section 5. We go through each of these conditions and demonstrate that each Z in Z/∼ is a valid fixing in φ_{⊳Z}(G), where ⊳ is the proposed fixing schedule above.
In the proposed schedule, each Z is a singleton R_j ∈ Z/∼ that we are trying to fix in a graph φ_{⊳R_j}(G). Since X^(1)_{R_j} = X^(1), φ_{⊳R_j}(G) is a CDAG. Thus, D(φ_{⊳R_j}(G)) consists of singleton vertices only; in particular, D_{R_j} = {R_j}. Further, by definition of the schedule, it must be that de_{φ_{⊳R_j}(G)}(R_j) = {R_j}. This satisfies condition (i). For condition (ii), we note that S ⊆ nd_{φ_{⊳R_j}(G)}(R_j); otherwise, S would contain some R_k ∈ de_G(R_j), which should have been fixed prior to R_j by the proposed partial order. Thus, it follows that S ∩ {R_j} = ∅. Finally, following the partial order, and under the assumption stated in the lemma, R_{{R_j}} ⊆ {⊳R_j}. We have also proved that S ⊆ nd_{φ_{⊳R_j}(G)}(R_j). Therefore, R_j ⊥ (S ∪ R_{{R_j}}) \ mb_{φ_{⊳R_j}(G)}(R_j) | mb_{φ_{⊳R_j}(G)}(R_j). Since each Z is fixable, the proposed partial order ⊳ for each R_i is valid. Therefore, all five conditions in Proposition 1 are satisfied, concluding that the target law is identified.

B. An example to illustrate the algorithm

We walk the reader through identification of the target law for the missing data DAG shown in Fig. 7(a), in order to demonstrate the full generality of our missing data ID algorithm. As a reminder, the target law is identified by (2) if we are able to identify p(R_i | pa_G(R_i))|_{R=1} for each R_i ∈ R. The identification of these conditional densities is shown in equations (i) through (viii). For a clearer presentation of this example, we switch to one-column format.

Figure 7: (a) A complex missing data DAG used to illustrate the general techniques used in our algorithm; (b–e) the corresponding fixing schedules of the Rs.

We start with {R_3, R_5, R_6, R_7}. The fixing schedules for these are empty, and we obtain the corresponding conditional densities immediately from the original distribution. For R_1, we choose Z = {R_1, R_5, R_6}, and no equivalence relations. Thus, Z/∼ = {{R_1}, {R_5}, {R_6}}. The fixing schedule ⊳ is the partial order shown in Fig. 7(b), where R_5 and R_6 are incomparable, and R_5 ≺ R_1, R_6 ≺ R_1. Starting with the original G in Fig. 7(a), fixing R_5 and R_6 in parallel yields the kernel q_{r_1}(X \ {X_5, X_6}, X^(1)_5, X^(1)_6, · | 1_{R_5,R_6}), obtained by dividing p(X, R) by the two propensity scores in the denominator, which are identified using (ii) and (iii). The CADMG corresponding to this fixing operation is shown in Fig. 8(a).
USE OF UAS IN A HIGH MOUNTAIN LANDSCAPE: THE CASE OF GRAN SOMMETTA ROCK GLACIER (AO)

Photogrammetry has long been used to periodically control the evolution of landslides, both from aerial images and from the ground. Landslide control and monitoring systems face a large variety of cases and situations: in hardly accessible environments, like glacial areas and high-mountain locations, it is not simple to find a survey method and a measurement control system capable of reliably assessing, at low cost, the expected displacement and its accuracy. For this reason, the behaviour of these events presents the geologists and the surveyors each time with different challenges. The use of UAS (Unmanned Aerial Systems) represents, in this context, a recent and valid option to perform the data acquisition both safely and quickly, avoiding hazards and risks for the operators while at the same time containing the costs. The paper presents an innovative monitoring system based on UAS photogrammetry, GNSS survey and DSM change detection techniques to evaluate the Gran Sommetta rock glacier surface movements over the period 2012-2014. Since 2012, the surface movements of the glacier have been monitored by ARPAVdA (a regional environmental protection agency) as a case study for the impact of climate change on high-mountain infrastructures. In such scenarios, in fact, a low-cost monitoring activity can provide important data to improve our knowledge about glacier dynamics connected to climate change and to prevent risks in anthropized Alpine areas. To evaluate the displacements of the rock glacier, different techniques are proposed: the most reliable uses the orthophotos of the area and relies on a manual identification of corresponding features performed by a trained operator. To further limit the costs and improve the density of displacement information, two automatic procedures were developed as well.

INTRODUCTION

Environmental control and monitoring systems face a large variety of cases and situations; indeed, the behaviour of different phenomena (in the time domain as well as in the space domain) depends on many factors, which present the geologists and the surveyor each time with different challenges. In principle, as far as a geometric survey is concerned, the main parameter when designing a measurement and control system is the accuracy needed to assess, with a given probability, the magnitude of the expected displacement. However, a number of other issues influence the choice of the best monitoring system to use: the size of the area to control, the frequency of data acquisition, the time to deliver the results (alert time), the stability of the reference system, the influence of atmospheric parameters on measurement accuracy or operation, the site constraints, etc.
Photogrammetry has been used for a long time to periodically control the evolution of landslides, both from aerial images (Casson et al., 2003) and from the ground (Cardenal et al., 2008); in (Mora et al., 2003) the same technique has been used in combination with GPS surveys on the landslide body. In the same context, one of the most promising technologies that is rapidly spreading across many applications is the UAS (Unmanned Aerial System). Their relatively low cost, their capability of concurrently acquiring geometric data (usually producing a Digital Surface Model, DSM) and thematic data (using RGB or NIR imaging systems), and their very good productivity rate make the technology extremely appealing also for monitoring applications. Monitoring the surface creep of mountain permafrost is important to understand the effect of ongoing climate change on slope dynamics. Rock glaciers are widespread landforms that can show rapid acceleration and destabilization (Delaloye et al., 2013). In heavily anthropized areas like the Alps, the accelerating creep of perennially frozen talus/debris with high ice content will probably become an increasing problem, notably for human infrastructures (Haeberli, 2013). However, traditional techniques (e.g. topographical survey) cannot easily be applied in such scenarios: for example, the glacier surface is rough and presents hazards like crevasses. Only an operator with adequate training is able to carry out a survey, often with some risk to his safety. This study presents the evaluation of movements and volumetric changes of an Italian rock glacier, obtained by multi-temporal analysis of UAS images over the period 2012-2014. The movement rate obtained by photogrammetry is validated against repeated GNSS campaigns on 48 points distributed on the rock glacier.
SITE DESCRIPTION

The study area is located in the western Alps at the head of the Valtournenche Valley (Valle d'Aosta, Italy), on the Italian side of the Matterhorn. The body of the rock glacier is composed of two lobes, spanning an elevation range between 2600 and 2750 m. It is nearly 400 m long, between 150 and 300 m wide, and has an apparent thickness (based on the height of the front) of 20-30 m. The debris originates from an overhanging rockwall, mainly composed of green schists with prasinites (dark rocks) with intercalated bands of dolomite and marbles (clear rocks). It displays morphological features typical of active landforms: longitudinal ridges in the central steep part and a succession of transverse ridges and furrows in the compressive part of the tongue. An overview of the area is shown in Figure 1. Since 2012, the surface movements of the glacier have been monitored by ARPAVdA (Agenzia Regionale per la Protezione dell'Ambiente Regione Autonoma Valle d'Aosta - Environmental Protection Agency of Valle d'Aosta) as a case study for the possible impact of climate change on high-mountain infrastructures: in fact, this glacier juts onto a ski slope of the Cervinia resort, causing maintenance issues to professionals every year. For these reasons, a multi-approach monitoring system, based on repeated UAS photogrammetry and GNSS (Global Navigation Satellite System) surveys, has been set up. The current observation dataset consists of two UAS flights (October 2012 and 2014) and three GNSS campaigns (mid August 2012, 2013 and 2014). The monitoring activity also includes measurements of subsurface temperatures and deformations in two 15 m deep boreholes and geophysical prospecting (seismic and electrical), but these data are not treated in this paper.

Figure 1: Overview of the study area.

The advantage of using both GNSS and UAS lies in their complementarity. On the one hand, GNSS gives measures of surface displacement with high accuracy, but just on a few points (48 in this study). On the other hand, UAS photogrammetry provides a dense cloud of points, which allows (i) describing the whole surface in detail, producing a high-resolution DSM, and (ii) generating related orthophotos to evaluate the glacier displacements. The GNSS data can be used as ground truth for validating the displacements obtained by orthoimage analysis and DSM comparison, and for checking the accuracy of the monitoring system.

DATA ACQUISITION

Extreme environments, such as high mountain areas, are hostile places for survey operations. Therefore, the use of a drone represents an easy and safe way to conduct the monitoring activities. The UAS must be able to fly at high altitude and over an extended area in a short time; this requires efficient performance during the mission in terms of low power consumption, endurance and safety. For all these reasons, we used the fixed-wing Swinglet CAM produced by SenseFLY. It is particularly functional because, being realised in Expanded Polypropylene (EPP) foam with a carbon structure, it is very lightweight and effortlessly transportable in a steep environment. Its endurance (usually with a single battery pack the drone can fly for more than 30 minutes) allows completing the planned mission with just one flight. At the same time, several features of the ground station software provide a safe return of the drone to the landing site if something goes wrong (e.g. loss of radio signal, low battery level, etc.).
GCP network

In order to properly register the DSM at every epoch, 18 GCPs (Ground Control Points) distributed along the edges of the rock glacier area were materialized; the GCP locations are shown in Figure 2. The GCPs located in the upper part of the area were signalized with paint over blocks, and those located in the lower part were materialized over manholes and buildings. These control points were measured with the GEOMAX Zenith 20 Series in GNSS RTK (Real Time Kinematic) mode. The expected precisions are 1-2 cm in the XY coordinates and 2-3 cm in Z. Unfortunately, only 10 GCPs (mainly those located on manholes) were distinguishable on the images and were available for image block control. To provide more GCPs for the next survey campaigns, which should take place in July 2015, new ad hoc targets will be provided to replace those sprayed on blocks.

The UAS photogrammetry

The Swinglet CAM was equipped with a 12 Mpixel Canon IXUS 220 HS camera for the 2012 flight, and with a 16 Mpixel Canon IXUS 125 HS camera for the 2014 flight. The former flight was performed at 150 m height with a longitudinal overlap of 60% and a sidelap of 70% between adjacent strips, with a resulting GSD (Ground Sampling Distance) of 5 cm/pixel. The number of images acquired and used in the bundle block adjustment was 110. For the 2014 flight, the same GSD was obtained by slightly changing the flight altitude. At the same time, to make the image block more rigid, the longitudinal and side overlaps were respectively 80% and 85%. Given the flight characteristics, 239 images were acquired in the photogrammetric block. Table 1 summarises the design parameters of the two UAS flights. In order to evaluate the repeatability of the digital models, the bundle block adjustment and the consequent dense surface reconstruction of the two UAS surveys were performed with two different software packages: MicMac (Deseilligny, 2011), developed by IGN (Institut National de l'information Géographique et Forestière, Paris), and the commercial software Agisoft PhotoScan (PhotoScan, 2014), developed by the Agisoft LLC company. Both software pipelines begin with automated tie point extraction and feature matching; after that, a bundle block adjustment is performed. Then a dense image matching provides the input for surface reconstruction and orthophoto generation. Since the images were taken with consumer-grade compact cameras, whose optics are usually not very stable, a self-calibration procedure was used in the image orientation process. Even if the embedded GNSS camera locations were available to provide an initial solution to the bundle adjustment, their accuracies were too low to correctly co-register the DSMs at the different epochs, and the GCPs were preferred to orient the photogrammetric blocks. Finally, to validate the DSM accuracy, 48 GNSS check points (depicted in Figure 3) were used to check the elevation discrepancies between the GPS measurements and the photogrammetric surface reconstruction (Table 2). These DSM check points were measured with the Leica Viva GS10/15 in GNSS RTK mode, with an expected precision of ca. 1 cm. The points were materialized using fluorescent spray paint and by drilling a small pilot hole in the rock surface for the GNSS pole. The points are not clearly recognizable in the UAS images, and so the check points were compared with the DSM surface. Anyway, the standard deviations of the differences are in good agreement with the theoretical precision computed during image block design.
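The GSD quoted above follows from the usual pinhole relation GSD = H · p / f, with H the flying height, p the sensor pixel pitch and f the focal length. The snippet below is a back-of-the-envelope sketch; the pixel pitch and focal length are assumed illustrative values for a small compact camera, not specifications quoted in the paper, but they reproduce a GSD of roughly 5 cm at 150 m.

```python
# Flight-planning check: ground sampling distance from camera geometry.
# Pixel pitch and focal length below are assumed illustrative values.

def ground_sampling_distance(height_m, pixel_pitch_um, focal_mm):
    """GSD in cm/pixel, from GSD = H * pixel_pitch / focal_length."""
    return height_m * (pixel_pitch_um * 1e-6) / (focal_mm * 1e-3) * 100.0

# e.g. 150 m flying height, 1.55 um pixels, 4.3 mm focal length
print(f"{ground_sampling_distance(150, 1.55, 4.3):.1f} cm/pixel")  # ~5.4
```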
To limit the number of images, a GSD of ca. 5 cm, which provides a final theoretical precision of ca. 8.5 cm for both flights, was considered optimal. The results of the comparison are good, considering also the ground resolution (5 cm) of the photogrammetrically reconstructed digital models and the estimated precision of the GNSS survey (comparing the measures on a fixed point, an accuracy of ca. 5 cm was found). However, it is important to highlight the mean value of the differences revealed by the statistics of the 2012 flight: in this case, the observed 10 cm offset is probably due to a systematic error source between the GCP and GNSS measurements.

MEASUREMENT AND ANALYSIS OF THE DISPLACEMENT FIELD

The reconstruction of the rock glacier surface movements is obtained by comparing the orthophotos and the DSMs of the two UAS photogrammetric surveys of the investigated area, executed over the period 2012-2014. As described in the previous section, the photogrammetric workflow allowed obtaining two raster DSMs (with a cell size of 20 cm) and two different sets of orthophotos of the inspected region, characterised respectively by a 20 cm and a 5 cm pixel size. The orthophotos were analysed to identify the rock glacier displacements using two different methods: the manual identification of well-recognizable points on the glacier surface on the orthophotos of the two epochs, and an automatic tracking method able to recognize a dense grid of corresponding points between the images. The first set of orthophotos was used for the automatic identification of displacements, the second (with higher resolution) to help the manual extraction of corresponding points. An independent assessment of the estimated displacement field was performed on the basis of repeated GNSS campaigns that were executed on 48 points distributed on the rock glacier (Figure 3).

Manual measurement of the displacements

A first surface displacement analysis of the rock glacier was performed by manually selecting 746 corresponding points on the orthoimages of 2012 and 2014. The results are shown in Figure 4: for each point a displacement vector is depicted. The manual measurements revealed significant surface movements. The highest observed displacement values (up to 3 m) are localized at the top of the left front of the glacial landslide, whereas, from left to right of the surface, movements decrease from 2.5 to 0.2 m. The movement rate validation analysis (Figure 5) shows that the correlation between GNSS data and manually measured displacements is good at all magnitudes of displacement, with a root-mean-square error (RMSE) of about 16 cm. Indeed, some measured data are noticeably not in agreement with the GNSS survey, influencing the statistical results; but, considering the pixel size of the orthophotos, errors in the manual identification of some of the 746 points might have occurred.
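The RMSE used in this validation is the standard one computed over the check points; the sketch below illustrates it on invented displacement magnitudes (the actual survey data are not reproduced here).

```python
# Validation of photogrammetric displacements against GNSS at the check
# points. The arrays below are placeholders, not the survey data.
import numpy as np

def rmse(d_photo, d_gnss):
    """Root-mean-square error between two sets of displacements [m]."""
    d_photo, d_gnss = np.asarray(d_photo), np.asarray(d_gnss)
    return float(np.sqrt(np.mean((d_photo - d_gnss) ** 2)))

d_gnss = np.array([0.25, 0.80, 1.60, 2.40])           # hypothetical values
d_photo = d_gnss + np.array([0.10, -0.15, 0.20, -0.12])
print(f"RMSE = {rmse(d_photo, d_gnss):.2f} m")
```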
Automatic displacement measurements

Displacement and/or deformation measurements of the object surface can also be obtained by automatically tracking corresponding points between the images: feature and point tracking can usually be achieved using Feature Based Matching (FBM) or Area Based Matching (ABM) algorithms. FBM algorithms (Förstner, 1986) extract features of interest (lines, edges, angles, etc.) from images using specific analytical operators (Mikolajczyk, 2005) and afterwards identify correspondences between the feature lists. These methods are reliable and fast, but they produce sparse disparity maps because the matching process regards only the features with the highest detector response. Differently, in ABM, the matching process is directly applied to image intensities (grey values). Starting from a point on a reference image, the algorithm identifies the most probable location of the homologous point on the other images of the sequence. Each point to be matched is the centre of a small window of pixels in the reference image (the 2012 orthophoto in this case), and this window is statistically compared with equally sized windows of pixels in the other images (the 2014 orthophoto). Therefore, correspondences between the two images are identified by considering the similarities between grey values. A dense displacement map of the region of interest (with little or no workload for the operator) can be obtained by scanning the reference image with short spacing. Usually, commercial image matching software packages work on 8-bit indexed images (if RGB images are used, the software generally converts them to grey scale). With this limitation, the automatic procedure should work directly on the orthophotos. Another option is to represent the local shape of the DSM by converting its height map to some other representation (e.g. a shaded relief map) and exporting it to a common image format (see Figure 6 Top). Also in this case, however, the raster product has an 8-bit colour depth. Moreover, as far as hillshading rendering is concerned, since such a technique is based on the computation of the angle between the DSM surface normals and the incident light ray direction (Horn, 1981), height data noise can be amplified by the procedure. On the other hand, smoothing effects should be expected where shape discontinuities occur. A better choice is probably the use of particular image filters, such as the Wallis filter (Wallis, 1976), to improve the local contrast of the height map (Figure 6 Bottom).
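For reference, here is a compact sketch of the Wallis filter mentioned above, which maps the local mean and standard deviation of a raster towards target values; the window size and parameters are illustrative assumptions, not those of the authors' implementation.

```python
# Wallis filter sketch: local contrast enhancement of a height map
# before matching. Parameters are illustrative defaults.
import numpy as np
from scipy.ndimage import uniform_filter

def wallis(img, win=31, target_mean=127.0, target_std=50.0, b=0.8, c=0.9):
    """Push local mean/std of `img` towards target values."""
    img = img.astype(np.float64)
    m = uniform_filter(img, win)                               # local mean
    s = np.sqrt(np.maximum(uniform_filter(img**2, win) - m**2, 1e-12))
    gain = c * target_std / (c * s + (1.0 - c) * target_std)
    return (img - m) * gain + b * target_mean + (1.0 - b) * m

# e.g. enhance a synthetic height map
dsm = np.cumsum(np.random.default_rng(0).normal(size=(128, 128)), axis=0)
print(wallis(dsm).std())
```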
Automatic tracking of the displacements, both for the orthophotos and the DSM, has been performed using in-house codes: DenseMatcher (Re et al., 2012) and a proprietary implementation of the Semi-Global Matching (SGM) algorithm (Hirschmuller, 2005; Dall'Asta, 2014). Both software codes are able to perform the correlation process taking as input also 32-bit floating-point rasters. The former implements a Least Squares Matching (LSM) algorithm (Gruen, 1985), which tracks the position of one point at a time by comparing the area (i.e. the image block) surrounding the point itself. It uses a functional model which is able to provide displacement values with sub-pixel accuracy, extracting a full-field description of the deformations of the rock glacier surface. In this context, however, the use of complex shape functions to model the template deformation between the two images (Re et al., 2014) should be discouraged since, working with rectified orthophotos, the main transformation expected is simply a translation. Moreover, using the SGM method it is also possible to constrain the regularity of the displacement field, improving the reliability of the results. The technique considers both the image similarity and the displacement field continuity, through the concurrent application of regularization constraints (in terms of adjacent pixel displacements). SGM, in its original implementation, was conceived for performing a one-dimensional search of displacements between images. For instance, in stereo-vision problems the images to be matched can be transformed so that the displacement between corresponding points always occurs along the same direction. However, an in-house implementation for a 2D displacement search space has recently been developed by the authors (Dall'Asta, 2015) and can be used in this context.

Automatic displacement measurements on orthophotos

A first series of tests showed that the application of ABM algorithms to the orthoimage pair (shown in Figure 6) is not always reliable, producing a large number of gross errors and mismatches. In this context, the main issue is represented by the illumination conditions of the slope, which can change drastically over the monitoring period, producing significant contrast and brightness variations between the images. Moreover, the weathered surface of the rock glacier can produce relevant texture changes of the surface (Figure 7 clearly shows the radiometric problems which can arise, over a long monitoring period, between the two orthophotos). On account of this, image texture changes, together with low-contrast regions and radiometric transformation problems, can worsen the performance of the matching algorithms. The results of the matching process, applied directly to the orthophotos, are visible in Figure 8 Top and Bottom. In order to obtain displacement maps comparable with the original DSM data, a 20 cm pixel size orthophoto was used (instead of the original 5 cm high-resolution orthoimages). This first series of tests shows that the application of area based matching algorithms to the orthoimages can produce a significant number of outliers (out-of-range red areas displayed in Figure 8 Top), probably due to false positives in the low-contrast regions. Indeed, the evident contrast and illumination changes, together with the presence of snow-clad areas, cannot be handled reliably by the correlation algorithm, producing erroneously matched points. At the same time, it is worth noting that, where the image radiometry shows suitable characteristics for the ABM application, the accuracy of the results is very good, especially considering that, to limit the workload of the algorithm, a pixel size of 20 cm was used: a final RMSE of about 10 cm (see Figure 8 Bottom) is achieved. However, these results are validated against only 28 points of the 48 total GNSS measurements (just 58% of the control points are in agreement with those measured in the GPS campaign). The remaining 20 check points were discarded in the analysis since they fell in the regions affected by outliers.
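A brute-force illustration of the area-based matching principle described above follows: for a template centred on a grid point of the 2012 raster, the shift maximizing the normalized cross-correlation is searched in the 2014 raster. This is a didactic sketch, not the LSM or SGM codes used by the authors, and it works equally on orthophotos or filtered DSM rasters.

```python
# Area-based matching by normalized cross-correlation (NCC), brute force.
import numpy as np

def ncc(a, b):
    a = a - a.mean(); b = b - b.mean()
    d = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / d if d > 0 else -1.0

def match_point(ref, tgt, row, col, half=15, search=12):
    """Return the (drow, dcol) shift of the template centred on (row, col)."""
    tpl = ref[row - half:row + half + 1, col - half:col + half + 1]
    best, shift = -2.0, (0, 0)
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            r, c = row + dr, col + dc
            win = tgt[r - half:r + half + 1, c - half:c + half + 1]
            if win.shape != tpl.shape:
                continue
            score = ncc(tpl, win)
            if score > best:
                best, shift = score, (dr, dc)
    return shift  # multiply by the 20 cm cell size to get metres

# Synthetic check: tgt is ref shifted by (3, -2) pixels
ref = np.random.default_rng(1).normal(size=(100, 100))
tgt = np.roll(np.roll(ref, 3, axis=0), -2, axis=1)
print(match_point(ref, tgt, 50, 50))   # expect (3, -2)
```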
To provide a more reliable comparison of the two epochs (possibly over the whole extent of the rock glacier), a different method has been investigated.

Automatic displacement measurements on raster elevation models

Since the rock glacier is subject to sliding, the local surface shape changes little between epochs, while the colour of the surface elements varies significantly. As already pointed out, the proprietary image matching techniques developed by the authors can operate directly on floating-point rasters, so the identification of homologous features can be performed on the DSM itself. This is the simplest and most efficient way to overcome the limitation posed by orthoimages separated by a long time interval.

The displacement field calculated with this technique is shown as a 2-D coloured map in Figure 9 (top); the displacement vectors are overlaid at reduced point density to improve the legibility of the results. In agreement with the manual measurement of the displacements, the highest values observed on the rock glacier body are in the range of 2-2.5 m, with the same distribution over the rock glacier surface as obtained by the manual extraction procedure. These results also appear more robust and reliable, allowing the displacements of the whole rock glacier body and of the surrounding areas to be reconstructed.

Comparing the displacement map with the GNSS validation survey data (on a sample of 40 validated points), the accuracy agrees with the manual DSM comparison results, while it is slightly lower than that obtained by manual extraction on the orthophotos. The RMSE is about 16 cm (the validation of the results is shown in Figure 9, bottom). The small, isolated regions showing strong displacement discontinuities (two of them are indicated by white arrows in Figure 9, top) most likely represent gross errors (note, for instance, that in some cases the direction of the displacement is inverted) or detected rolling blocks.

The application of the matching algorithms to the elevation data required the use of a large template. The level of detail of the DSM is not high enough to represent the smaller blocks clearly; in analogy with traditional image matching techniques, the area inside the matching template should not be "flat", unless high uncertainties are considered acceptable. For this reason, large templates are necessary for an accurate and reliable identification of the same areas on the two rasters under investigation. In all the tests performed, a template size corresponding to a ground footprint of ca. 6-10 m produced the best results.

The higher density of the vectors allows a more detailed description of the surface dynamics, revealing significant movements also in areas that were not initially investigated (but that will probably be monitored in future activities). The larger area that can be analysed is therefore an added value of the automatic DSM comparison procedure for understanding the whole region surrounding the rock glacier.
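The agreement figures quoted above reduce to a root-mean-square error over the validated displacement vectors. A short sketch of that check is shown below; the array names are placeholders for the matched and GNSS displacement sets.

```python
import numpy as np

def displacement_rmse(d_matched, d_gnss):
    """RMSE between matched displacement vectors and GNSS check points.

    d_matched, d_gnss: (N, 2) arrays of (east, north) displacements in
    metres for the same N validated check points.
    """
    residuals = np.linalg.norm(d_matched - d_gnss, axis=1)
    return np.sqrt(np.mean(residuals ** 2))

# Example with 40 hypothetical check points, as in the DSM validation:
# rmse = displacement_rmse(d_dsm[valid_idx], d_gnss[valid_idx])  # ~0.16 m reported
```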
CONCLUSIONS

The paper presented the first stages of a monitoring activity on an Italian rock glacier, devoted to evaluating the effects of climate change on permafrost masses, which have lately shown progressive destabilization and fast acceleration of their creep behaviour. In this context, the use of UAS cuts the costs of periodic monitoring drastically, while allowing dense geometric data on the glacier shape to be acquired. For future monitoring activities, the authors expect to perform the periodic surveys on a monthly basis, to better investigate the seasonal velocity changes, especially during the summer period.

During winter, when the rock glacier body is completely covered by snow, the proposed methodology cannot be applied and different techniques should be implemented (e.g. GNSS surveys on large rock blocks). For the time being, since such solutions are very expensive and little or no movement is expected in winter, the monitoring activities are planned only for the spring, summer and autumn seasons. With the proposed methodologies, a very detailed description of the creep behaviour of the glacier can be produced, especially if automatic extraction techniques are employed. The first two monitoring campaigns (one in 2012, the other in 2014) provided important hints for the next campaigns. Still, an independent data acquisition providing a check dataset is required to properly validate the accuracy of the DSM reconstruction, even though the first tests showed good agreement between the UAS-collected data and GNSS. Such an activity is troublesome, since it requires direct access of a human operator to the glacier area (e.g. for the GNSS kinematic survey of the 48 displacement points).

Figure 5: Scatterplot between displacements obtained from the manual analysis of UAS orthophotos and GNSS measurements on 48 points.
Figure 6: Top: shaded DSM of the October 2012 flight. Bottom: elevation raster map of the October 2012 DSM after application of the Wallis filter.
Figure 8: Top: 2-D displacement [m] coloured map obtained by the ABM process on the orthophotos. Bottom: scatterplot between displacements obtained from the automatic comparison of UAS orthophotos and GNSS measurements on 28 points.
Figure 9: Top: 2-D displacement [m] coloured map obtained by the ABM process on the DSM, with the related movement vectors calculated automatically (using a 10-pixel grid step). Bottom: scatterplot between displacements obtained from the automatic comparison of UAS and GNSS measurements on 40 points.
Table 1: Summary of the characteristics (design parameters) of the two UAS flights.
Table 2: Statistics of the comparison between the GNSS elevation data and the photogrammetrically reconstructed DSM for the two flights.
Quantum Holography

We propose to make use of quantum entanglement for extracting holographic information about a remote 3-D object in a confined space which light enters, but from which it cannot escape. Light scattered from the object is detected in this confined space entirely without the benefit of spatial resolution. Quantum holography offers this possibility by virtue of the fourth-order quantum coherence inherent in entangled beams.

Introduction

We consider the use of quantum entanglement [1], which gives rise to 'spooky actions at a distance' in Einstein's words [2], for extracting holographic information [3,4] about a remote 3-D object concealed in an integrating sphere. Quantum holography makes use of entangled-photon pairs [5,6], one of which is scattered from the remote object while the other is locally manipulated using conventional optics that offers full spatial resolution. Remarkably, the underlying entanglement permits the measurement to yield coherent holographic information about the remote object. Quantum holography offers this possibility by virtue of the fourth-order quantum coherence inherent in entangled beams; indeed, it can be implemented despite the fact that conventional second-order coherence, required for ordinary holography, is absent. Classical holography cannot achieve this. Belinskii and Klyshko [8] constructed a two-photon analog of classical holography, although they provided no analysis. The configuration presented here makes use of entanglement to transcend the capabilities of classical holography.

Specifically, consider a 3-D object placed within a chamber that has an opening through which light enters but does not escape, as illustrated in Fig. 1. Coated with a photosensitive surface, the wall of the chamber serves as an integrating sphere that converts any photon reaching it into a photoevent. The chamber therefore serves as a photon bucket that indiscriminately detects the arrival of photons at any point on its surface, whether scattered or not, but is totally incapable of discerning the location at which a photon arrives.

Classically it is impossible to construct a hologram of the 3-D object in this configuration, whatever the nature of the light source or the construction of the imaging system. This is because optical systems that make use of classical light sources, even those that involve scanning and time-resolved imaging, are incapable of resolving the ambiguity of the positions from which the photons are scattered; they therefore cannot be used to form a coherent image suitable for holographic reconstruction.

Method

The implementation of quantum holography makes use of entangled-photon beams generated, for example, by the process of spontaneous optical parametric down-conversion [6][7][8][9][10][11][12] from a second-order nonlinear crystal illuminated by a pump laser. As shown in Fig.
1, one beam from the source S enters the chamber opening, is scattered from the object, and yields a single time sequence of photoevents from the integrating sphere. The other beam is transmitted through a conventional optical system and detected using a single-photon-sensitive scanning (or array) detector. The information registered by the two detectors, in the form of coincidence counts, is sufficient to extract coherent information about the 3-D object that is suitable for holographic reconstruction.

Let S be a planar two-photon source emitting photons in the pure entangled quantum state [8]

|\Psi\rangle = \int dx \, \psi(x) \, |1_x\rangle_1 |1_x\rangle_2,   (1)

where |1_x\rangle denotes the single-photon Fock state of the mode associated with source position x, and \psi(x) is the state probability amplitude, so that |\psi(x)|^2 represents the probability density that a photon pair is emitted from point x in the source plane. As a consequence of the state in Eq. (1), each photon is individually in a mixed state (described by a density operator) that exhibits no second-order coherence [9], which is what traditional holography requires. This entangled state may be generated, for example, by spontaneous parametric down-conversion from a thin crystal, in which case \psi(x) represents the spatial distribution of the pump field [8].

Of the two photons generated by the source S, the one directed through the opening of the chamber may (or may not) be scattered from the object and impinges on the chamber wall at position x_1 \in W, where W represents the set of points on the chamber wall. The optical system between the source and the chamber, idealized as a simple lens in Fig. 1, together with everything inside the chamber including the object, is assumed to be linear and is characterized by an impulse response function h_1(x_1, x). The other photon is transmitted through a linear optical system characterized by an impulse response function h_2(x_2, x) to the single-photon-sensitive scanning (or array) detector. The photon coincidence rate at points x_1 and x_2 is then described by the probability density

p(x_1, x_2) = |g(x_1, x_2)|^2, \qquad g(x_1, x_2) = \int dx \, h_2(x_2, x) \, \psi(x) \, h_1(x_1, x).   (2)

The form of Eq. (2) suggests that g(x_1, x_2) is the amplitude of a coherent image of the point x_1 formed through the following cascade (see Fig. 1): propagation through h_1 in the reverse direction toward the source (from x_1 to x), modulation by \psi(x) at the source, and subsequent transmission from the source through the system h_2 to the point x_2. Equation (2) may therefore be written symbolically as g = h_2 * \psi \cdot h_1, where * represents transmission through a linear system (convolution in the shift-invariant case) and \cdot represents multiplication or modulation; the expression is to be read in reverse order, from right to left, as is the custom in operator algebra.

Since we have no knowledge of the detection points x_1 \in W on the chamber wall (the integrating sphere is a bucket detector), we must integrate over W, whereupon the coincidence rate in Eq. (2) becomes the marginal probability density

p_2(x_2) = \int_W dx_1 \, |g(x_1, x_2)|^2   (3)

of detecting one photon at x_2 and another at any point of W. In spite of this integration, it is clear from Eq. (3) that p_2(x_2) contains information about the system h_1, and therefore about the object, via the function g. The marginal rate is the incoherent superposition of many coherent images of the form given in Eq. (2), originating from all points of W; this is therefore a modal expansion of a partially coherent system [12].

Example: Scattering objects

To illustrate the principle, let us consider two samples in turn: a single point scatterer and a collection of such scatterers; the results are readily generalized to an arbitrary object. Consider a single static scatterer located at the point x^{(1)} inside the chamber, as depicted in Fig. 2. The system h_1 then comprises two contributions: a direct path to the chamber wall, represented by the system h^{(0)}, and a scattering path composed of the illumination system h_i that directs light to the point scatterer, the fraction of the field that is scattered (the complex scattering strength) f, and the system h^{(1)} that carries light from the scatterer to the chamber wall. The two paths are mutually coherent, so their probability amplitudes add, leading to

h_1(x_1, x) = h^{(0)}(x_1, x) + f \, h^{(1)}(x_1, x^{(1)}) \, h_i(x^{(1)}, x).   (4)

Substituting Eq. (4) into Eq. (3) yields the marginal coincidence rate, Eq. (5), which is the sum of three terms that may be elucidated by referring to Fig. 2, which depicts the Feynman-like paths of the various probability amplitudes: (1) the first term is the marginal coincidence rate in the absence of the scatterer; it is identical to that in Eq. (3) with h_1 replaced by h^{(0)}, and represents the direct path in Fig. 2; (2) the second term is the marginal coincidence rate arising from the scatterer alone, represented by the scattering path in Fig. 2; (3) the third term represents interference between these two paths and is therefore the term of interest for quantum holography: it is the fourth-order analog of the second-order interference in Gabor's original conception of holography [3,4].

The interference term involves two functions, q and r, defined in Eqs. (8) and (9), respectively. The function r is the image of a point at the location of the scatterer x^{(1)} formed through a cascade of the systems h^{(1)} (traveling forward) and h^{(0)} (traveling backward), followed by modulation by \psi(x), and finally traveling forward through the system h_2 to the point x_2; this is the term that includes the holographic information. The quantity q, by which r is multiplied in Eq. (5), is the corresponding image formed without the scattering path; if the optical system is designed such that this image is uniform over the area of interest, then q is independent of x^{(1)} and becomes unimportant. Note that integration over the area of the chamber is essential for achieving quantum holography; thus a point detector, for example [8], cannot be used for this purpose, by virtue of Eqs. (8)-(10). Furthermore, ray-tracing techniques, such as those used in Ref. [13] in connection with the geometric optics of entangled-photon beams, cannot be used for characterizing this interference effect.

For a collection of scatterers, the impulse response function is a generalization of Eq. (4), obtained by coherently summing the contributions of the individual scatterers [Eq. (11)]. The marginal coincidence rate in this case, obtained by substituting Eq. (11) into Eq. (3), becomes a generalization of Eq. (5) [Eq. (12)], with all other quantities as previously defined. Once again the marginal coincidence rate, given in Eq. (12), is the sum of three terms analogous to those in Eq. (5). The second term, due to the scatterers alone, includes the sum of the contributions of each scatterer independently, together with terms resulting from interference amongst the scatterers. The third term in Eq. (12) includes the holographic information. The results can then be generalized to any object by replacing the discrete summation in Eq. (11) by an integral; they also apply to objects that do not scatter.

The image obtained from the marginal coincidence rate p_2(x_2) is holographic by virtue of Eq. (12). This equation has the structure of a conventional hologram obtained by illuminating the scatterers with coherent light through a composite system involving the optics of both beams, with the state probability amplitude \psi(x) serving as an effective coherent aperture. This result follows from the duality between entanglement and coherence [8]. The hologram of the 3-D object concealed in the chamber may then be recorded on a 2-D photographic plate and viewed with ordinary light in the usual fashion of holographic reconstruction.

Conclusion

The remarkable possibility of quantum holography is attained by virtue of a light beam that itself does not illuminate the object, but is entangled with the beam that does, and is detected with full spatial resolution. Although each of the beams is, by itself, incoherent, and therefore not capable of conventional interference, and although the integrating sphere provides no spatial resolution whatsoever, the quantum entanglement permits interference and hence offers the possibility of holography. This surprising and purely quantum result cannot be achieved by using optical beams generated by a classical source, even if they possess the strongest possible classical correlation [9].

Acknowledgements

This work was supported by the US National Science Foundation; by the Center for Subsurface Sensing and Imaging Systems (CenSSIS), an NSF engineering research center; and by the David & Lucile Packard Foundation.

Figure 1: Quantum holography. S is a source of entangled-photon pairs; h_1 and h_2 represent the optical systems that deliver the entangled photons from S to the remote single-photon-sensitive integrating sphere (the wall of the chamber concealing the hidden object, a bust of Plato) and to the local 2-D single-photon-sensitive scanning or array detector, respectively.
Figure 2: Quantum holography of a single point scatterer located at point x^{(1)} inside the chamber, showing the direct path h^{(0)}(x_1, x) and the scattering path h_i, f, h^{(1)}.
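The bucket-detector marginal rate of Eq. (3) is easy to explore numerically. Below is a minimal one-dimensional toy model (all kernels, grid sizes, and the Gaussian pump profile are illustrative assumptions, not the paper's configuration) showing that the interference term survives the integration over the chamber wall.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64                                   # source / wall / detector grid size

x = np.linspace(-1.0, 1.0, N)
psi = np.exp(-x ** 2 / 0.1)              # Gaussian pump profile (assumed)
psi /= np.linalg.norm(psi)

# Illustrative linear systems as random complex kernels (stand-ins for
# free-space propagation): h0 = direct path to the wall, h1c = path from
# the scatterer to the wall, h2 = local imaging system.
def random_kernel():
    return (rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))) / np.sqrt(N)

h0, h1c, h2 = random_kernel(), random_kernel(), random_kernel()
k, f = N // 3, 0.8 * np.exp(1j * 0.7)    # scatterer position and strength
hi = random_kernel()[k]                  # illumination amplitude at scatterer

# Eq. (4): coherent sum of the direct and scattering paths.
h_total = h0 + f * np.outer(h1c[:, k], hi)

def marginal_rate(h1):
    # Eq. (2): g(x1, x2) = sum_x h1(x1, x) * psi(x) * h2(x2, x)
    g = h1 @ np.diag(psi) @ h2.T
    # Eq. (3): integrate |g|^2 over all wall points x1 (bucket detection).
    return (np.abs(g) ** 2).sum(axis=0)

p_total = marginal_rate(h_total)
p_direct = marginal_rate(h0)
p_scatter = marginal_rate(f * np.outer(h1c[:, k], hi))
interference = p_total - p_direct - p_scatter   # the holographic term
print(np.abs(interference).max())               # nonzero despite bucket detection
```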
A Reconfigurable Analog Baseband Circuitry for LFMCW RADAR Receivers in 130-nm SiGe BiCMOS Process

A highly reconfigurable open-loop analog baseband circuitry with programmable gain, bandwidth and filter order is proposed for integrated linear frequency modulated continuous wave (LFMCW) radar receivers in this paper. The analog baseband chain allocates the noise, gain and channel-selection specifications to different stages, for the sake of noise/linearity tradeoffs, by introducing a multi-stage open-loop cascaded amplifier/filter topology. The topology includes a coarse-gain-tuning pre-amplifier, a folded Gilbert variable gain amplifier (VGA) with a symmetrical dB-linear voltage generator and a 10-bit R-2R DAC for fine gain tuning, a level shifter, a programmable Gm-C low-pass filter, a DC offset cancellation circuit, two fixed gain amplifiers with bandwidth extension and a novel buffer amplifier with active peaking for testing purposes. The noise figure is reduced with the help of a low-noise pre-amplifier stage, while the linearity is enhanced with a power-efficient buffer and a novel high-linearity Gm-C filter. Specifically, the Gm-C filter improves its linearity with no increase in power consumption, thanks to an alteration of the trans-conductor/capacitor connection style, instead of pursuing high-linearity but power-hungry class-AB trans-conductors. In addition, the logarithmic bandwidth tuning technique is adopted to minimize the capacitor array size. The linear-in-dB, DAC-controlled gain topology improves the gain tuning accuracy and stability of the analog baseband, and also provides efficient access to digital-baseband automatic gain control. The analog baseband chip is fabricated using 130-nm SiGe BiCMOS technology. With a power consumption of 5.9~8.8 mW, the implemented circuit achieves a tunable gain range of −30~27 dB (DAC linear gain step guaranteed), a programmable −3 dB bandwidth of 18/19/20/21/22/23/24/25 MHz, a filter order of 3/6 and a gain resolution better than 0.07 dB.

Introduction

The boom in smart portable devices has led to an imperative demand for multi-standard compatibility in mobile/wireless receivers in recent years, covering standards such as Global System for Mobile Communications (GSM), Bluetooth, Global Position System (GPS), Wideband Code Division Multiple Access (W-CDMA), Wireless Local Area Network (WLAN), Long Term Evolution (LTE), and so on. Therefore, industrial corporations and academic institutions have released diverse integrated solutions with different RF front-ends covering various bands and sharing one analog baseband, which performs the channel selectivity and gain tunability functions after the down-conversion mixers. In general, the analog baseband should be optimized for multi-purpose application scenarios, with a fundamental tradeoff between noise figure, linearity, power consumption, channel selection and gain/bandwidth/filter-order programmability. The fundamental principle that prevents the analog baseband from achieving universality is that the noise figure is determined by the preceding stages, while the linearity is decided by the later ones.
The major challenges to be settled in the design of flexible analog circuits are the following: a. performance programmability; b. specification compatibility; c. energy scalability; d. complexity and cost.

In direct conversion receivers, a typical analog baseband consists of a programmable gain amplifier (PGA) and a low-pass filter (LPF), as depicted in Figure 1. Two general architectures thus emerge, each with its respective pros and cons: the LPF-first topology is more suitable for receivers requiring high linearity and a medium noise figure, while the PGA-first topology better fits receivers requiring a low noise figure and medium linearity. The authors of [1] proposed a fourth-order Chebyshev active-R-C LPF + PGA + fourth-order Chebyshev active-R-C LPF topology, which emphasized the filtering capability. The authors of [2] proposed a third-order LPF + PGA topology. The authors of [3] proposed a discrete-time IIR LPF + active FIR topology, which focuses on the discrete-time filtering function. The authors of [4][5][6][7][8] proposed a typical LPF + VGA topology with tunable cut-off frequency/noise/gain performance. The authors of [9] proposed a fourth-order merged closed-loop analog baseband topology, which integrates channel selection and gain programmability in one merged PGA/LPF biquad. However, the closed-loop topology is intrinsically power-hungry and frequency-limited, since the operational amplifier should exhibit a gain-bandwidth product (GBW) several times larger than the closed-loop bandwidth requirement [10,11]. Therefore, there is no unique optimum solution, but rather a myriad of solutions whose variety is proportional to the number of input specifications, in the expectation of sufficiently fine gain resolution, bandwidth programmability, excellent noise/linearity performance, acceptable chip footprint and low power consumption.

In this paper, a highly reconfigurable open-loop analog baseband prototype chip is proposed with optimized noise/linearity/power-consumption performance and digitally programmable bandwidth/gain/filter order for integrated linear FMCW radar receivers. Power consumption is minimized via two methods: a. an open-loop analog baseband chain with feasible compensation for process/voltage/temperature (PVT) variations; b. allocating the channel-selection and signal-amplification specifications to every block across the baseband chain, and adjusting the current consumed for the different gain levels. The noise figure is kept low with the help of a low-noise-figure/high-gain pre-amplifier, while the linearity of the Gm-C LPF is guaranteed by a novel connection style without any rise in power consumption. Moreover, a novel high-linearity buffer amplifier with active peaking is designed for testing purposes. The proposed prototype is fabricated in a 130 nm SiGe BiCMOS process, and the experimental results demonstrate the practicality of the proposal.

The rest of the paper is organized as follows. After the introduction given in Section 1, Section 2 derives the LFMCW radar requirements on the overall analog baseband chain, Section 3 illustrates the detailed implementation schematics, Section 4 demonstrates the experimental results and Section 5 concludes the paper.

Analog Baseband Architecture for LFMCW Radar

LFMCW radars are extensively used in automotive anti-collision, security check, imaging and presence detection applications where high range resolution is required in localization/tracking.
A variety of modulation schemes are available, with the transmitted frequency signal acting as a sine wave, sawtooth wave, triangular wave or square wave. In a sawtooth-wave-based FMCW radar, the achievable range and velocity resolution depend on the transmitted signal bandwidth BW and the linear chirp period T_m, as seen in Figure 2. The range resolution ∆r, which refers to the minimum detectable separation distance of two targets of equal cross sections that can be differentiated as distinct targets, is proportional to c/(2·BW), where c is the velocity of light; a large modulation bandwidth BW is therefore needed for a fine range resolution. For a transceiver operating at 94 GHz, with a modulation period on the order of 100 µs and a modulation bandwidth higher than 500 MHz, the analog baseband frequency can be calculated as follows:

K = BW / T_m   (1)
f_IF,static = 2 K r / c   (2)
f_d = 2 v f_o / c   (3)
f_IF,moving = f_IF,static ± f_d   (4)

where BW is the transmitted signal bandwidth, T_m is the linear chirp period, K is the FMCW slope, f_o is the center operating frequency, v is the target velocity, c is the velocity of light, r is the distance from source to target, f_IF,static is the analog baseband frequency for static target ranging, and f_IF,moving is the analog baseband frequency for moving target ranging. Setting BW = 500 MHz, T_m = 100 µs, c = 3 × 10^8 m/s and r = 600~700 m in Equations (1)-(4), we can calculate f_IF,static to be 20~23.4 MHz. When the Doppler effect is taken into account, the IF frequency should cover the range of 18~25 MHz. Therefore, the analog baseband −3 dB bandwidth is set to 18/19/20/21/22/23/24/25 MHz, with 3-bit digital control words, to cover the frequency shift caused by potential target motion.
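A quick numeric check of Equations (1)-(4); the Doppler term is evaluated for an illustrative target speed, which the text does not specify:

```python
# Beat-frequency budget for the sawtooth FMCW parameters in the text.
BW, Tm, fo, c = 500e6, 100e-6, 94e9, 3e8

K = BW / Tm                      # Eq. (1): chirp slope = 5e12 Hz/s
for r in (600.0, 700.0):         # target range [m]
    f_static = 2 * K * r / c     # Eq. (2): 20.0 and 23.3 MHz
    print(f"r = {r} m -> f_IF,static = {f_static / 1e6:.1f} MHz")

v = 30.0                         # illustrative target velocity [m/s]
f_d = 2 * v * fo / c             # Eq. (3): Doppler shift ~18.8 kHz
print(f"Doppler shift at {v} m/s: {f_d / 1e3:.1f} kHz")
# Eq. (4): f_IF,moving = f_IF,static +/- f_d, motivating the 18-25 MHz span.
```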
Figure 2: Transmit signal, received signal and the FMCW period of the sawtooth modulation.

In this LFMCW radar, a direct conversion architecture is adopted to evade image rejection problems; the analog baseband thus consumes lower power, since its bandwidth is half of that in low-IF/superheterodyne receivers. Paradoxically, the typical dynamic range of a direct conversion receiver is lower than that of low-IF/superheterodyne ones. Therefore, novel dynamic range optimization techniques are investigated in two directions: lower noise figure and higher linearity. The merged PGA/LPF biquads in [9] simultaneously optimized the two specifications owing to a closed-loop topology with switchable filter orders. However, this kind of topology becomes power-hungry when the frequency rises, and its gain resolution depends on the resistor/capacitor array size. In other words, the merged analog baseband trades power consumption and chip size for noise/linearity/gain/filter-order reconfigurability. What is more, the noise figure of the PGA/LPF is normally higher than 25 dB, which deteriorates the receiver noise specification, since the front-end (comprising a low noise amplifier and a mixer) of a linear FMCW receiver usually has a gain lower than 20 dB. Thus, the architectures in Figure 1 and the merged analog baseband in [9] cannot satisfy the noise/linearity/power-consumption requirements concurrently.

This paper modifies the LPF-first/PGA-first topology by adding a programmable-gain pre-amplifier with coarse gain tuning ability at its front, as depicted in Figure 3, forming a PGA-LPF-PGA topology. In detail, this topology includes a pre-amplifier for noise optimization and coarse gain tuning, a folded Gilbert variable gain amplifier (VGA) with a symmetrical exponential voltage generator and a 10-bit R-2R DAC for fine gain tuning, a level shifter, a programmable Gm-C LPF, a DC offset cancellation (DCOC) circuit, two fixed gain amplifiers (FGA) with bandwidth extension and a novel buffer amplifier with active peaking for testing purposes. DC coupling is utilized on account of spectrally efficient modulation schemes [12]. The architecture in Figure 3 is open-loop and thus frequency/power scalable. From the LPF-first viewpoint, this topology optimizes the noise figure with an additional PGA ahead of the filter; from the PGA-first viewpoint, it ameliorates the linearity with an additional PGA behind it. Nonetheless, a heavy signal-processing burden is placed upon the LPF, and its high linearity is a prerequisite. In search of a high-linearity LPF, researchers have proposed numerous structures, such as active-R-C and class-AB Gm-C ones [6,13,14,15,16]. Their linearity is refined at the cost of power consumption and operating frequency, which is fundamentally limited by the closed-loop LPF topology and the complex routing inside the trans-conductor. Thus, high-linearity open-loop LPFs, which theoretically decouple linearity from power consumption, are urgently needed.
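The PGA-LPF-PGA allocation follows directly from the Friis cascade formula: a low-noise, high-gain first stage suppresses the (typically >25 dB) noise figure of the filter stage. A small sketch with illustrative stage numbers, not the paper's measured values:

```python
import math

def cascade_nf_db(stages):
    """Friis formula; stages = [(gain_dB, nf_dB), ...] from input to output."""
    f_total, g_prod = 1.0, 1.0
    for g_db, nf_db in stages:
        f = 10 ** (nf_db / 10)
        f_total += (f - 1.0) / g_prod      # each stage's noise, input-referred
        g_prod *= 10 ** (g_db / 10)
    return 10 * math.log10(f_total)

# Illustrative numbers only: a 20 dB gain / 5 dB NF pre-amplifier ahead of a
# noisy filter stage (NF ~ 25 dB) and a back-end PGA (NF ~ 10 dB).
print(cascade_nf_db([(20, 5), (0, 25), (20, 10)]))   # ~8.1 dB cascade NF
```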
Detailed Circuit Designs

In this section, the detailed sub-block circuit schematics and their respective design procedures are presented.

Pre-Amplifier

As the first stage in the analog baseband chain, the pre-amplifier should exhibit a low noise figure and, at the same time, sufficient gain to suppress the noise contribution of the following stages to the overall analog baseband circuitry. Moreover, its gain programmability serves as a critical safeguard against large variations in input signal strength and as a powerful enabler of automatic gain control (AGC). Class-A/AB amplifiers are usually selected, with common-mode stability circuitry, at the cost of a high noise figure attributed to the circuit complexity.
Three fundamental gain programmability schemes, namely current switching/bleeding, load resistor switching and source degeneration, have been reported, as depicted in Figure 4; their merits and demerits are evident: a. the current switching method dynamically varies the gain by adjusting the biasing current accordingly; high gain typically corresponds to high linearity/low noise figure/high dynamic range, while low gain corresponds to low linearity/high noise figure/low dynamic range, so the dynamic range fluctuates over the gain settings; b. load resistor switching is a gain switching scheme with little noise figure variation, since the noise contribution of the load resistors is restrained by the amplifying transistors; however, it has a detrimental effect on the voltage swing, which therefore varies with gain; c. the source degeneration scheme adjusts the transconductance of the main amplification transistors, with a corresponding change in noise figure and linearity: at high gain, a weak source degeneration is formed, giving a low noise figure and medium linearity; at low gain, a strong source degeneration is formed, giving high linearity and a medium noise figure; nevertheless, the current consumption is constant over all gain settings, which deteriorates the power-dissipation self-adaptability. The advantages and disadvantages of the three schemes are listed in Table 1.

Given the low-noise and coarse-gain-tuning requirements, this paper utilizes a limiting amplifier structure with no common-mode control [12] and adds 3-bit gain programmability with a switched-resistor source degeneration technique, as depicted in Figure 5. Common-mode stabilization is left to the following stages, for noise figure optimization. Although the issue of constant current consumption remains, as in any source degeneration structure, the limiting amplifier structure avoids the common-mode control circuitry, which is usually power-hungry; in short, the power consumption problem is partially circumvented. The component sizes and post-simulated specifications at the TT corner, 27 °C, are given in Table 2.
VGA and Symmetrical Exponential Voltage Generator

Intrinsically, the Gilbert structure can be regarded as a hybrid of current switching and load resistor switching, since the current is simultaneously split between the load resistors R1/R2 and the tail currents Itail1/Itail2 in Figure 6. Therefore, an optimization weighing the pros and cons of the two schemes can be conducted for the sake of dynamic range expansion. In essence, the Gilbert cell adjusts its gain via the current switching technique, but the difference between plain current switching and the Gilbert cell lies in the current fed through the load resistors: in the Gilbert cell, this current is constant across all gain settings, which circumvents the dynamic range fluctuations encountered with the current switching technique. Thus, the Gilbert structure is utilized for its wide gain tuning range and robustness, while a dB-linear voltage generator with an R-2R DAC is utilized for digital gain tuning precision. The variable gain amplifier cell with its control voltage generator is depicted in Figure 6. Moreover, dual supply voltages of 1.2 V/1.8 V are adopted for power optimization; specifically, 1.8 V feeds the VGA, while 1.2 V feeds the exponential voltage generator. In the VGA cell, assuming the transistors work in the strong saturation region and adopting the square-law model, the input-output relationship of the VGA is expressed by Equation (5), where µn is the low-field mobility coefficient, Cox is the oxide capacitance per unit area and W/L is the aspect ratio of the transistors; the voltage gain is in a dB-linear relationship with the input voltage, as Equation (5) expresses. The component sizes and post-simulated specifications at the TT corner, 27 °C, are given in Table 3.

DAC

An R-2R DAC is employed to facilitate the digital control of the dB-linear voltage generator and to ensure fine gain resolution, with a single-pole double-throw (SPDT) based gain tuning scheme that is switchable between analog continuous tuning and digital step tuning [8]. The 10-bit R-2R DAC with its output buffer is depicted in Figure 7.
Two voltage ports, named Vrefh and Vrefl, provide the high and low reference voltages, in accordance with Figure 3. The unit resistor value is selected to be 14.95 kΩ, with accuracy in mind. In the design of a 10-bit DAC, component accuracy, symmetry of the differential components and environmental consistency are three crucial points in the layout floor planning; dummy cells near key components and shielded signal-path optimization should therefore be iterated several times. In addition, the output buffer should possess rail-to-rail output swing and acceptable drive capability; the folded cascode topology depicted in Figure 7b is therefore chosen, with a linearization technique.

In the static simulation of the whole DAC, the full voltage range is 1.2 V, the INL is 0.17 LSB and the DNL is 0.34 LSB. In the dynamic simulations, a sampling frequency of 1 MHz with a sinusoidal input signal of 0.4961 MHz is chosen, and the effective number of bits (ENOB) is higher than 9.88 bits. The component sizes and post-simulated specifications at the TT corner, 27 °C, are given in Table 4.
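The ideal output of an N-bit R-2R ladder is simply a binary-weighted interpolation between the two reference voltages, so the expected LSB step of the gain-control voltage can be sanity-checked as below (a textbook ladder model, not the transistor-level design; the 0.68 V/0.90 V references are the dB-linear settings reported in the measurement section):

```python
def r2r_dac_out(code, n_bits=10, v_refl=0.68, v_refh=0.90):
    """Ideal N-bit R-2R DAC: interpolate between Vrefl and Vrefh."""
    full_scale = (1 << n_bits) - 1
    return v_refl + (v_refh - v_refl) * code / full_scale

lsb = (0.90 - 0.68) / ((1 << 10) - 1)     # ~0.215 mV control-voltage step
print(r2r_dac_out(0), r2r_dac_out(1023), lsb)
```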
Level Shifter

Between the VGA and the LPF, a level shifter is inserted for common-mode voltage stabilization, since the preceding pre-amplifier and VGA do not provide this function. The level shifter is depicted in Figure 8; a typical one-stage amplification with two-stage DC voltage feedback is used. R2/C1 performs the gain/phase-margin compensation, and snake-resistor layouts are used together with large MOS capacitors. The component sizes and post-simulated specifications at the TT corner, 27 °C, are given in Table 5.

Programmable Gm-C LPF

Gm-C filters are well known for their open-loop and frequency-scalable characteristics. In theory, the Gm-C topology, with its simplicity, modularity and programmability, would be the perfect choice for high-frequency continuous-time filter design. However, its mediocre linearity limits its potential use in the analog baseband chain of direct conversion receivers. Linearity improvement techniques are therefore pursued in two respects: a. the transconductor cell [5][6][7][8]; b. the filter architecture. Most relevant work focuses on the former, with an emphasis on class-AB transconductors. Nonetheless, class-AB transconductors are inherently power-hungry and possess a medium noise figure [10]. In addition, their common-mode feedforward and feedback circuitry needs special care in the common-mode analysis, and their layout and routing are complex [6,13]. Moreover, previously reported linearity enhancement techniques for class-A transconductors fall into five groups: a. source degeneration; b. cross-coupled transistors; c. load switching; d. current bleeding; e. local gm-boosting feedback. None of them offers an architectural approach to improving linearity.

This paper proposes a modified third-order Butterworth LPF with 3-bit logarithmic bandwidth tuning ability, as depicted in Figure 9. The traditional third-order Butterworth LPF is depicted in Figure 9a, with a bandwidth tuning technique. From the biquad-cascading viewpoint, an exchange of V2+ and V1− is conducted as in Figure 9b; deriving the voltage-current relationships of the two biquads shows that the output current remains the same for both. A quick glance at the right half of Figure 9b reveals that, for a typical transconductor cell (Gm in Figure 9), the differential input signal swing is replaced by a common-mode input signal swing. Since the major nonlinearity is attributed to the transconductor itself, this change circumvents the vulnerability of differential signal processing, at the cost of a new burden on the common-mode signal processing ability. On the one hand, differential input signals experience the same voltage-to-current transformation in the transconductor cells, so the frequency response remains the same after the modification. On the other hand, common-mode input signals experience a different path, which results in larger variations of the common-mode signal swing. In short, the modification linearizes the LPF by trading an increase in common-mode voltage variation for a corresponding reduction of the differential-mode voltage swing. Therefore, the common-mode rejection ratio and the common-mode feedback circuit should be designed iteratively, with an emphasis on common-mode stability, and the traditional sharing technique for the common-mode feedback circuit should be used with caution. Similar input exchange procedures are then executed on Figure 9a, so that Figure 9c shows the modified Gm-C LPF with its individual transconductor cell and switched capacitor array. Simulation results show that the output third-order intercept point (OIP3) of the third-order Butterworth LPF is raised from around 4.5 dBm to 14.5 dBm.
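For reference, a standard Gm-C biquad of the kind cascaded in Figure 9 realizes a second-order low-pass response whose pole frequency scales with the transconductance-to-capacitance ratio, which is why both bias-current (Gm) tuning and capacitor-array tuning move the cutoff. One common generic form (not the paper's exact element values or topology) is:

```latex
H(s) = \frac{\omega_0^2}{s^2 + \dfrac{\omega_0}{Q}\,s + \omega_0^2},
\qquad
\omega_0 = \sqrt{\frac{G_{m1} G_{m2}}{C_1 C_2}},
\qquad
Q = \sqrt{\frac{G_{m1} C_2}{G_{m2} C_1}} .
```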
As to the bandwidth tuning aspect, a typical binary capacitor array cannot set the bandwidth linearly when the operating frequency range covers several decades. For a typical LPF, the −3 dB bandwidth is determined as

f_o = 1 / (2π √(LC)),

where f_o is the −3 dB cutoff frequency, L is the (effective) inductance value and C is the capacitance value; C thus scales as the inverse square of f_o. For example, in order to achieve a 2% step size in bandwidth over two decades of frequency, a 6-bit resolution in bandwidth programmability is required (2^6 > 1/2%), while a resolution higher than 12 bits would be a prerequisite for the capacitor array (100^2/98^2 requires another 6 bits of resolution for capacitance steps below 0.025), which would waste the majority of the chip area. Therefore, the logarithmic bandwidth tuning technique is adopted in the LPF design, as depicted in Figure 9c, since the logarithmic relation shrinks the capacitor array to a small scale. The individual capacitor values are calculated with a Taylor series expansion of the logarithmic tuning law, in terms of the unit capacitor Cunit and the three digital control bits b2/b1/b0. This logarithmic tuning theoretically relaxes the passive array resolution by half. The post-simulated specifications of the third-order Butterworth LPF are listed in Table 6.

Table 6: Post-simulated performance specifications of the third-order Butterworth LPF.
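The area saving of logarithmic tuning is easy to see numerically: with f_o ∝ 1/√C at fixed effective inductance, logarithmically spaced cutoffs map to a geometric sequence of capacitances, realizable with far fewer unit capacitors than a binary array targeting the same frequency grid. A sketch with illustrative numbers:

```python
f_lo, f_hi, bits = 18e6, 25e6, 3

# Logarithmically spaced cutoff frequencies for the 3-bit code b2 b1 b0.
steps = 2 ** bits
ratio = (f_hi / f_lo) ** (1.0 / (steps - 1))
f_targets = [f_lo * ratio ** k for k in range(steps)]

# With f_o = 1/(2*pi*sqrt(L*C)), C scales as 1/f_o^2: a geometric series
# of capacitances, switchable with a small unit-capacitor array.
C_ref = 1.0                               # normalized capacitance at f_lo
for k, f in enumerate(f_targets):
    C = C_ref * (f_lo / f) ** 2
    print(f"code {k:03b}: f_o = {f / 1e6:5.2f} MHz, C/C_ref = {C:.3f}")
```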
DC Offset Cancellation (DCOC)

Unlike the typical DC stabilization circuitry used in the level shifter and LPF, an fT-doubler-based DC offset cancellation circuit is proposed for the sake of reduced parasitics, as depicted in Figure 10. The DC extraction LPF, shown in Figure 3, is composed of R1 and C1, where C1 is an area-efficient MOS capacitor; the −3 dB corner frequency of this LPF is around 100 kHz. The component sizes and post-simulated specifications at the TT corner, 27 °C, are given in Table 7.

Fixed Gain Amplifier (FGA)

Two FGAs are used to meet the gain compensation and stability requirements. Figure 11 illustrates one FGA with three gain stages and an active inductive peaking technique, formed by M7/M8. It should be pointed out that active peaking affects the frequency-dependent group delay and thus worsens data-dependent jitter to some extent; the group delay should therefore be verified carefully in the FGA design. The component sizes and post-simulated specifications at the TT corner, 27 °C, are given in Table 8.

Buffer Amplifier

A low-power transconductance-linearized buffer with a bandwidth extension technique is proposed, as depicted in Figure 12. An internal negative feedback loop is formed by M1/M3/M5/M6, and the output voltage can be expressed as in Equation (10). If we assume that the ideal output resistance is infinite and neglect the parasitic capacitances of the transistors, we can derive a simplified version of Equation (10), which intrinsically demonstrates the voltage-buffer behaviour.

In summary, the presented buffer has threefold advantages. Firstly, the transconductance linearization technique involves cross-coupled CMOS pairs. Secondly, the output load is lowered by the source-follower topology and, hence, the bandwidth is broadened compared to common-source-based buffers. Thirdly, the voltage headroom is Vgs,PMOS + 2 × Vds,NMOS, which suits low-voltage buffer design well. The component sizes and post-simulated specifications at the TT corner, 27 °C, are given in Table 9.

Experimental Results

The proposed analog baseband is implemented in a 130 nm SiGe BiCMOS technology. The chip photograph is given in Figure 13; the chip area is 1400 µm × 800 µm, with a core area of 1000 µm × 300 µm. The extra area is occupied by on-chip measurement structures and on-chip decoupling capacitors. The voltage gain tuning range is −30~35 dB when the DAC reference voltages Vrefh and Vrefl are set between 0.6 V and 0.95 V, as depicted in Figure 14. However, when a dB-linear voltage gain curve is required, Vrefh and Vrefl should be set to 0.68 V and 0.9 V, which reduces the voltage gain range to −28~32 dB. The measured gain resolution is better than 0.1 dB, in accordance with the post-simulated DAC performance. The programmable cut-off frequency response, measured with the voltage gain at around −7 dB, is depicted in Figure 15. The biasing current of the trans-conductors is adjusted to match the LPF bandwidth requirement.
Experimental Results

The proposed analog baseband is implemented in a 130 nm SiGe BiCMOS technology. The chip photograph is given in Figure 13; the chip area is 1400 µm × 800 µm, with a core area of 1000 µm × 300 µm. The extra area is occupied by on-chip measurement structures and on-chip decoupling capacitors. The voltage gain tuning range is −30~35 dB when the DAC reference voltages V_refh and V_refl are set between 0.6 V and 0.95 V, as depicted in Figure 14. However, when a dB-linear voltage gain curve is required, V_refh and V_refl should be set to 0.68 V and 0.9 V, which reduces the voltage gain range to −28~32 dB. The measured gain resolution is better than 0.1 dB, in accordance with the post-simulated DAC performance. The programmable cut-off frequency response is measured as depicted in Figure 15, with the voltage gain set to around −7 dB. The biasing current of the transconductors is adjusted to match the LPF bandwidth requirement. Accordingly, the −3 dB cutoff frequency set via the programmable capacitor array spans 17~24 MHz with a 1 MHz step. The filter order is configured for different requirements in Figure 16, with orders of three and six. The chip adopts a dual power supply of 1.2 V/1.8 V for optimized power consumption. The power consumption is 2.9 mA at 1.8 V and 0.3~2.5 mA at 1.2 V, divided among the sub-blocks as depicted in Figure 17. The measured OP1dB is around −3 dBm with a 20 MHz input signal. A general comparison with related works is summarized in Table 10. The proposed analog baseband exhibits specifications comparable with previous works, while its voltage gain resolution is far better, thanks to the DAC and the exponential voltage generator. In future research, the minimum bit of the LPF will be modified to obtain more desirable results.

Notes to Table 10: a Core area is 0.3 mm² and total chip area is 1.12 mm²; b The authors did not give the port impedance levels; c The noise figure is post-simulated at maximum gain; d The filter order is programmable and thus this parameter varies across filter order settings.
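To connect the measured gain range and the 0.1 dB resolution reported above, here is a rough sketch of the implied gain-control resolution. The paper does not state the DAC bit width, so this derivation is purely illustrative.

```python
import math

# Back-of-the-envelope check: a -30~35 dB tuning range covered in 0.1 dB
# steps needs ceil(log2(span / step)) bits of gain control.
gain_span_db = 35 - (-30)       # 65 dB total tuning range
step_db = 0.1                   # measured gain resolution
codes = gain_span_db / step_db  # 650 distinct gain settings
bits = math.ceil(math.log2(codes))
print(f"{codes:.0f} settings -> >= {bits}-bit gain control")  # >= 10 bits
```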
Conclusions

This paper proposed a reconfigurable analog baseband circuit for LFMCW RADAR receivers with bandwidth/gain/filter-order programmability. The measurement results are acceptable and match the simulations well. Although the majority of the performance specifications are acceptable with respect to recent references, there is still no clear distinction between the closed-loop analog baseband and an open-loop one in terms of power consumption. However, as the operating frequency rises, the closed-loop architecture quickly becomes impractical owing to the upper limits of GBW and power consumption. In future research, the reconfigurable analog baseband should be extended with a bandpass filter or a complex filter in order to reduce the in-band integrated noise for a better receiver detection specification.
2020-05-21T09:11:18.435Z
2020-05-18T00:00:00.000
{ "year": 2020, "sha1": "2e856c61a926cd42d3eefb14be7b53df31f3b2b1", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2079-9292/9/5/831/pdf", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "f944a88472d5cea2740affe710b704b5c80ec6a0", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Computer Science" ] }
252034902
pes2o/s2orc
v3-fos-license
Overexpression of CDCP1 is Associated with Poor Prognosis and Enhanced Immune Checkpoints Expressions in Breast Cancer

CUB-domain containing protein 1 (CDCP1) is a transmembrane protein acting as an effector of SRC family kinases, which play an oncogenic role in multiple human cancers. However, its clinical and immune correlations in breast cancer (BrCa) have not been explored. To define the expression, prognostic value, and potential molecular role of CDCP1 in BrCa, multiple public datasets and an in-house cohort were used. Compared with paratumor tissue, CDCP1 was remarkably upregulated in tumor tissues at both the mRNA and protein levels. In the in-house cohort, CDCP1 protein expression was related to several clinicopathological parameters, including age, ER status, PR status, molecular type, and survival status. Kaplan–Meier analysis and Cox regression analysis showed that CDCP1 is an important prognostic biomarker in BrCa. In addition, enrichment analysis uncovered that CDCP1 is not only involved in multiple oncogenic pathways but also correlated with overexpression of immune checkpoints. Overall, we report that increased expression of CDCP1 is an unfavorable prognostic factor in patients with BrCa. In addition, the correlations between CDCP1 and immune checkpoints provide a novel insight into adjuvant treatment for immune checkpoint blockade via targeting CDCP1.

Introduction

Breast cancer (BrCa) is a common malignancy with the highest morbidity and a terrible mortality among all cancers worldwide [1]. According to the latest statistical data, there will be 290,560 estimated new cases and more than 43,000 estimated deaths in 2022 in the USA [1]. In addition, the morbidity of BrCa has been slowly increasing by approximately 0.5% per year since the mid-2000s, partly due to continued decreases in fertility and an increase in excess body weight [2]. Although the prognosis of BrCa has been persistently improved with the rapid development of comprehensive and personalized therapeutic strategies, including chemotherapy, radiotherapy, targeted therapy, and immunotherapy, not all patients benefit from the established treatment options [3]. Thus, reliable biomarkers are important for the prediction of drug-specific responses and prognosis in BrCa patients. CUB-domain containing protein 1 (CDCP1) encodes a transmembrane protein that contains three extracellular CUB domains and functions as an effector of SRC family kinases [4]. Previous studies have revealed that CDCP1 is oncogenic in several human cancers via regulating tyrosine phosphorylation-dependent cellular functions, thereby promoting tumor invasion and metastasis [5,6]. A growing number of studies uncover the multiple roles of CDCP1 in cancers. CDCP1 is highly expressed in mesenchymal glioma subtypes, where it may promote proneural-mesenchymal transformation [7]. Given that CDCP1 is highly expressed in RAS-driven cancers, targeting a proteolytic neoepitope on CDCP1 is a pan-cancer approach to control RAS-driven cancers [8]. In addition, CDCP1 is a prognostic biomarker in early non-small-cell lung cancer, and its high expression predicts a poor prognosis [9]. Although several studies have preliminarily investigated the oncogenic role of CDCP1 in BrCa [10,11], a systematic analysis based on transcriptomics and its prognostic value in BrCa has not yet been performed. In the current research, we aimed to investigate the expression, prognostic value, and potential molecular role of CDCP1 in BrCa using multiple public datasets and an in-house cohort.
We report that CDCP1 was remarkably upregulated in BrCa tissues and enriched in the HER2-positive and triple-negative subtypes. In addition, high expression of CDCP1 predicted poor prognosis in BrCa. Moreover, we performed a systematic analysis of CDCP1 using the transcriptomic data and found that CDCP1 was not only involved in multiple oncogenic pathways but also correlated with overexpression of immune checkpoints. Overall, we systematically analyzed the role of CDCP1 and emphasize the remarkable correlation between CDCP1 and immune checkpoints in BrCa.

UALCAN Database Analysis

UALCAN (https://ualcan.path.uab.edu/) is an online open-access platform using omics data and clinical information from The Cancer Genome Atlas (TCGA) and the Clinical Proteomic Tumor Analysis Consortium (CPTAC) databases [12]. It can be utilized to analyze the transcript and protein levels of genes of interest between tumor and paratumor tissues and their associations with clinicopathologic features. In the current study, the UALCAN tool was utilized to analyze the transcriptional and protein levels of CDCP1 in BrCa and paratumor tissues and their association with clinical stages and molecular subclasses. All the BrCa cases available in the TCGA and CPTAC subdatabases were included in our study.

Kaplan-Meier Plotter Database Analysis

Kaplan-Meier plotter (https://kmplot.com/analysis/) is a web-based tool integrating gene expression cohorts, clinical information, and survival data [13]. All cancer samples accessible on the Kaplan-Meier plotter were utilized to assess the prognostic value of CDCP1 in BrCa. The mean expression of the probe sets (1554110_at and 218451_at) was used to estimate CDCP1 expression. BrCa patients were divided into low- and high-CDCP1 expression groups according to the median level of CDCP1, with the rest of the settings left at default. Kaplan-Meier survival plots were derived to display all of the cohorts. The log-rank P value, 95 percent confidence interval (95% CI), and hazard ratio (HR) were computed and shown online.

Correlated-Gene Screening and Enrichment Analysis

LinkedOmics (https://www.linkedomics.org/login.php) is a web-based tool used to handle the TCGA data [14]. In this research, LinkedOmics was used to screen genes that correlate with CDCP1 in BrCa. Genes with a correlation coefficient ≥ 0.2 or ≤ −0.2 were deemed candidates. For all parameters, the default choices were utilized. To identify the CDCP1-related biological functions and pathways, all correlated genes were used for enrichment analysis. We downloaded the h.all.v7.4.symbols.gmt and c2.cp.wikipathways.v7.4.symbols.gmt subclasses from the Molecular Signatures Database [15], which were used as the background. The enrichment analysis was conducted using the R package "clusterProfiler." To obtain the gene set enrichment results, the minimum gene set size was set to 5 and the maximum gene set size was set to 5000. The top 5 terms are exhibited in this research.
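As a concrete illustration of the correlated-gene screen described above, here is a minimal sketch assuming a genes-by-samples TCGA-BRCA expression matrix; the file name and layout are illustrative, not from the paper.

```python
import pandas as pd
from scipy import stats

# Assumed input: rows = gene symbols, columns = samples (log-normalized).
expr = pd.read_csv("tcga_brca_expression.csv", index_col=0)

cdcp1 = expr.loc["CDCP1"]
records = []
for gene, values in expr.iterrows():
    if gene == "CDCP1":
        continue
    r, p = stats.pearsonr(cdcp1, values)
    records.append((gene, r, p))

corr = pd.DataFrame(records, columns=["gene", "r", "p"])
# Candidate genes: correlation coefficient >= 0.2 or <= -0.2, as in the text
candidates = corr[corr["r"].abs() >= 0.2]
pcgs = candidates[candidates["r"] > 0]  # positively correlated genes
ncgs = candidates[candidates["r"] < 0]  # negatively correlated genes
print(len(pcgs), "PCGs;", len(ncgs), "NCGs")
```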
Estimation of the Immunological Characteristics of the TME

The RNA-sequencing (RNA-seq) data of BrCa in the TCGA database were obtained from UCSC Xena (https://xenabrowser.net/datapages/). These public data were utilized to investigate the immunological features. First, the ESTIMATE algorithm was applied to estimate tumor purity, the ESTIMATE score, the immune score, and the stromal score [16], and their correlations with CDCP1 expression were assessed. Next, several gene markers related to the tumor microenvironment (TME) as well as immune checkpoints were obtained from a previous publication [17], and their correlations with CDCP1 expression were evaluated. Furthermore, the correlations between CDCP1 expression and 150 immune-related genes, including chemokines, receptors, MHC molecules, immunoinhibitors, and immunostimulators, were assessed. In addition, the CIBERSORT method [18] was used to estimate the abundance of tumor-infiltrating immune cells (TIICs) based on gene expression profiles using the R package IOBR, and the correlations between CDCP1 expression and TIIC abundance were also evaluated.

Collection of BrCa Specimens

The BrCa (Cat. HBre-Duc159Sur-01) tumor tissue microarray (TMA) was purchased from Outdo BioTech (Shanghai, China). A total of 119 tumor samples and 40 paired paratumor samples were included in this research. Detailed clinicopathological and follow-up data were provided by Outdo BioTech. Ethical approval was granted by the Clinical Research Ethics Committee of Outdo Biotech (Shanghai, China).

IHC Staining and Semiquantitative Assessment

Immunohistochemistry (IHC) staining was conducted on the above sections according to standardized procedures.

Acquisition of the GSE173839 Dataset

The GSE173839 dataset included RNA-seq data of BrCa from 71 patients on the durvalumab/olaparib arm, which were downloaded from the Gene Expression Omnibus (https://www.ncbi.nlm.nih.gov/geo/) [20]. We extracted the expression data of CDCP1 and PD-L1, explored the predictive value of CDCP1 for immunotherapy, and compared its predictive value with that of PD-L1.

Statistical Analysis

All statistical analyses were conducted using SPSS 26.0 and R 4.0.2. All data are presented as means ± SDs. Differences between two groups were analyzed by Student's t-test or the Mann-Whitney test. Survival analysis was performed by the log-rank test and Cox regression analysis. Associations between CDCP1 expression and clinicopathological features were assessed using the chi-square test or the corrected chi-square test. Correlation analysis between two variables was performed with the Pearson test. All statistical tests were two-sided, and a P value ≤ 0.05 was considered statistically significant.

CDCP1 Was Upregulated in BrCa Tissues

First, we compared the expression levels of CDCP1 in tumor and paratumor samples using the TCGA, CPTAC, and in-house cohorts. In the TCGA cohort, the transcriptional level of CDCP1 was notably upregulated in BrCa tissues (Figure 1(a)). In addition, CDCP1 protein was also overexpressed in tumor samples in the CPTAC cohort (Figure 1(b)). Moreover, we utilized IHC staining to detect CDCP1 expression in BrCa and paratumor tissues, and the results showed that CDCP1 protein was significantly enhanced in tumor samples (Figures 1(c)-1(d)). Overall, CDCP1 was highly expressed in BrCa tissues and could participate in the oncogenesis of BrCa.

CDCP1 Was Related to the Molecular Type of BrCa

Next, the associations between CDCP1 protein expression and clinicopathological features in BrCa were evaluated in the in-house cohort. As shown in Table 1, the expression of CDCP1 was not related to tumor differentiation, T stage, AJCC stage, or HER2 status. However, CDCP1 was significantly associated with age, N stage, ER status, PR status, molecular type, and survival status.
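As a concrete illustration of the association and group-comparison tests named in the statistics paragraph above (chi-square test and Student's t-test), here is a minimal sketch; the file and column names are assumed for illustration, not taken from the paper.

```python
import pandas as pd
from scipy.stats import chi2_contingency, ttest_ind

# Assumed input: one row per sample, with a binary CDCP1 IHC group
# ("high"/"low"), a tissue label, and clinicopathological columns.
clin = pd.read_csv("inhouse_cohort_clinical.csv")

# Chi-square test: CDCP1 expression group vs. a clinicopathological feature
table = pd.crosstab(clin["cdcp1_group"], clin["er_status"])
chi2, p, dof, _ = chi2_contingency(table)
print(f"CDCP1 vs ER status: chi2={chi2:.2f}, p={p:.3g}")

# Student's t-test: CDCP1 staining score, tumor vs. paratumor samples
tumor = clin.loc[clin["tissue"] == "tumor", "cdcp1_score"]
para = clin.loc[clin["tissue"] == "paratumor", "cdcp1_score"]
t, p = ttest_ind(tumor, para)
print(f"tumor vs paratumor: t={t:.2f}, p={p:.3g}")
```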
We also compared the expression levels of CDCP1 across TNM stages and molecular subtypes in the TCGA, CPTAC, and in-house cohorts. The results showed that CDCP1 did not vary across tumor tissues of different TNM stages (Figures 2(a), 2(c), 2(e)) but was upregulated in the HER2-positive and triple-negative subtypes (Figures 2(b), 2(d), 2(f)). Taken together, the expression of CDCP1 was associated with the molecular type of BrCa.

Overexpression of CDCP1 Predicted Poor Prognosis of BrCa

Given the notable association between CDCP1 expression and survival status, we subsequently investigated the prognostic value of CDCP1 in BrCa. In the Kaplan-Meier plotter database, high transcriptional expression of CDCP1 was remarkably associated with poor relapse-free survival (RFS), overall survival (OS), and distant-metastasis-free survival (DMFS) (Figures 3(a)-3(c)). In addition, in the in-house cohort, CDCP1 was upregulated in the tumor tissues of patients who died during follow-up (Figure 3(d)). Similarly, high CDCP1 protein expression predicted poor OS in the in-house cohort (Figure 3(e)). Furthermore, both univariate and multivariate Cox regression analyses revealed that high expression of CDCP1 was an independent prognostic factor in BrCa patients (Table 2). Collectively, CDCP1 was a significant prognostic biomarker in BrCa.

Analysis of CDCP1-Related Potential Functions in BrCa

Subsequently, we investigated CDCP1-related functions in BrCa. Genes correlated with CDCP1 in BrCa with a correlation coefficient ≥ 0.2 or ≤ −0.2 were deemed candidates (Figures S1A-S1C). Then, hallmark and WikiPathways gene set analyses of the positively correlated genes (PCGs) and negatively correlated genes (NCGs) were conducted, respectively. The PCGs mainly participated in the inflammatory response, TNF-α signaling, hypoxia, epithelial-mesenchymal transition (EMT), and the interferon-γ response (Figure 4(a)) and were involved in focal adhesion, primary focal segmental glomerulosclerosis, the PI3K-AKT signaling pathway, and the AGE-RAGE pathway (Figure 4(b)). The WikiPathways enrichment results are visualized in Figure 4(c). Given that EGFR was a significant gene positively correlated with CDCP1, we validated the correlation between these genes in the in-house cohort, and the result showed that CDCP1 was significantly correlated with EGFR (Figures 4(d)-4(e)). In addition, the enrichment results of the NCGs were scattered and are shown in Figure S2. To sum up, CDCP1 may be related to inflammatory and immune responses via regulating multiple pathways in BrCa.

CDCP1 Was Correlated with Immune Checkpoint Expressions in BrCa

Considering the potential relationship between CDCP1 and the inflammatory and immune response in BrCa, we next explored the correlations between CDCP1 and gene markers of immune-related events. CDCP1 showed no significant correlation with the stromal score, immune score, or ESTIMATE score (Figure 5(a)). In addition, CDCP1 was not correlated with MHC molecules or gene markers of multiple immune cells, but it was positively related to immune checkpoint expressions, including CD274 (PD-L1), CD276 (B7-H3), and VTCN1 (B7-H4) (Figure 5(b)). Moreover, a larger-throughput analysis showed that CDCP1 was not significantly associated with immune-related genes and TIIC abundance (Figures 5(c)-5(d)).
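A hedged sketch of the survival analyses reported in the prognosis subsection above (log-rank test and multivariate Cox regression), using the lifelines package; the table layout, column names, and covariates are assumptions for illustration only.

```python
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.statistics import logrank_test

# Assumed input: one row per patient with OS time (months), an event flag
# (1 = died), and a median-split CDCP1 expression group.
df = pd.read_csv("brca_followup.csv")
df["cdcp1_high"] = (df["cdcp1_group"] == "high").astype(int)

high = df[df["cdcp1_high"] == 1]
low = df[df["cdcp1_high"] == 0]
lr = logrank_test(high["os_months"], low["os_months"],
                  event_observed_A=high["event"],
                  event_observed_B=low["event"])
print(f"log-rank p = {lr.p_value:.3g}")

# Multivariate Cox model: CDCP1 group plus clinicopathological covariates
cph = CoxPHFitter()
cph.fit(df[["os_months", "event", "cdcp1_high", "age", "stage"]],
        duration_col="os_months", event_col="event")
cph.print_summary()  # hazard ratios with 95% CIs
```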
Since CDCP1 was positively correlated with PD-L1, we also examined whether CDCP1 could be a biomarker for immunotherapy in BrCa. The results showed that CDCP1 and PD-L1 were highly expressed in BrCa tissues with a good response (Figure 6(a)), and the predictive value of CDCP1 was even higher than that of PD-L1 in the GSE173839 dataset (Figure 6(b)). Overall, CDCP1 was related to enhanced immune checkpoint expressions and could predict the response to immunotherapy in BrCa.

Discussion

CDCP1 has been revealed to be significantly dysregulated in tumor tissues and to accelerate progression in several malignancies [21]. CDCP1 is eminently located on the cytomembrane and lies at the nexus of critical tumorigenic signaling cascades, including the SRC-PKCδ, PI3K-AKT, WNT, and RAS-ERK axes, the oxidative pentose phosphate pathway, and fatty acid oxidation, making significant functional contributions to tumor progression and development [21]. In addition, CDCP1 has a notable prognostic role in cancer. Ikeda et al. performed a multivariate Cox regression analysis of 200 lung adenocarcinoma patients and revealed that high CDCP1 expression was an independent prognostic factor for OS in lung adenocarcinoma [22]. Dagnino et al. suggested that the circulating serum level of CDCP1 was related to the risk of developing lung cancer, especially in patients with tobacco exposure [23]. However, a systematic analysis of CDCP1 in BrCa has not been performed yet. In this research, we report that CDCP1 was significantly overexpressed in BrCa tissues and highly expressed in the HER2-positive and triple-negative subtypes. Previous research has revealed that CDCP1 is a novel marker of triple-negative breast cancer [24] that promotes tumor progression via reduction of lipid-droplet abundance and stimulation of fatty acid oxidation [25]. In addition, CDCP1 can interact with HER2 and enhance HER2-driven tumorigenesis in BrCa [26]. Thus, the enrichment of CDCP1 might be crucial for the aggressiveness of the HER2-positive subtype. Furthermore, high expression of CDCP1 predicted poor prognosis in BrCa, so it could serve as a novel biomarker for prognostic assessment in BrCa. Moreover, we also performed a systematic analysis of CDCP1 using the transcriptomic data and found that CDCP1 was not only involved in multiple oncogenic pathways but also correlated with overexpression of immune checkpoints. With the rapid development of bioinformatics-assisted tumor immunity studies, immune-correlation analysis has emerged as a hotspot in the field of cancer research. A growing number of novel immune biomarkers have been identified [27-29]. Most immune biomarkers in tumors are correlated with an inflamed immune microenvironment, such as enhanced chemokines, MHC molecules, and effective TIICs, and are also correlated with immune checkpoint expressions [30,31]. In the current research, we found that CDCP1 was not related to the inflamed immune microenvironment but was positively correlated with immune checkpoint expressions, including CD274 (PD-L1), CD276 (B7-H3), and VTCN1 (B7-H4). Thus, CDCP1 might be a crucial regulator contributing to immune evasion via promoting immune checkpoint expression. It has been reported that CDCP1 is crucial for the activation of RAS in cancer [8] and participates in multiple oncogenic pathways, such as EGF signaling [32] and HGF signaling [33]. In addition, we predicted that CDCP1 is involved in TNF-α signaling, hypoxia, EMT, the interferon-γ response, PI3K-AKT signaling, and AGE-RAGE signaling.
Most of these pathways are associated with the regulation of immune checkpoints in cancer. For example, PD-L1 can be upregulated in ZEB1- and miR-200-dependent manners in EMT-activated human breast cancer cells [34]. In addition, the immune checkpoint molecules PD-L1 and B7-H3 were notably upregulated during TGF-β1-induced EMT [35]. Although our current study suggested potential relationships of CDCP1 to these pathways, the lack of confirmation at the molecular biology level remains an unavoidable shortcoming of this study.

Conclusion

In conclusion, we revealed that CDCP1 is highly expressed in BrCa tissues and enriched in the HER2-positive and triple-negative subtypes, and that it also functions as a novel prognostic biomarker in BrCa. In addition, CDCP1 was positively correlated with immune checkpoint expressions in BrCa, and several possibly related pathways were also suggested. Overall, we systematically investigated the role of CDCP1 in BrCa and provide a possible insight into the CDCP1-mediated overexpression of immune checkpoints.

Data Availability

All data supporting the results of this study are shown in this published article and the supplementary documents. In addition, the original omics data for bioinformatics analysis can be obtained from the corresponding platforms.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Figure S1. The genes co-expressed with CDCP1 in BrCa.
2022-09-03T15:21:55.608Z
2022-08-31T00:00:00.000
{ "year": 2022, "sha1": "b2e79ea75eb40f32e01388daba97a3fdc2c10679", "oa_license": "CCBY", "oa_url": "https://downloads.hindawi.com/journals/jo/2022/1469354.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "b0cb6ef6856bb988584a7b947d426d0fd8d01dac", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [] }
233312615
pes2o/s2orc
v3-fos-license
Hippocampal and Hippocampal-Subfield Volumes From Early-Onset Major Depression and Bipolar Disorder to Cognitive Decline

Background: The hippocampus and its subfields (HippSub) are reported to be diminished in patients with Alzheimer's disease (AD), bipolar disorder (BD), and major depressive disorder (MDD). We examined these groups vs. healthy controls (HC) to reveal HippSub alterations between diseases. Methods: We segmented 3T-MRI T2-weighted hippocampal images of 67 HC and 58 BD and MDD patients from the AFFDIS study, and of 137 patients from the DELCODE study assessing cognitive decline, including subjective cognitive decline (SCD), amnestic mild cognitive impairment (aMCI), and AD, via FreeSurfer 6.0 to compare volumes across groups. Results: Groups differed significantly in several HippSub volumes, particularly between patients with AD and mood disorders. In comparison to HC, significantly lower volumes appeared in the aMCI and AD groups in specific subfields. Smaller volumes in the left presubiculum were detected in aMCI and AD patients, differing from the BD group. A significant linear regression was seen between left hippocampal volume and the duration since the first depressive episode. Conclusions: HippSub volume alterations were observed in AD, but not in early-onset MDD and BD, reinforcing the notion of different neural mechanisms in hippocampal degeneration. Moreover, the duration since the first depressive episode was a relevant factor explaining the lower left hippocampal volumes present in these groups.

INTRODUCTION

The human hippocampus is known as a brain structure pivotal for memory formation. It is the plasticity of the hippocampus in forming memory that makes it particularly vulnerable to damage and volume reduction. In Alzheimer's disease (AD), hippocampal volume is reduced due to neurodegeneration, as evidenced in brain MRIs of specific hippocampal subfields (HippSub). A variety of human studies have reported that specific HippSubs such as the cornu ammonis 1-3 (CA1-3), presubiculum, or subiculum are more prone to neurodegenerative processes than others (Hanseeuw et al., 2011; La Joie et al., 2013; Carlesimo et al., 2015; de Flores et al., 2015). The degeneration pattern may depend on the AD stage, as indicated by cognitive performance, varying from subjective cognitive decline (SCD) to dementia. HippSub fields are suitable biological imaging markers of early stages of AD, as the presubiculum-subiculum complex (Carlesimo et al., 2015; Jacobs et al., 2020), CA2-3 (Hanseeuw et al., 2011), or CA1 region (de Flores et al., 2015) are often atrophied. Supporting this idea, recent work indicates that lower subicular volumes in patients with memory impairment are related to the grade of ß-amyloid deposition, independent of the presence of neurodegeneration assessed by fluorodeoxyglucose positron emission tomography (FDG-PET) (Filho et al., 2021). More broadly, another study confirmed the association of ß-amyloid deposition, in conjunction with higher iron content in the medial temporal lobe and subjects' age (even in cognitively unimpaired subjects), with specific HippSub volume decreases, i.e., in the subiculum, CA1/2, and CA3/dentate gyrus (DG) subregions (Foster et al., 2020). ß-amyloid accumulation is a key underlying mechanism in the loss of hippocampal volume across the spectrum of cognitive impairment in preclinical and clinical AD.
Another study suggests that both reduced cerebrospinal fluid (CSF) ß-amyloid 1-42 and elevated CSF tau levels are seen in AD patients who exhibit smaller subiculum volumes (Tardif et al., 2018). This evidence suggests that both tau-based neurodegeneration and ß-amyloid pathology are crucial for HippSub volume loss in patients with AD. Other mechanisms underlying the loss of hippocampal volume might be polygenic, as a higher polygenic risk score for AD was observed in cognitively normal patients in a study by Foo et al. (2020), possibly depicting preclinical AD. Protective mechanisms might also play a role, such as carrying the TREML2 rs3747742-C polymorphism, which seems related to higher CA1 volumes in cognitively normal subjects. The interrelationship between depression and AD is a well-replicated finding (Heser et al., 2013; Donovan et al., 2018). It remains unclear whether depression is a relevant risk factor for AD (Enache et al., 2011) or an early manifestation thereof (Singh-Manoux et al., 2017). Furthermore, there is recent evidence that a decrease in hippocampal volume and functional connectivity is an important feature of major depressive disorder (MDD) associated with cognitive impairment (Genzel et al., 2015; Schmaal et al., 2016). Thus, it is of major interest to compare HippSub volumes, which might give us hints about common underlying mechanisms in affective disorders and AD. In depressive disorders, diverse mechanisms such as the number of depressive episodes, stressful life events, oxidative stress, childhood physical or sexual abuse, or metabolic changes are potential underlying mechanisms of lower HippSub volumes, such as in CA1, the dentate gyrus (DG), or the fimbria (Treadway et al., 2015; Elvsåshagen et al., 2016; Xu et al., 2018; Weissman et al., 2020; Yuan et al., 2020). These studies indicate that in depression the mechanisms of hippocampal volume loss seem to be even broader than in hippocampal degeneration due to AD's spectrum. HippSub loss does not concern only unipolar depression; it is also present in bipolar disorder (BD): the pattern of subfield loss was considerably more extensive than in controls in a recent multicentric study with 1,472 BD patients (Haukvik et al., 2020). Another study indicated a possible common pathogenic mechanism between BD and AD (Berridge, 2013), which is why we added a BD group to our study. BD could result in a HippSub-specific fingerprint, with reduced volume in CA1 (Cao et al., 2017; Haukvik et al., 2020), cornu ammonis 4 (CA4) (Cao et al., 2017; Haukvik et al., 2020), the granule cell layer (GCL) (Cao et al., 2017; Haukvik et al., 2020), the molecular layer (ML) (Cao et al., 2017; Haukvik et al., 2020), the subiculum (Sub) (Cao et al., 2017; Haukvik et al., 2020), the hippocampal-amygdala transition area (Haukvik et al., 2020), and the tail (Cao et al., 2017; Haukvik et al., 2020), depending on the duration and type of BD (Cao et al., 2017), but also on antipsychotic and antiepileptic drug history (Haukvik et al., 2020). On the other hand, it has been suggested that depressive symptoms might reduce age-related hippocampal atrophy and result in larger Sub and CA1 subfields (Szymkowicz et al., 2017). However, most studies showed smaller hippocampal volumes due to ongoing depressive symptoms; hence the controversy about how depression's duration relates to HippSub volumes.
The aforementioned studies show that the mechanisms of hippocampal volume loss might differ even between two distinct affective disorders and AD, and that they are not fully understood. However, we wondered whether there might be a similar pattern of loss in some HippSubs, implying similar mechanisms of degeneration. In the current investigation, we thus aimed [a] to analyze HippSub volumes and hippocampal volumes between cohorts with cognitive impairment, early-onset major depression, and BD, and [b] to identify potential disorder-specific alterations and any shared trajectories of volume decrease in the hippocampus and HippSub in the SCD, aMCI, AD, BD, and MDD groups. Our study covers the spectrum ranging from subjective complaints (SCD) to amnestic mild cognitive impairment (aMCI) and AD. SCD patients do not reveal objective cognitive impairment. Therefore, it is worth seeking novel biomarker tools such as hippocampus and HippSub imaging to diagnose early AD more accurately. In addition, we looked for molecular markers in the CSF, such as ß-amyloid and tau protein, to detect any underlying pathomechanism for HippSub changes in AD; a recent study by Tardif et al. (2018) demonstrated a relevant relationship between HippSub decline and ß-amyloid and tau-based neuropathology in AD. Our study does not focus on specific HippSubs, as there is controversy about which HippSubs are reduced among different diseases. The intersection between lower HippSub volumes and various diseases associated with cognitive dysfunction is inconsistent across studies of AD's spectrum (Hanseeuw et al., 2011; La Joie et al., 2013; Carlesimo et al., 2015; de Flores et al., 2015; Cao et al., 2017; Szymkowicz et al., 2017; Jacobs et al., 2020), MDD (Treadway et al., 2015; Elvsåshagen et al., 2016; Xu et al., 2018; Weissman et al., 2020; Yuan et al., 2020), and BD (Cao et al., 2017; Haukvik et al., 2020). Therefore, we took a more exploratory look at the volumes of various HippSubs. Furthermore, we aimed to discover whether specific factors have a relevant impact on HippSub and hippocampal volumes in certain disease groups, i.e., sex, age, disease duration, age at condition onset, number of depressive episodes, duration since first depression, and intracranial volume. In addition, we expected to uncover potential, not yet investigated relationships between hippocampal and HippSub volumes and the duration since the first occurrence of a depressive episode across all groups; such relationships might be clinically relevant and would support the relevance of very early, effective treatment to impede further hippocampal degeneration that might accompany disease progression. By analyzing early-onset depression and BD patients, we cover a wide spectrum of durations in years between the first episode of depression and the volume assessment, in order to determine how the lifetime duration since the first depressive episode, spanning intermittent depressive and depression-free periods, relates to hippocampal volumetry. Analyzing hippocampal volumes in addition to the HippSubs is an important endeavor, as they involve functional aspects of memory such as pattern separation and recognition in AD (Rizzolo et al., 2021) and stress sensitization (Weissman et al., 2020), as does the number of depressive episodes in prior life (Videbech and Ravnkilde, 2004).

Participants

We compared data of two independent cohorts, 137 patients of the DELCODE study and 58 patients of the AFFDIS study, in this retrospective investigation.
The German DELCODE [Deutsches Zentrum für Neurodegenerative Erkrankungen (DZNE, German Center for Neurodegenerative Diseases) Longitudinal COgnitive impairment and Dementia] study is assessing cognitive decline and dementia in an ongoing, memory clinic-based, observational, longitudinal, multicentric design (Jessen et al., 2018). The AFFDIS study investigated differential neural correlates in AFFective DISorders (AFFDIS) and medication-related changes from 2015 to 2017. For a detailed description of the DELCODE study design and study population, please see Jessen et al. (2018). In short, participants from the DELCODE cohort were grouped into SCD (n = 32; mean age: 72 ± 6.2 years, age range: 60-89 years), amnestic mild cognitive impairment (aMCI) (n = 63; mean age: 72.5 ± 5.9 years, age range: 62-88 years), and AD (n = 42; mean age: 72.9 ± 6.9 years, age range: 61-87 years). The AD patients were selected according to McKhann's criteria (McKhann et al., 2011). Probable AD is diagnosed according to these criteria when the following deficits are present and alternative causes have been excluded: a gradual, not acute, onset of symptoms and worsening cognition resulting in dementia, with a prominent amnestic presentation of cognitive dysfunction, difficulty finding words and solving problems, defective spatial cognition, and impaired reasoning or judgement. We randomly selected the patients from the DELCODE cohort to obtain comparable sizes between the study cohorts (AFFDIS, DELCODE) and their subgroups. Participants were classified as having SCD in the case of self-reported subjective cognitive decline and a neuropsychological test performance better than −1.5 standard deviations (SD) on each subtest of the Consortium to Establish a Registry for Alzheimer's Disease (CERAD) test battery (according to normative data adapted for age, education, and sex) (Jessen et al., 2014, 2018, 2020). According to research criteria (Jessen et al., 2018), participants with aMCI were defined as those whose neuropsychological performance was below −1.5 SD in the delayed recall test of the CERAD word list, which is indicative of episodic memory. For the HC participants from the DELCODE study (pooled HC group: n = 67, age: 54.0 ± 16.7 years, age range: 19-78 years), the same test criteria as for SCD were applied, but subjective cognitive concerns were absent. In a subgroup of patients with cognitive impairment in the DELCODE study [21/32 (66%) SCD, 46/63 (73%) aMCI, and 19/42 (45%) AD patients], cerebrospinal fluid (CSF) biomarkers were assessed. As part of the DELCODE protocol, Tau protein, phosphorylated 181 Tau protein (pTau181), ß-amyloid 42, ß-amyloid 40, and the ratio of ß-amyloid 42/40 were analyzed in CSF, with cut-off values for AD's molecular markers established at the University Hospital in Bonn as previously described (Jessen et al., 2018). AD's molecular signature (AD pathology+) was present if Aß42 or the Aß42/Aß40 ratio in CSF was reduced and Tau protein or pTau181 was elevated, in line with Jack's criteria for biological AD (Jack et al., 2018). Major exclusion criteria were significant sensory impairment, major neurological or psychiatric disorder, current major depressive episode, malignant disease, cerebral ischemia, vitamin B12 deficiency, and any unstable medical condition. A medical history derived from the participant's and caregiver's self-reports was collected and covered depression history (e.g., age of depression onset and number of previous mood episodes, if applicable).
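As a minimal sketch of the −1.5 SD classification rule described above: the normative CERAD values below are placeholders for illustration, not the study's age-, education-, and sex-adjusted norms.

```python
# Classify a participant from the CERAD delayed-recall z-score, following the
# -1.5 SD cut-off described in the text (aMCI if below; SCD if performance is
# normal but subjective complaints are present). Norms here are placeholders.
def cerad_z(raw_score, norm_mean, norm_sd):
    """Standardize a raw CERAD delayed-recall score against normative data."""
    return (raw_score - norm_mean) / norm_sd

def classify(delayed_recall_z, subjective_decline):
    if delayed_recall_z < -1.5:
        return "aMCI (pending clinical criteria)"
    return "SCD" if subjective_decline else "HC"

z = cerad_z(raw_score=4, norm_mean=7.2, norm_sd=1.8)  # placeholder norms
print(round(z, 2), "->", classify(z, subjective_decline=True))
```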
In the AFFDIS cohort, participants with affective disorders were diagnosed with BD (n = 28; age: 54.0 ± 16.7 years, age range: 26-63 years) or MDD (n = 30; age: 38.2 ± 15.9 years, age range: 19-65 years) according to the DSM-5 criteria and were assessed with the Beck Depression Inventory-II (BDI-II), while HC participants were evaluated with the Symptom Checklist-90-R (SCL-90-R) to ensure the absence of psychopathological symptoms. By pooling HC from the two cohorts (DELCODE n = 32, AFFDIS n = 35), the HC group consisted of 67 participants in total. Informed consent was obtained from all participants.

Neuroimaging

We used whole-brain T1-weighted images (1 mm isotropic) and high-resolution T2-weighted images (0.5 × 0.5 × 1.5 mm³) spanning the hippocampus to segment it into its constituent substructures. These structural images were acquired on 3T Siemens MRI scanner systems (TIM Trio, Verio, Skyra, and Prisma) in both the DELCODE and AFFDIS cohorts. We used the established and reliable method, corroborated by longitudinal studies (Brown et al., 2020; Garimella et al., 2020; Xu et al., 2020), of FreeSurfer (Version 6.0, software: http://surfer.nmr.mgh.harvard.edu/) to segment the whole-brain T1-weighted structural images using the default standard recon-all processing stream (Fischl et al., 1999). This step usually takes about 7-10 h per subject image and outputs the segmentation results for both cortical and subcortical structures. Standard preprocessing comprises brain extraction, B1 bias field correction, segmentation of gray and white matter, reconstruction of the gray matter-white matter boundary and pial surfaces, labeling of regions in both the cortex and subcortex, and non-linear co-registration of the individual T1 cortical surface to a spherical atlas to allow comparison across subjects. To obtain the HippSub segmentation, we employed the higher-resolution T2-weighted scans using the revised module available in FreeSurfer 6.0 (Iglesias et al., 2015; Whelan et al., 2016). This step takes ∼45 min per subject and provides labels for the following subregions in both hemispheres: hippocampal tail, subiculum (Sub), CA1, fissure, presubiculum (PreSub), parasubiculum (ParaSub), molecular layer (ML), granule cell layer-molecular layer of the DG, CA3, cornu ammonis 4 (CA4), fimbria, and hippocampus-amygdala transition area (Hata). After this, we used automated scripts (courtesy of P. Saemann of the ENIGMA consortium [https://enigma.ini.usc.edu]) to extract the HippSub volumes of each hemisphere for further statistical analysis. Finally, we created 2D and 3D renderings (Figure 1) to perform a careful quality check (QC) to ensure correct segmentation of all cases before running the statistical analysis. Cases of poorly segmented hippocampus or HippSub were absent.
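A sketch of the two FreeSurfer 6.0 steps described above, wrapped in Python for illustration; the subject ID and file paths are assumptions, and FreeSurfer must be installed with SUBJECTS_DIR set.

```python
import subprocess

subj = "sub-001"  # illustrative subject ID

# 1) Default recon-all stream on the whole-brain T1 (~7-10 h per subject):
#    brain extraction, bias correction, surfaces, cortical/subcortical labels
subprocess.run(["recon-all", "-s", subj, "-i", "T1w.nii.gz", "-all"],
               check=True)

# 2) Hippocampal-subfield segmentation using the high-resolution T2
#    (~45 min per subject; labels tail, Sub, CA1, PreSub, ParaSub, ML,
#    GCL-ML of the DG, CA3, CA4, fimbria, fissure, and Hata per hemisphere)
subprocess.run(["recon-all", "-s", subj,
                "-hippocampal-subfields-T2", "T2hires.nii.gz", "T2"],
               check=True)
```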
Statistical Analysis

We performed ANOVA to detect differences between groups and controls in relevant variables such as sex, age, disease duration, age at condition onset, number of depressive episodes, duration since first depression, and intracranial volume (eTIV). We examined the potential contribution of covariates (age, age at condition onset, and eTIV) to the HippSub volumes, as they showed significant group differences. Only covariates exhibiting relevant group differences among all patients were regarded as significant covariates in our HippSub analysis. To investigate volume differences between all groups, we analyzed HippSub volumes from FreeSurfer using ANOVA with group as a factor (SCD, aMCI, AD, BD, and MDD), with and without HC, and with age and eTIV as covariates. An additional one-way ANOVA was performed only with the cognitive-decline groups as a factor, with or without CSF pathology suggestive of Alzheimer's disease (SCD, aMCI, AD, SCD-CSF pathology+, aMCI-CSF pathology+, and AD-CSF pathology+). A further ANOVA was performed for the AFFDIS patient groups and their AFFDIS control group with eTIV as covariate. To investigate the potential impact of the time since the first depressive episode on volume reduction, we ran a linear regression analysis in all patient groups with a history of depression. The length of time since the first depressive episode is defined as the cumulative amount of time elapsed, including transient periods with no depression, between the first depressive episode and the assessment of hippocampal volume. Statistical analysis was performed via SPSS (Version 25, IBM Inc., Chicago, Illinois, USA). Graphs were constructed with Sigma Plot (Version 11, Sigma Plot, USA). Statistical analyses were two-sided with a p-level of significance ≤ 0.05, including, if applicable, LSD post-hoc tests with Bonferroni correction.

Baseline Characteristics of Groups

We pooled HC (n = 67) from the AFFDIS cohort (n = 35) and the DELCODE cohort (n = 32) to serve as a reference for potential effects of age-related differences in hippocampus and HippSub volumes. Clinical and demographic data of the study participants (n = 195) are presented in Table 1, showing sex, age, onset age of depressive episodes, number of depressive episodes, age at onset of condition, and duration since first depression compared across all groups (HC, SCD, aMCI, AD, BD, and MDD). Past depressive episodes were identified in 7/32 (22%) of SCD, 5/63 (7.9%) of aMCI, and 4/42 (9.5%) of AD patients. The BD and MDD patients revealed a moderate degree of current depressive mood as indexed by the BDI-II (BDI-II scores: BD: 19 ± 12.8; MDD: 25 ± 11.3). Age (F = 68.9, p < 0.005), the condition's onset age (F = 90.7, p < 0.005), the onset age of depressive episodes (F = 4.3, p < 0.005), and the duration of depression (F = 4.4, p < 0.005) differed significantly between groups, whereas sex and the number of depressive episodes did not. The eTIV differed significantly between groups (F = 4.98, p < 0.0005). In the post-hoc analysis, only SCD and HC differed significantly from BD and MDD patients (post-hoc test: p < 0.05), while the other groups did not (LSD post-hoc test: p > 0.05). However, when comparing only the HC of the AFFDIS cohort with BD and MDD patients, we detected no significant eTIV differences (LSD post-hoc test: p > 0.05). Thus, the eTIV difference was driven by the SCD group compared with BD and MDD patients. Overall, age and eTIV showed relevant group differences among all patients and were considered relevant covariates for our HippSub analysis as well as for the linear regression of hippocampus and HippSub volumes in patients with and without controls.
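To make the covariate-adjusted group comparison concrete, here is a minimal sketch of an ANOVA on a subfield volume with age and eTIV as covariates, implemented via OLS; the file name, column names, and the choice of subfield are assumptions for illustration.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Assumed input: one row per participant, with group label, age, eTIV, and
# FreeSurfer volumes in long-named columns.
df = pd.read_csv("hippsub_volumes.csv")

# ANOVA with group as factor and age/eTIV as covariates (ANCOVA-style)
model = smf.ols("left_presubiculum ~ C(group) + age + etiv", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # group effect adjusted for covariates
```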
Comparison of Hippocampal Subfield Volumes Between Cognitive Decline and Affective Disease Groups Without Controls

ANOVA revealed a significant difference (F = 2.24, p < 0.0005; see Table 2 for the neuroimaging data of patients and controls) in hippocampus and HippSub volumes between all groups, including cognitive decline (SCD, aMCI, and AD) and early-onset mood conditions (MDD and BD). The hippocampus in both hemispheres exhibited smaller volumes in AD patients, but not in MDD and BD patients (LSD post-hoc test: p < 0.0005; Figure 2A). Bilateral CA1, CA4, DG, ML, Sub, fimbria, and the left tail revealed the same pattern of diminished volume in AD, but not in the MDD and BD groups (LSD post-hoc test: p < 0.05, Figures 2B,C). Significantly lower volumes in the left PreSub were observed in aMCI and AD patients when compared to BD (LSD post-hoc test: p < 0.005, Figures 2B,C). No differences in hippocampal volumes between AD and BD or MDD patients were identified in the bilateral CA3, ParaSub, fissure, Hata, and right PreSub regions (Figures 2B,C).

Hippocampal Subfield Volumes in Cognitive Decline Groups

Considering the hippocampus, the aMCI and AD (but not SCD) groups presented significantly smaller volumes bilaterally in comparison to HC (post-hoc tests: p < 0.05, Figure 2A). Moreover, in the aMCI and AD groups, but not in the SCD group, we detected lower volumes in the left CA1, left CA4, left DG, left tail, left PreSub, and bilateral Sub when compared to HC (LSD post-hoc test: p < 0.05, Figures 2B,C). In the right CA1, right CA4, right DG, right tail, right PreSub, bilateral CA3, bilateral ParaSub, bilateral fimbria, and bilateral fissure regions (Figures 2B,C), we found no HippSub volume differences in the aMCI and SCD groups compared to HC. In additional subgroup analyses, we investigated subjects presenting neuropathological abnormalities typical of AD. Among the DELCODE patients, the CSF pathology suggested AD in 6/32 (19%) patients with SCD, 20/63 (32%) with aMCI, and 16/42 (38%) with AD. When we compared subgroups with a positive AD pathology to those without, we detected no significant between-group differences in HippSub volumes (all p > 0.05, data not shown).

Hippocampal Subfield Volumes in Affective Disorder Groups

No significant differences in hippocampal and HippSub volumes were detected when we compared the MDD and BD groups to HC (p > 0.05).

Hippocampal-Subfield Volumes and Duration of Depression

To explore the role that the duration in years since depression onset plays on hippocampal volume in each hemisphere, we conducted a linear regression analysis and noted that the left, but not the right, hippocampal volume was significantly associated with the time since the first depressive episode (left hippocampus: F = 6.5, p < 0.05; Figure 3). We explored this effect further in the HippSub volumes and observed no relevant association between the time since the first depressive episode and the left Sub, left CA1, left PreSub, left DG, left CA4, left fimbria, right tail, or right fimbria.
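A hedged sketch of the linear regression reported above (left hippocampal volume vs. years since the first depressive episode), with the covariates named in the statistics section included; the file and column names are illustrative, not from the study.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Assumed input: one row per participant, restricted to those with a history
# of depression, with left hippocampal volume, years since first depressive
# episode, age, and eTIV.
df = pd.read_csv("hippsub_volumes.csv")
dep = df[df["years_since_first_depression"].notna()]

fit = smf.ols("left_hippocampus ~ years_since_first_depression + age + etiv",
              data=dep).fit()
print(fit.summary())  # a negative slope would mirror the reported effect
```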
DISCUSSION

The main findings of our investigation are that, based on MRI data, hippocampal and specific HippSub volumes differed between major cognitive decline due to possible AD and early-onset unipolar and bipolar disorders. Smaller hippocampus and most HippSub volumes were detected almost exclusively in the aMCI and AD groups, while the SCD, BD, and MDD groups revealed no significantly smaller volumes in relation to HC. Early markers of possible neurodegeneration can therefore be seen predominantly in the left CA1, CA4, DG, tail, and PreSub, and the bilateral Sub regions, since significantly smaller volumes were found in the aMCI and AD groups, but not in early-onset mood disorders (MDD, BD). Of note, the duration in years since the first depressive episode was significantly related to the volume of the left hippocampus in all patient groups.

Based on the present study, the right CA1, CA4, DG, tail, and PreSub, and the bilateral CA3, ParaSub, fimbria, and fissure regions seem more resilient against neurodegeneration in aMCI and SCD patients. These findings may partially reflect the existing variability at certain stages of cognitive decline, as other studies have already demonstrated a volume decrease in MCI patients (Zhao et al., 2019). A unique finding in this investigation was the significant difference seen between aMCI and BD in the left PreSub region, which could function as a suitable imaging marker. If replicated, smaller volumes in the left PreSub might prove to be among the earliest indications of hippocampal-volume differences due to cognitive impairment, distinct from those seen in bipolar mood disorders. There is evidence that both ß-amyloid and tau pathology assessed via CSF are relevant factors in lower HippSub volumes across AD's cognitive spectrum (Tardif et al., 2018; Filho et al., 2021). As we failed to detect significant volume differences between patients with cognitive impairment with and without AD-typical CSF pathology, there might be additional mechanisms contributing to HippSub decline in our patients. Nevertheless, we cannot exclude the possibility of insufficient power to detect differences, considering the relatively small subgroup samples. Further studies with larger patient cohorts are needed to differentiate the proposed underlying mechanisms of AD in HippSub volume loss. The aforementioned literature suggests other mechanisms of HippSub degeneration in the AD spectrum, such as genes, iron accumulation, or even neuroprotective factors (Foo et al., 2020; Foster et al., 2020; Wang et al., 2020). Some of these factors may be partly responsible for the PreSub volume loss we detected in AD and aMCI vs. BD patients. Overall, we identified neither smaller hippocampal nor HippSub volumes in the early-onset mood disorder groups. That may be attributable to both groups' similar age and similar severity of depressive symptoms. Furthermore, another explanation for the lack of relevant HippSub differences in the mood disorder groups might be that structural differences between MDD and BD patients are likely less evident in the hippocampus or HippSub than in other brain regions such as the thalamus, the dorsolateral and medial prefrontal cortex, and parietal regions (Schmaal et al., 2020). The lack of smaller HippSub volumes in MDD and BD might also be due to the fact that the AFFDIS cohort recruited patients undergoing antidepressant therapy. As shown recently in a study by Han et al. (2016), drug-naïve MDD patients revealed a pattern of smaller volumes in Sub, CA2-4, and DG in comparison to healthy controls. On the other hand, other factors such as early-life stress or the rs1360780 polymorphism of the FKBP5 gene (referring to the hypothalamic-pituitary-adrenal axis), which is associated with some smaller HippSub volumes (Mikolas et al., 2019), might also have introduced variation in our sample (data not available). Genetic architecture with different genetic loci (Hibar et al., 2017) could have a major influence on disease-specific HippSub volumes, which might explain the absence of HippSub volume reduction observed in the groups with mood disorders. In contrast to our findings, BD patients have also demonstrated reduced hippocampal CA1 and GCL volumes. Smaller volumes have been observed in the PreSub and Sub regions in a subgroup of BD patients (Janiri et al., 2019), evidence with which our left PreSub findings, in contrast to aMCI, appear to be in line.
One factor that might explain why our BD patients revealed no major hippocampal volume reductions is that, differently from ours, that cohort was heterogeneous and not characterized by a predominantly depressive subtype. Our results, however, support the findings from a recent investigation showing no smaller volumes in MDD patients via high-resolution 7-Tesla MRI (Tannous et al., 2020). As only HippSub volumes and not shape alterations were assessed in this study, we cannot determine whether HippSub deformations coinciding with unaltered volumes were present, as has been reported in MDD (Ballmaier et al., 2008; Cole et al., 2010). Our findings suggest that depression's duration has a significant impact on left hippocampal volume, indicating that the time since the first depressive episode plays an important role in hippocampal degeneration. This concurs with the knowledge that lower hippocampal volumes are associated with a poorer clinical outcome and more depressive episodes (Videbech and Ravnkilde, 2004; MacQueen and Frodl, 2011). However, when further exploring specific HippSub volumes, we observed no relationship between the duration since the first depressive episode and HippSub volumes. Further studies with larger cohorts should be conducted to identify whether the duration since the manifestation of depression affects HippSub volumes in a more relevant manner. The limitations of our study concern the sample sizes of the groups and subgroups, restricting additional conclusions in terms of clinical representation, applicability, and neurobiological foundations. For instance, cognitive assessments comparable to DELCODE's were not available in the AFFDIS cohort, with which we could have additionally investigated whether cognitive impairment across disorders relates to hippocampus or HippSub volume decline. A further potential limitation is the age difference between groups in the two cohorts, with younger patients in the AFFDIS than in the DELCODE cohort. Our analyses were controlled for age and eTIV (as covariates), but it would have been interesting to see whether differences across patient groups would indeed hold when comparing older participants in the mood disorder groups. Future studies addressing this aspect should also consider the potential risk of misclassifying participants with late-onset depression, since depressive episodes can be initial manifestations of neurodegeneration. However, as molecular markers have not yet been assessed in patients with affective disorders or in some patients with cognitive decline and possible AD, no general conclusions about the molecular mechanisms of neurodegeneration can be drawn for our patient groups. Cognitive decline in early-onset depression is usually not clinically associated with a neurodegenerative process, and it is often less severe (Jamieson et al., 2019) and affects specific cognitive subdomains such as language, memory, and cognitive flexibility, as recently reported (Ang et al., 2020). Thus, the age at manifestation of depression is clinically relevant for the pattern and severity of cognitive decline, while also being a risk factor for later cognitive decline (Brzezińska et al., 2020). The increasing severity of cognitive decline observed in late-onset compared to early-onset depression might thus be accompanied by decreasing hippocampal and HippSub volumes.
A further limitation is that our findings are based on cross-sectional structural imaging data and not longitudinal comparisons, through which more insight into intraindividual changes in HippSub volumes could be gained. Further studies combining functional data could better elucidate the significance of neuropathological processes in the HippSub for cognitive impairment. Lastly, potential influences of treatment history on hippocampal and HippSub volumes could not be determined in the absence of comparable information across disorders.
Our study showed that hippocampus and HippSub volumes differ between cognitive decline due to possible AD and early-onset mood disorders. The left PreSub is a structure apparently affected in aMCI and AD subjects, but not in BD patients. This sheds new light on a possible marker differentiating correlates of neurodegeneration due to minor and major cognitive decline from those of BD. Conversely, we detected no relevant field and subfield volume decline in the BD and MDD groups. Most strikingly, we found that the time since the first depressive episode was negatively associated with left hippocampal volume across all disorder groups. This time effect is a potentially important hallmark supporting hippocampal volume reduction as a continuum extending from mood disorders through cognitive deterioration to AD. This finding may advance the comprehension of the relationship between depression and AD. Sophisticated tools such as machine learning, used to identify multivariate patterns in much larger groups, should take this feature into account.
DATA AVAILABILITY STATEMENT
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
ETHICS STATEMENT
The studies involving human participants were reviewed and approved by the responsible ethics committees. The DELCODE study protocol was coordinated by the ethics committee of the medical faculty of the University of Bonn and approved by all participating sites' ethics committees [Berlin (Charité, University Medicine), Bonn, Cologne, Göttingen, Magdeburg, Munich (Ludwig-Maximilians-University), Rostock, and Tübingen]. The AFFDIS study protocol was approved by the ethics committee of the medical faculty of the University of Goettingen. The patients/participants provided their written informed consent to participate in this study. Written informed consent was obtained from the individual(s) for the publication of any potentially identifiable images or data included in this article.
FINANCIAL DISCLOSURE STATEMENT
ASc received funding from Novartis and Diagnostik Netz BB (travel and speaker honoraria) and research support from the German Federal Ministry of Education and Research (BMBF), Actelion, the Helmholtz Foundation, and the Michael J. Fox Foundation. CB received honoraria as a diagnostic consultant for Boehringer Ingelheim. DJ has obtained funding for travel from Pfizer GmbH. IK has obtained funding from the German ministry of economic cooperation and development. JP received research support for travel or speaker honoraria from Axon, CHDI, and UK DRI, and research funding from DFG, BMBF, and UK DRI. JW has obtained support from the Eli Lilly Advisory Board, Pfizer, MSD, and med Update GmbH (travel and speaker honoraria), and research support from the BMBF. MH received funding for research support from the DFG. OP has obtained support for travel or speaker honoraria from Schwabe.
He has received funding from Eli Lilly, Lundbeck, Genentech, Biogen, Roche, Pharmatrophix, Novartis, Janssen, and Probiodrug. ST has received support (travel or speaker honoraria) from MSD Sharp and Dohme GmbH and the quality circle for physicians in Kühlungsborn, and research support from ROCHE, Roche Pharma AG, Lilly Deutschland GmbH, BMBF, and the Ministry of Economics of the State of Mecklenburg-Western Pomerania.
AUTHOR CONTRIBUTIONS
RG-M designed the study. NH and RG-M wrote the manuscript. ASi, NH, and RG-M analyzed the data. AC, ASi, ASc, ASp, BE-W, CB, CM, DJ, ED, ES, FB, FJ, JH, JP, JW, KB, KF, KS, LD, MM, MT, MH, NR, OP, PD, RG-M, RV, and ST contributed to data collection. All authors critically revised the manuscript. All authors made significant intellectual contributions, and reviewed and accepted this work before submission.
Intimate Partner Violence and South Asian Families in the United States: Towards Bringing Visibility to an Unrecognized Population
Intimate partner violence (IPV) is a global public health problem impacting individuals, families, and communities. Because IPV occurs in a broader environment within which these families are nested, extending to their communities and neighborhoods, research must examine the individual- and societal-level factors critical to bringing about behavior change. Owing to a lack of theoretical underpinnings and of established predictors in relation to South Asian immigrant women and the neighborhoods they live in, IPV remains an under-reported and stigmatized problem. There is a dearth of understanding that could inductively and deductively build the theoretical frame of reference needed to examine and assess intimate partner violence within immigrant communities. This paper uncovers the theoretical underpinnings needed to comprehend IPV in light of immigration status, neighborhoods and communities, and economic status, and draws on theory to assess the social work implications of IPV. To conclude, the paper discusses existing policies, prevention strategies, and interventions.
and wanting to create a flawless public image to their community. Numerous cases go underreported as a result of cultural norms and of saving oneself from a tarnished public image (Marianne & Yoshioka, 2003).
Economics of Intimate Partner Violence: Social Policies
Survivors of IPV lost nearly eight million days of paid work because of the violence inflicted upon them by current or former husbands, boyfriends, and dates. This loss is the equivalent of more than 32,000 full-time jobs and almost 5.6 million days of household productivity each year as a result of IPV (Centers for Disease Control and Prevention and National Centers for Injury Prevention and Control, 2003). Based on the National Violence Against Women Survey (2018), which looked solely at health care costs, in 1995 nearly two million IPV-related injuries (including physical assault and sexual assault) were inflicted on women aged 18 or older. Of these, 550,000 required medical attention, a quarter of which required admission to the hospital. An additional 18.5 million mental health care visits occurred after cases of physical assault, sexual assault, or stalking. These health care interventions cost 4.1 billion dollars in 1995. An additional 1.8 billion dollars per year of lost work and productivity, in the household and in the workplace, were incurred (Centers for Disease Control and Prevention and National Centers for Injury Prevention and Control, 2003). Among the social policies authorized in 1994 regarding IPV was the Violence Against Women Act (VAWA), a legislative milestone aimed at protecting women from violent crime and helping to create stronger criminal justice and community-based responses to IPV, sexual assault, dating violence, and stalking. VAWA I and II have significant shortcomings, such as the exclusion of unmarried and undocumented immigrant women, the lack of implementation of U-Visas, and a still-high evidentiary burden (Balram, 2005). These complexities add to the burden on immigrant women and compel them to stay in the abusive relationship with the batterer, continuing its impact on their children, who are exposed to the cycle of violence. In 2013, both houses of Congress passed the reauthorization of the Violence Against Women Act (VAWA).
The Violence Against Women Reauthorization Act of 2013 includes amendments that strengthen protections for non-citizen victims of IPV and sexual violence. These amendments were strongly opposed by Republicans and conservatives, who resist framing IPV in families as a crime and want to dictate whether the crime of IPV should entail rehabilitation or incarceration. They seem to consider only the symptomatology rather than studying the global public health problem of IPV and its effects on families and children. Such policy decisions and indicators affect society, since many immigrants from all across the world contribute to the US economy. In recent years the immigration of women has increased (Foner, 2001), which changes the socio-economic costs of women affected by IPV and of its interventions. The net cost of the prevalence of IPV significantly impacts the United States economy in terms of labor, injury prevention, and human capital costs. IPV is also due in part to a complex array of factors. These factors include gender inequality and social norms around masculinity; other social determinants such as economic inequality; behavioral problems (such as harmful use of alcohol); and other types of violence (such as child maltreatment) (WHO, 2010).
Underpinning of Intimate Partner Violence in Social Work
The underpinning of IPV is multidisciplinary in its approach. Each discipline studying the problem of IPV describes it differently. The American Psychological Association (APA, 1996) Taskforce on Violence and the Family defined IPV as a pattern of abusive behaviors, including a wide range of physical, sexual, and psychological maltreatment, used by one person in an intimate relationship against another to gain power unfairly or to maintain that person's misuse of power, control, and authority. Psychological studies showed that when one form of violence was found in a family, other forms were more likely to occur, and that violence in families has a direct relationship with community violence and other forms of aggression and gender-based violence. Lawyers, in their profession, make distinctions among different kinds of assault (Walker, 1999). Danis and Lockhart (2003) argue that the social work profession earned a reputation from the late 1970s through the early 1990s as uncaring, uninformed, and unhelpful to battered women. Social workers were faulted for blaming the victim, failing to recognize abuse as a problem, and failing to make appropriate interventions and referrals. They sought to have the social work profession address IPV by being grounded in theory and generating evidence-based practices.
Theoretical Relevance of Intimate Partner Violence
No single theory can explain the problem of IPV in families and its effects on children. Therefore, the ecological systems perspective together with developmental theory is used for the purpose of this study. The ecosystems-development theory is integrated with other potentially relevant theories, namely Attachment Theory, Trauma Theory, and Social Learning Theory. Together these can explain IPV as it exists and the different effects of children's exposure to IPV in families. Bronfenbrenner's (1979) ecological systems theory emphasizes that families are affected by many interconnected and nested sets of systems.
He identified five specific levels: the chronosystem (changes in the characteristics of a person and of the environment over the life course, for example in family structure, socioeconomic status, employment, or place of residence), the macrosystem (society), the exosystem (community), the mesosystem (family), and the microsystem (the child). Interactions of all these systems substantially influence the increase or decrease of the risk of child maltreatment in IPV families (Little & Kantor, 2002). Developmental theory then holds that the effect of IPV on the child is jointly determined by the interaction between the nature of the IPV and the developmental milestones of the child (Margolin, 2005). Trauma Theory recognizes that the personal loss and threat associated with IPV create a highly stressful environment for children (DeBellis, 2001) that could lead to post-traumatic stress symptoms and, in turn, pathways to other developmental problems (Margolin, 2005). Social Learning Theory posits that behavior is acquired through observation and modeling processes within the family of origin (Bandura, 1986); it has been used to explain the intergenerational transmission of violent behaviors (McCloskey & Lichter, 2003). Children's witnessing of IPV includes not only watching the violence occur but also hearing the altercations and observing the aftermath, such as seeing bruises/marks and the destruction of property, accompanying the victim to seek medical attention, or observing police intervention or arrests. It is therefore not surprising that exposure to IPV has been consistently reported to have detrimental effects on children and adolescents (Carpenter & Stacks, 2009). These include cognitive impairments and negative health outcomes, such as poorer school performance, as well as changes in physiological outcomes, such as decreased heart rate and increased salivary cortisol (Saltzman, Holden, & Holahan, 2005). Theories of women's responses to IPV have gone through reformulations from historical to contemporary perspectives (Kelly, 2011), along four aspects: personal attributes, definition of response, mental and emotional reaction, and coping style. First, the view of personal attributes has evolved from that of a weak, helpless victim to that of a resilient survivor. Second, defining a response to the violence has moved from the decision to stay or leave to the ongoing process of seeking safety for self and children. Third, regarding mental and emotional reactions, the earlier emphasis on psychological dysfunction has shifted to a focus on complex internal and external factors; these could also reflect changing socioeconomic conditions impacting IPV families and survivors. Last, the coping style of survivors has been reframed from passive and static to active and adaptive. IPV content for the social work foundation curriculum has been suggested by Danis (2002), who addresses issues of intervention related to IPV within families; the categories of intervention classified there were to be incorporated into the curriculum. Eventually, the integration of IPV with theory and practice was reflected in the purpose of the educational policy and accreditation standards of the Council on Social Work Education (CSWE, 2008; NASW, 2008) as a commitment to diversity including age, class, color, culture, disability, ethnicity, gender, and gender identity and expression.
Immigration status, political ideology, race, religion, sex, and sexual orientation are likewise reflected in its learning environment.
Interventions With Intimate Partner Violence
These social service interventions emerge from the Duluth Model's power and control wheel (1987) (see Appendix A), with its driving concepts of a strengths-based perspective and the self-determination of clients. The importance of using a strengths perspective in family violence work was emphasized by Bell (2003) in her qualitative study on secondary trauma with counselors of battered women. Bell explains that settings embodying a strengths perspective must acknowledge people's strengths and their expert autonomy over their own experiences of violence. She emphasized relationships of collaboration, rather than the relationships of hierarchical power stressed by power theorists, and then assisting clients in identifying and building those strengths. Intimate partner violence work based on this model upholds client self-determination, especially through the strengths perspective and an empowerment approach central to the feminist theoretical framework. This requires social workers to walk with clients through their process, offering social support, guidance, and a safe environment in which to explore options laden with emotional meaning and practical implications. Today, social work curriculum, theory, and practice divide intervention into four main areas. The first is intimate partner violence screening protocols; the second is specialized assessment of the risk IPV poses to children; the third is the application of specialized IPV assessment to case decision-making; and the fourth is working with families affected by IPV. The IPV curriculum encourages professionals in social work to learn critical thinking skills and values clarification, reduces the tendency towards victim blaming, and helps examine the role of culture in collectivistic versus individualistic communities (Bent-Goodley, 2007; Black, Weisz, & Bennett, 2010; Danis, 2002; Danis & Lockhart, 2003). These considerations clearly impact family formation, relationship decisions (staying with or leaving the abuser, or staying in the same home with the batterer for the economic needs of the children), family unification values, and the maintenance of a public image for the community, and they are now incorporated in the curriculum.
Advantages and Disadvantages to Women in IPV
Having stated the main intervention plans for women in the United States, it must be noted that immigration status plays a key role in access to these IPV services, highlighting the dearth of interventions that would cater to the needs of immigrant women survivors. Several studies claim battered immigrant women are particularly vulnerable for the following reasons: (1) cultural perceptions of intimate partner violence that call on them to subordinate their individual needs to the interests of family or community; (2) their limited access to the outside world; and (3) systems and services that do not provide language access or outreach to immigrant communities and effectively silence immigrant victims (Ammar et al., 2005; Orloff & Kaguyutan, 2002; Raj & Silverman, 2003; Dutton et al., 2000; Uekert et al., 2006). Social isolation, immigration status, language barriers, sociocultural factors, economic insecurity, gender roles, and the justification and acceptability of abuse are key factors in immigrant women's vulnerability and in their remaining in an abusive relationship with the batterer.
Reporting may lead to deportation, and with an undocumented immigration status the problems continue to compound (Raj & Silverman, 2002). Disconcertingly, research has found that in numerous abusive relationships involving immigrant women, 72% of citizen and LPR spouses do not file immigration papers for their wives. The South Asian immigrant women population, in particular, comprises six countries, each with its own cultural understanding of intimate partner violence: India, Pakistan, Bhutan, Bangladesh, Nepal, and Sri Lanka. There is a lack of emphasis on culturally diverse approaches in theory, research, and practice, which suggests that the power and control wheel may not be the complete explanation for addressing the existing problems of IPV. The power and control wheel model (1987) describes a culture of violence (for example, natural order, objectification, submission, force, and coercion) that does not fit the culture of New Zealand Samoans, a culture that does experience IPV. Researchers may overlook or misunderstand ethnic differences by ignoring the presence of different ethnic groups within their samples, by not including varied ethnic groups, or by having minimal sample sizes (Kasturirangan, Krishnan, & Riger, 2004). Therefore, the Duluth Model's power and control wheel (1987) must be reviewed against the needs of IPV survivors from different ethnicities and made more culturally sensitive.
Intervention and Prevention of Intimate Partner Violence: Policy Changes
Interventions into intimate partner violence (IPV) are notoriously unsuccessful (Dutton, 2006). There is a clear power inequity among the family's competing needs, rights, and interests. The power theorists explained that the roots of violence stem not only from within the culture, but also from within the family structure (Straus, 1976). Family conflict, social acceptance of violence, and gender inequality are hypothesized to interact and to complicate intervening in cases of partner abuse, which may then result in the continuation of family violence. The use of violence to address family conflicts is learned during childhood by either witnessing or experiencing physical abuse in one's own family (Straus, 1977). This ecological framework is used by the World Health Organization (WHO, 2010) to describe violence as a global public health problem. The framework integrates research findings and theories from several disciplines, including feminist theory, into an explanatory framework of the origins of gender-based IPV. Within the ecological framework, IPV is understood as a multifaceted phenomenon that is the result of a dynamic interplay among individual, relationship, community, and societal factors that influence an individual's risk of perpetrating or becoming a victim of violence. Therefore, every individual plays a key role in interacting with both environmental stressors and the family's internal and external complexities. Formal networks, such as government through policy, support families with protection from abusive relationships and further impact their children's development. Community and religious faith-based institutions also serve as networks for help-seeking and as safety nets, and they even come to act as coping mechanisms for the individuals who use them. They carry both risk and protective factors for the survivors and the family.
Pre-existing Policies and Disadvantages That Address the U Visa Under the VAWA
The Department of Homeland Security (DHS) recognizes that immigrant victims of IPV may remain in an abusive relationship because their immigration status is often tied to the abuser. The Violence Against Women Act (VAWA) in 1994 created a self-petitioning process that removes control from the abuser and allows the victim to submit his or her own petition for permanent residence without the abuser's knowledge or consent. Research suggests a major flaw of VAWA II: it does not afford protection to all battered immigrant women. VAWA II specifically provides relief for married women and widows, or those who were divorced within the past two years due to incidences of IPV. However, VAWA II does not provide relief to unmarried battered immigrant women and their children. Thus, battered immigrant women who are not legally married, including those who are stalked, cannot obtain any protection under VAWA II (Orloff et al., 2003; Stoltz, 2004; Balram, 2005). In addition, VAWA II does not provide protection to undocumented battered women or to unmarried immigrant women pregnant by their boyfriends. Married battered immigrant women, like all survivors of IPV, do not retain any control over their lives. Consequently, battered immigrant women do not have the free will to remain in their homelands while their abusive husbands choose to move to the United States. While many immigrants may still be unaware of the U Visa, USCIS has, since 2009, put a ceiling on the number available annually at 10,000, and this year USCIS looks on course to reach that figure easily, having already received 3,331 applications in the first quarter. Congress created the U-Visa to encourage immigrants to come forward with information relating to crimes. The U-Visa is available for up to 10,000 individuals per year who cooperate with the investigation or prosecution of perpetrators of criminal offenses (Balram, 2005; Stoltz, 2004).
Violence Against Women Reauthorization Act of 2013
The Violence Against Women Reauthorization Act (2013) includes amendments that strengthen protections for non-citizen victims of IPV and sexual violence. Raj and Silverman (2003) state that South Asian immigrant women, particularly those with non-citizen immigration status, are more likely to be at risk of IPV. Here are some key changes affecting immigrants: Stalking was added to the list of crimes covered under the U nonimmigrant status, commonly known as the U Visa. Crimes already on the list include abduction, blackmail, incest, rape, sexual assault, and unlawful criminal restraint, among others. The temporary U Visa, which was approved by Congress in 2006, allows immigrants who are victims of crimes to remain in the U.S. while assisting law enforcement officers in prosecuting the offender. Immigrants with U-Visas are eventually eligible to apply for permanent residency and later U.S. citizenship. On February 28, 2013, the House of Representatives passed the version of VAWA that included additional coverage for immigrant, gay, lesbian, transgender, and bisexual individuals and Native American victims, after some House Republicans attempted to pass their own version that excluded LGBT and minority groups. It passed by more than the required margin, with majority Republican opposition.
It can therefore be concluded that congressional liberals support the centrality of the family while also respecting individual freedom and choices, whereas conservative Republicans continued to campaign against the 2013 reauthorization of VAWA. Conservatives, on this reading, are antagonistic toward such family formations, catering to independence from the family rather than focusing on family unification. With the newer considerations arising from the COVID-19 pandemic, there are serious implications of IPV for families. In the United States, the contact rate of members affected by IPV increased by 14% from April 2019 to April 2020 (National Domestic Violence Hotline [NDVH], 2020). During the fall of 2020, a few research studies suggested that not much had changed between liberals and conservatives in opinions on perceived vulnerability to COVID threats and willingness to comply with public health prevention strategies. Conservatives were more likely to advocate for personal responsibility and thus see prevention as a strategy for survivors. Liberals eased the cultural framing of domestic violence as a private family issue and supported legal and social sanctions to protect survivors rather than perpetrators. According to new information from the House, the VAWA Reauthorization Act of 2022 is expected to expand prevention and protection efforts for survivors, including those from underserved communities, with increased resources and training for law enforcement and the judicial system (House, 2022). Further, the long-stalled reauthorization of VAWA, a federal law which protects survivors of domestic abuse and sexual violence, was included in the $1.5 trillion federal spending package that moved through Congress in March 2022 (NPR, 2022). To conclude, combating domestic violence, sexual assault, dating violence, and stalking within our communities must not be a liberal or conservative issue. It must rather be a matter of humanity, justice, and compassion.
Recommendations: Evidence-Based Interventions
The provision of appropriate interventions is determined by the causes, or correlates, of the given social problem. There is a gap between IPV interventions and policies, since these intervention strategies are grounded mainly in the feminist Duluth Model or cognitive behavior therapy (Corvo, Dutton, & Chen, 2008). The interventions are focused on Batterer Intervention Programs (BIPs) based on the feminist framework or on cognitive behavior therapy models for managing anger, relationships, and communication skills; IPV interventions need theory-based research evidence (Stuart, Temple, & Moore, 2007). Numerous empirical studies, literature reviews, and meta-analyses of standard-model interventions with perpetrators of IPV have found little or no positive effect on violent behavior (Corvo & Johnson, 2003). The theory of planned behavior is a model developed by Ajzen and Fishbein (1970) that predicts individuals' behavior. This model draws on the learning theory framework and extends the theory of propositional control (Dulany, 1967) and the theory of reasoned action of Ajzen and Fishbein (1970). This treatment model includes three components of planned behavior that are crucial building blocks in the prevention and treatment of IPV: the individual's attitude toward violence, normative beliefs about the acceptability of violence, and perceived behavioral control (Kernsmith, 2005).
However, the applicability of these theoretical components in batterer intervention has been largely unstudied (Kernsmith, 2005). The impediments to program development are the IPV certifying agencies that oversee interventions with abuse perpetrators involved in the criminal justice system. These agencies formulate and implement policies that regulate what structure, duration, and form of intervention is required as a condition of probation for persons found guilty of intimate partner assault, and thereby which form of intervention is deemed acceptable by the courts. Hence, program funding is available only to those programs that conform to these policies (Dutton & Corvo, 2006). Among the learning theories, Bandura's (1986) social learning theory is the most apposite: it is heavily cognitive in its orientation to impacting behavior. Addressing the underlying attitudes and beliefs that support violent behavior, including attitudes about the acceptability of violence and perceptions of the consequences of behavior, is significant for change. International migrants, irrespective of gender, are produced, patterned, and embedded in specific historical phases (e.g., witnessing violence and understanding it differently). Acknowledging these phases opens up the immigration policy question beyond the familiar range of border control, family reunion, and naturalization and citizenship law. These are the three aspects opening up (Waldinger & Lichter, 2003) that are central to immigrant families' welfare and to social policies in social work practice. The National Intimate Partner and Sexual Violence Survey (NISVS, 2010) included information about Lesbian, Gay, Bisexual, and Transgender (LGBT) people for the first time. Both lesbians and bisexual women experience IPV more frequently than heterosexual women. Gay men experience IPV slightly less frequently than straight men; 26% of gay men reported that they were abused by an intimate partner (Walters, Chen, & Breiding, 2013). Liberals, for their part, stand for individual freedom and responsibility. Actions towards developing policies and strategies for the effective implementation of programs must be deliberate attempts to address the issue of IPV. This will require a framework for joint policy and strategy development and for prioritizing effective programs. The planned steps towards design and implementation will inform practice: starting with creating an action plan to ensure delivery, developing professional skills, undertaking further training, and establishing effective networks. Once programs for IPV are implemented, appropriate evaluation must be planned and carried out to ensure quality and evidence-based practice.
A Real-time Simulation-based Practical on Overcurrent Protection for Undergraduate Electrical Engineering Students
From prior studies, it is evident that computer-aided education can be beneficial in various ways. It has also been observed that class sizes are becoming larger in engineering disciplines as the importance of engineering education is realized more widely by communities. Concurrently, in an outcome-based learning environment, practical classes have become more important in facilitating student learning and preparing students for the relevant requirements of real-life tasks. This study aimed at developing a simulation-based practical for electrical engineering students with an improved educational process while addressing the issue of large class sizes vis-à-vis the expansion of physical laboratories, and found that such a practical can be as effective as a traditional one while accommodating large class sizes efficiently. In this tutorial paper, industry-standard software is used to educate students on distribution system protection through a practical, together with a standard assessment method and a model of feedback for evaluation. The paper contributes an improved educational process through the practical, providing the students with experiences similar to industry work so as to meet the requirements of professional bodies and industries, with primary teaching and learning outcomes as the focus of the outcome-based education system. It also delivers a better platform for the students, connecting lectures and practical sessions more effectively through the adopted methodology and approach. Further, it can provide the students an exciting opportunity to practice on systems that are highly related to industry applications, in addition to encouraging them to advance their education in the related niche areas of power systems protection.
I. INTRODUCTION
Practical laboratory sessions are in general integral parts of teaching and learning that provide hands-on experiences to stimulate interest, amalgamate theoretical aspects, and create opportunities for investigation and experimentation [1]. An engineering curriculum without practical sessions would not be able to deliver the fundamental aspects of its aims and objectives. Sessions at the laboratories promote objectives such as interaction with technical equipment, understanding methods of scientific enquiry, developing observational skills, and analyzing experimental data with accuracy [1]. Like others, undergraduate engineering students go through a certain amount of laboratory experience where they engage with demonstrations of the various theories taught in class, including training in the procedures of scientific investigation [1]. Hence, laboratory activities in engineering degree programmes follow processes such as direct instruction of the students [2], who are required to follow the provided instructions to arrive at answers in various forms such as plots, tables, and explanations. There are different categories of established laboratory instruction, e.g. expository, inquiry, discovery, and problem-based. The most commonly used is the traditional (expository) type due to its advantages such as minimal instructor involvement, low cost, and time efficiency [2], [3]. However, acquiring practical experience in the laboratory through traditional bench-work is neither the only way nor necessarily the best way [1].
In this age, many different ways of conducting a practical are available; in particular, the advent of educational and commercial software made available to schools for teaching and learning purposes has brought academic staff and students close to new horizons of teaching and learning. On the other hand, one of the recent concerns in engineering education is that it needs to be more design-oriented and exciting to the students. A survey report shows that undergraduate electrical and control laboratory experiences need to improve [4]. The reason might be that, in general, in most engineering faculties the same type of laboratory experiments, designed long ago and provided with the necessary instructions, are repeated [5]. The instructions are mostly technical, directing the students to go through the steps and obtain the expected results. Apart from technical instructions, some practical instruction-sheets also contain the precautionary and safety measures that need to be observed during the practical session to avoid any harm or injury to the personnel or equipment involved. Another important concern raised indicates that practical work in the laboratories lacks relevance to the job requirements of graduates, as it consists of old, stereotyped experiments [5]. In fact, as some of the traditional practicals were developed long ago and use very old equipment, they do not excite the students much, apart from a few exceptional students who rightfully take the opportunity to inquire beyond expectations. Further, real-life industries have been equipped with the most modern and sophisticated technologies in terms of both hardware and software. Certainly, the existing gap between what is practiced in schools and what is practiced in industry needs to be bridged rationally. Additionally, one of the most important requirements of electrical engineering students these days is that they must be equipped with software skills to solve engineering problems using computer algorithms, as this saves time and money and can provide efficient predictions of a similar physical system. According to the Engineering Council of South Africa (ECSA), which sets and monitors standards to international norms to ensure the quality of engineering education in South Africa [6], engineering students should demonstrate competence in using computer software packages for computation, modelling, simulation, and information handling [7]. These requirements should help the students enhance their personal productivity and teamwork as a whole, which is one of the most important requirements in industry, as every employee works as part of a team and contributes to the end-product or outcome in many different forms. According to [8], acquiring skills in industry-standard software should be mandatory within accredited engineering degrees, as demanded by The Institution of Engineering and Technology amongst other professional engineering institutes. Therefore, it is imperative that new approaches be explored. These can include, but are not limited to, a wide range of new teaching and laboratory material, computer simulation, software-assisted instruction, and software tools to demonstrate both theory and real-life problems [4], [9]. At present, many universities are using alternative approaches to the traditional laboratory for undergraduate courses [1], [2], [9], [10].
This in turn helps to motivate the students, as these new approaches provide them with experiences and the flavor of their future workplaces. Nonetheless, laboratory activities are being reformed because the education process itself is evolving with the advent of new technology resources [2] made available to academic institutions, especially under the outcome-based educational approach [2], [10], [11]. Curricula are improved in almost every academic institution at regular intervals, as and when there is a need to introduce new specialist areas of teaching and practical work. However, it needs to be considered that it is almost impossible to convert all traditional practicals to other forms that have more advantages [2]. It would also be irrational not to have any traditional experimentation at all, which would simply deprive the students of that experience, even though they might gain a great deal of knowledge and experience from other forms of practical experiments. As education has become one of the most important prerequisites for all stakeholders in society, there are large numbers of students in many classes in academic institutions across the globe, including undergraduate engineering programmes. It is both challenging and unreasonable to expand some of the laboratories in step with increased student numbers. Also, some laboratories are very expensive in terms of the specialist equipment, specialist technicians, and large areas of floor space required to maintain them [1]. As advances in software development have been playing a great role in educating students in traditional engineering subjects such as electrical, mechanical, or civil engineering, it would be wise and impactful to bring in computer software support to effect an attitudinal change among the students, because appropriate use of software permits the students to gather knowledge and to explore beforehand, in a safe manner, equipment that behaves like real-life machinery [12]. As faculties in electrical power systems are challenged with providing students in large classes with the true nature of power systems, a feasible route is to allow them to utilize modeling and simulation tools [13]. Thus, a simulation-based practical in which students can utilize the existing computer laboratories was conceptualized. It was also felt that the practical studies should be more relevant to best practices in industry. This motivated the development of a practical on distribution system protection using overcurrent relays, which is one of the most fundamental types of protection in electrical power systems. The reason for choosing a practical on protection is that protection is essential for all power system networks and equipment [14], [15], [16], [17], [18], [19], and every electrical engineering student needs to assimilate its significance and implementation appropriately. The offered practical is similar to a real industry problem, employs the necessary teaching approach, and utilizes RSCAD, a proprietary software suite of RTDS Technologies, so as to impart the experience of real-world problems in more meaningful ways. In [20], the author presented a simulation study of overcurrent protection coordination of a distribution system which is useful in the education and training of future power systems engineers.
Emphasizing this particular focus of education, instilling knowledge and skill-sets, the author aimed to improve the process of learning in a step-by-step manner. The author has also included some fundamental but important aspects, such as the inclusion of an instantaneous overcurrent relay and a system configuration with a parallel feeder between buses, which accommodates the additional and significant lesson that both feeders (in parallel) must be tripped for a fault at the downstream bus. Further, verification of the operating times of the relays (including grading margins), with their types and inverse characteristics chosen from the configuration, has been one of the focal points of this proposed practical. In addition, a teaching and learning methodology has been accommodated in this paper, including a simple but useful assessment method and an evaluation of module feedback. The practical includes the operation of instantaneous and phase time overcurrent relays with standard characteristics, simulation of fault conditions, calculation and verification of time multiplier settings (TMS) and relay operating times, and coordination of relays including grading margins. Hence, the students gain technical skills in setting protective relays and verifying their operation, and also in technical communication, as they produce a technical report based on this practical. Further, the approach undertaken ensures that the learning outcomes of the new approach remain the same as those of the traditional ones and that the health and safety of the students [1] are accounted for, in fact improved, as the students would only be using computers in the computer laboratories or their own computers at remote locations, and no electrical hazards arise from the practical itself.
The paper is organized in the following manner: Section II depicts the adopted methodology, focussing on teaching and learning outcomes; Section III illustrates the selection of software and practical, considering an overview of the software used and the selection of the practical area and the practical undertaken; Section IV provides an executive summary of the practical undertaken with reasoning, the teaching and learning approach, teaching and learning outcomes, and the results the students are expected to achieve in a step-by-step method; Section V considers the assessment method adopted; Section VI shows the evaluation and assessment adopted considering the feedback from students; Section VII presents achievements, impacts, and future aspirations; and finally, Section VIII concludes the work presented in the paper.
II. ADOPTED METHODOLOGY
This work, being part of outcome-based education, utilizes the widely used Bloom's taxonomy in determining teaching strategies to support and improve the process of learning in such a way that the students progress in terms of cognitive levels [21], [22], [23], [24]. Similar to the typical aspects of any traditional educational program, aim(s), teaching process, and evaluation are included in this proposed work in line with the cognitive levels of knowledge, comprehension, application, analysis, synthesis, and evaluation of Bloom's taxonomy, as shown in Figure 1 [21], [22], [23], [24].
The main concept of the adopted methodology, following the cognitive learning process, is that the academic instructor designs the educational aims according to educational objectives related to the subject matter and chooses an appropriate teaching process, ensuring that the students progress successfully from one cognitive level to the next. The students are provided with the necessary information and knowledge on the subject matter of overcurrent protection in the classroom lecture, and with training on the software, before they engage with the practical. They are then taught the methodology for efficient analysis to ensure that the comprehension level is achieved. In the next step, the students apply the acquired knowledge and skill to the assigned practical and achieve the desired results; they should then be able to analyse the results they obtain, followed by synthesis, where they integrate planned ideas for the last sections of the practical, in which they have to demonstrate competence in protection coordination. In this process, the students should critically evaluate the relevant decisions with the necessary support and evidence. They prepare (create) a dedicated technical report based on the practical exercise demonstrating their acquired knowledge, skills (related to the psychomotor domain), and understanding of the subject matter, including written communication skills, from the perspective of the exit level outcomes. The effectiveness of the process is primarily determined by the steps followed by the educator and the students [21], [22], [23], [24]. Based on all these fundamental principles, the adopted methodology was formulated and utilized in this work for the proposed practical, including the assessments, which are demonstrated in the following sections of the article. (Figure 1: cognitive levels from recognition of information and principles learnt through comprehension and beyond.)
III. SELECTION OF SOFTWARE AND PRACTICAL
The demand for skills is ever growing in the field of electrical engineering. Especially, the power systems area demands a lot of study through simulation, as it is practically impossible to have and study a large, real power system network connected with numerous controllers, given the obviously dynamic nature of the power system itself, associated with continuously variable power generation, transmission, and distribution with various types of connected consumers. Although a number of electromagnetic transient simulation software packages are available for power system studies, such as Matlab, PSCAD, Electrical Transient Analyzer Program (ETAP), DigSilent PowerFactory, PowerWorld, and OPAL-RT, most of the software used for power system studies is non-real-time, in which the users study a certain phenomenon on a given power system model for the time period of interest. Also, some of them do not have the necessary components, such as protective relays, for simulating power system protection case studies in a realistic manner. For example, Matlab and PowerWorld provide library blocks for standard power systems studies but do not support protection aspects. Features for protection studies are available in DigSilent PowerFactory, PSCAD, and ETAP and allow a range of protection concepts including overcurrent protection; however, these are of the non-real-time type.
Although OPAL-RT provides a real-time scenario, RTDS/RSCAD outperforms it in many aspects: the protective relay, protection transformer, circuit breaker, fault, dial, slider, and push-button blocks in its library resemble real-life commercial ones, with the necessary settings and controls, making implementation and simulation of protection studies together with fault analyses much easier using the existing library blocks, and providing an experience essentially equivalent to a practical using real equipment. Further, RTDS/RSCAD is very popular, as it is the world's benchmark for real-time power system simulation and is used in many countries, such as the United States of America, China, India, and South Africa, to name a few. RTDS Technologies provides technical support and solutions to users as and when necessary, both locally and from its central support system, including training material and resources; in particular, it provides substantial support to academic institutions in terms of licencing and discounts. The software can be installed on a number of desktop/laptop computers with the Windows operating system, and simulation studies can be performed from remote locations. Further, the real-time digital simulator (RTDS) and RSCAD make it possible to have a real-time study, conducted in a manner similar to how a real power system operates, enabling analytical studies to be performed much faster than with off-line simulation programs [25]. In such a real-time simulation platform, a user can interact with the system dynamically to observe any changes in quantities of interest in real time. Further, RTDS and RSCAD allow the users to physically connect hardware such as protective relays and controllers, where input signals are taken out of the software simulation and connected to the physical equipment, and output signals from the physical equipment are sent back to the simulation through the RTDS hardware. Therefore, learning simulation studies and mastering these skills provide electrical engineering students with experiences of real-world power network scenarios.
A. OVERVIEW OF THE SOFTWARE USED
The RTDS was originally developed at the Manitoba HVDC Research Centre in Canada in the 1980s, with the first commercial installation taking place in 1993; it is now owned by RTDS Technologies. RTDS is the world's benchmark for performing real-time digital power system simulation [25], consisting of custom hardware and the all-in-one software RSCAD, designed to carry out real-time electromagnetic transient simulations that continuously operate in real time to provide accurate results. The software suite RSCAD is used to prepare and run simulations, and to view and analyze results. The RSCAD suite includes a number of modules, such as Draft, Runtime, CBuilder, Multiplot, Cable, and T-Line, that are used for simulation studies of power systems models, together with physical hardware such as controllers and protective relays interacting with the simulation model during runtime [25], [26]. For a power system circuit to be studied using RTDS, the Draft module is used together with the Runtime module. The Draft module has a drawing canvas which is used to develop power system models consisting of various power systems components, such as generators, motors, transformers, transmission lines, voltage sources, resistances, inductances, capacitances, and dynamic loads, together with control components such as mathematical functions, logical functions, signal generation, signal processing, data conversion, metering, and protection.
To run a simulation case, the Runtime software module is used, where the users set up the interface with the power system built in the Draft programme using various components such as meters, plots, switches, push-buttons, sliders, lights, etc. [26].
B. SELECTION OF SIMULATION-BASED PRACTICAL AREA - REASONING
The students attending the practical would be from an electrical engineering undergraduate degree programme, where they study fundamental subjects such as the natural sciences, mathematics, and electrical engineering principles before entering core subjects such as electrical machines, power systems, power electronics, control systems, and high voltage engineering. The students attend a certain number of practical hours as per the subject credits and guidelines, in line with the requirements of the learning outcomes. In general, there is one laboratory hands-on session for each subject area, whereas the number of students may vary in the range of 50-70 over the years in an established programme. The associated students study power systems as one of the core subjects, encompassing everything from the fundamentals of power systems to the theoretical aspects of power systems protection. Naturally, the power systems subject aims to provide appropriate hands-on and simulation experiences to stimulate interest, amalgamate theoretical aspects, and create opportunities for experimentation and investigation. In this paper, the area of power systems protection is chosen for the practical, as it is one of the fundamental areas in electrical engineering [14], [15], [16], [17], [18], [19]. To prepare and stimulate the minds of future electrical engineers, it is vital that laboratory experiences complement the theoretical aspects taught in class and demonstrate engineering from first principles, without forgetting electric utility practicalities, because that is where graduate students would apply the knowledge they acquire. Use of a simulation-based practical for electrical engineering is ergonomically advantageous compared to a traditional practical where one set of practical equipment is shared by a group of students. For the simulation practical, a student needs only a computer system with the simulation software installed [4]. This eliminates the need for actual practical equipment, and for laboratory space as well, because the students use the existing computer laboratories. It also reduces the involvement of technical staff associated with traditional practicals [4]. As practical laboratory experience is extremely important in engineering for developing the skills and competencies required by industries [5], this simulation-based practical has been purposefully and carefully designed considering teaching and learning outcomes as well as industry needs: future power engineers should be capable of designing a protection scheme for a distribution system using all the necessary protective gear, which includes selection of protection transformer ratios, selection of the types of relays required, setting of overcurrent relays, consideration of overcurrent and fault conditions at various locations, selection of pick-up currents and time multiplier settings, selection of inverse characteristics, operating time calculation of a relay in various operating/fault conditions including a circuit breaker failure, grading margin, and protection coordination.
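As a concrete illustration of the first of those design tasks, the short Python sketch below picks a CT ratio and a relay pickup current for a feeder. The 125% margin above maximum load current is a common rule of thumb, and all ratings and numbers here are illustrative assumptions rather than values from the exercise itself.

```python
# Hedged sketch of selecting a CT ratio and relay pickup for a feeder.
# The 125% margin over maximum load current is a common rule of thumb;
# the list of standard CT primaries is illustrative, with a 5 A secondary.
STANDARD_CT_PRIMARIES = [100, 200, 300, 400, 600, 800, 1000]  # amperes

def choose_ct_primary(max_load_a: float) -> int:
    """Smallest standard CT primary rating at or above the maximum load."""
    for rating in STANDARD_CT_PRIMARIES:
        if rating >= max_load_a:
            return rating
    raise ValueError("load exceeds the largest standard CT considered")

def pickup_secondary_a(max_load_a: float, margin: float = 1.25) -> tuple[int, float]:
    """Return (CT primary rating, secondary-side pickup current in A)."""
    ct_primary = choose_ct_primary(max_load_a)
    ct_ratio = ct_primary / 5.0
    return ct_primary, margin * max_load_a / ct_ratio

# A feeder carrying up to 360 A: a 400/5 CT is chosen, and the pickup is
# 1.25 * 360 / 80 = 5.625 A on the secondary side (450 A primary).
print(pickup_secondary_a(360.0))  # (400, 5.625)
```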
The main reason for this design is that if a student is introduced to a real-life industry problem, the experience can become manifold, as it can motivate and stimulate the students to a great extent. As a matter of fact, this can benefit the academic and technical staff involved in the process as well. Although this is not the first real-time simulation-based practical in the world, the endeavour is to introduce the students to the power systems protection area so as to exploit the advantages of RTDS/RSCAD, which can simulate power system protection aspects in a manner similar to real-life scenarios. Finally, the narrowed-down area arrived at was simulation of distribution system protection using RTDS/RSCAD. The configuration of the distribution system was adopted from [27]. The distribution system, the relay settings, breakers, sources, fault locations, etc. are based on an example case developed in [27] using actual data taken from a modern numerical relay of that same manufacturer. However, the same teaching and learning objectives can be achieved using other network configurations, such as an IEEE bus system with respective protective gear of similar features and careful design of the protection scheme. It would be possible to develop all types of overcurrent protection circuits using all the necessary components and to run/test them as real-life projects. The students could be provided with lecture sessions on overcurrent conditions in electrical distribution systems and on the use of protective devices such as fuses and overcurrent relays, together with the different characteristics of overcurrent relays and their application areas. Further, they could be tutored with examples of calculations of fault currents at various locations of the system and the reasons why a relay and/or circuit breaker may fail to isolate the faulty part of a power system even though a fault persists in it. They also need to be taught about the grading margin required: if a relay fails to detect a fault or to pick up, or a circuit breaker fails to open in the event of a fault detected by the relay, the relay upstream in the system must detect the same fault condition and ensure the opening of the associated breaker after a certain time delay, so as to protect the system and the equipment connected to it from any possible thermal and mechanical damage.
C. SIMULATION-BASED PRACTICAL UNDERTAKEN - COGNITIVE THINKING
Prior to the practical, the students are provided with practical-sheets to prepare themselves for the practical and write a pre-practical report, which is assessed and taken into consideration as part of the examination. The students do the practical with the guidance of the associated lecturer and, on completion, submit a post-practical report which is also assessed. With this traditional approach, the students have limited scope beyond being helped by the academic staff, and obviously none of the students would try to repeat even a part of the experiment to verify the previously obtained results, which could in turn develop a very good observational skill in them. Therefore, the main aim of this simulation-based practical is to provide the students with the following:
• Use of industry-standard software and the acquisition of skills in solving power systems problems quickly and efficiently, meeting ECSA Exit Level Outcomes (ELOs) or graduate attributes (GAs) requirements.
• Helping the students to observe similarities/dissimilarities in the quantities of interest, which in turn can instil thoughtfulness in the students' minds.
• Avoiding errors in results due to metering imperfections (wrong scale choice, wrong choice of meter (e.g. a low power factor wattmeter), vertical/horizontal orientation unless all meters used are digital, or calibration of the meter).
• Production of end results from the software (snapshot or continuous) which are much more accurate than plotting a graph or preparing a table with some selected data values. This also saves student time in preparing the post-practical report.
• Making the practical one hundred per cent safe (from any electrocution).

With a simulation-based practical, it is very easy to change the system, as compared to traditional ones where actual equipment is required for any change in the practical set-up. This allows the academic staff to create a number of practical cases for different students, so that they do the same practical but with different sets of software equipment, which is almost impractical in a traditional practical. Also, the academic staff can help the students during the practical from outside the laboratory using the remote-desktop functionality of computer operating systems. In addition, this approach saves technicians' time generally involved in practicals, which can be used to help the students in many other useful ways. Apart from the design of a simulation-based practical for final year electrical engineering students using industry-standard software, this work also examined the specific content of the practical, as it relates to a final year module where ECSA ELOs or GAs are assessed on the basis of experimentation, data analysis and investigation [7]. This was ensured through a combination of the teaching and industry experience of the academic staff involved and knowledge of educational theories, to keep teaching and learning interesting while meeting industry needs at the same time.

IV. DESCRIPTION OF PRACTICAL UNDERTAKEN
Following the adopted methodology, and from the perspective of the learning outcomes of the practical exercise, the students have to conduct simulation studies and investigations on a given system designed by the academic instructor in the field of power systems protection and prepare a written technical report upon completion of the task, presenting the expected results and analyses for assessment purposes. The more focused area and aim of the practical are to familiarize the students with the working of overcurrent relays, especially phase instantaneous and phase time overcurrent relays with inverse time characteristics, e.g. IEC standard inverse and very inverse characteristics. The students become acquainted with the coordination of relays and their applications for protection of a distribution system using the real-time digital simulation software RSCAD. The students need prior knowledge in the area of overcurrent protection, which they are taught in the lecture sessions with examples using overcurrent relays and the procedure for protection coordination for a given power distribution system. Special instructions, alongside the conventional instruction sheet, on how to use the RTDS and RSCAD efficiently need to be provided. The students should also be informed about the availability of the RTDS racks and the effective use of their time to get the system ready for simulation.
The distribution system considered for the practical is shown in Figure 2 [27], with the diagrammatic representation of the exercise in Figure 3, where there is a phase instantaneous relay (Relay F1) and phase time overcurrent relays (Relay F2, and Relays 1-4). All the relays require the user to set the pick-up value on the software system. For Relays 1-4, however, a time multiplier setting is also required. The phase time overcurrent relays can be set to the desired characteristics, e.g. IEC standard inverse or very inverse, as shown in Table 1, where TD is the time dial setting, also known as the time multiplier setting, and M is the multiple of pick-up current. Modern numerical relays allow the use of user-defined tripping characteristics besides the IEC and ANSI standard characteristics [14], [19]. Many other relevant and useful characteristics can be found in [19]. It is intended that the primary protection (the main/dedicated protection) clears the fault as fast as possible to reduce any damage to the system, while the backup protection must operate should there be a failure in the primary protection scheme. However, the operation of backup protection isolates a larger part of a network than the primary protection does. Therefore, to reduce undesired disconnection of any part of a network, appropriate coordination is necessary [15], [17], [28]. Overcurrent protection schemes are much more economical and can be employed without any expensive accessories to provide faster, reliable and selective fault clearing as compared to other types such as distance and differential protection systems [14], [19]. For proper protection coordination, the incorporated grading margin is typically chosen based on the circuit breaker operating time, dropout time, and safety margins. Using overcurrent relays, selectivity can be achieved through time grading, although this may cause high tripping times in some applications depending on the grading path. The issue of higher tripping times for heavier faults due to grading margins can be overcome by applying inverse definite minimum time relays, which can clear heavier upstream faults with decreased operating times. However, the grading time should also be considered carefully to ensure selectivity [14]. As shown in Figure 3, for the practical exercise, students are required to understand and demonstrate the working of phase instantaneous and phase time overcurrent relays. Once the students acquire sufficient knowledge of and confidence in the operational aspects of these relays, they should be able to set the phase time overcurrent relays and achieve proper protection coordination, considering the grading margin for fault conditions at different buses.

A. TEACHING AND LEARNING APPROACH
Apart from the previous classroom lectures on overcurrent protection aspects, at the beginning of each practical session the academic staff involved should provide a short introduction and training to the students on the RTDS/RSCAD, including power systems components and protection devices, their controls, and the necessary measurements. The session would also cover a few small example cases on how to develop a power system network in the Draft module of RSCAD and the wise selection of component blocks from the various sections of the library.
One such snapshot is shown in Figure 4, where a voltage divider circuit has been developed from an RSCAD example [26] using a three-phase voltage source, bus connectors, resistances, and a ground connection. It also shows the use of a three-phase root mean square (r.m.s.) block to illustrate how to measure the r.m.s. voltage of three-phase nodes. The students would be introduced to the RunTime module after error-free compilation of the Draft circuit. On the RunTime canvas, the students would learn how to use control components such as sliders to control the voltage, frequency or phase angle of the source and how to display them, e.g. using meters and plots. A snapshot of the RunTime module is shown in Figure 5. It also depicts different plots that show the real-time results of the simulation.

B. TEACHING AND LEARNING OUTCOMES
In the process of developing the practical, the broad technical objectives are listed as follows:
• To understand the working of a phase instantaneous relay (Relay F1) and a phase time overcurrent relay (Relay F2).
• To set Relays 1-4 for three-phase fault conditions at Buses B and C and ensure proper protection coordination of the relays.

From the broad objectives and the perspective of the ECSA ELOs or GAs, the following learning outcomes are structured:
• Students are required to conduct the simulation study using the specified software to determine or predict the specific outputs of the study.
• Students are required to search the related literature and evaluate how the knowledge gathered helped them in the related simulation study.
• Students are tasked with obtaining specific results based on the simulation study and analysing them technically.
• Students are supposed to ensure appropriate selection of equipment or tools for the simulation study.
• Students should interpret the results obtained and conclude with proper reasoning.
• Students should be able to communicate technically on the simulation practical.

C. EXPECTED SIMULATION RESULTS FROM THE STUDENTS
As mentioned earlier, the students have to demonstrate competence in understanding overcurrent protection for the distribution system considered; they have to study the following cases and produce the expected results:

1) WORKING OF PHASE INSTANTANEOUS OVERCURRENT RELAY (F1)
Relay F1 is a phase instantaneous overcurrent relay, which compares the actual value of the current with the pick-up value set by the user and generates a trip signal for the associated circuit breaker to open and isolate the faulty part of the system, without any intentional time delay, once the actual current exceeds the pick-up value. In this case study, the students are expected to understand how the pick-up value of a phase instantaneous relay can be set in a realistic manner and to increase the circuit current, and hence the current seen by the relay, until the relay generates the trip signal upon exceeding the pick-up value. In this regard, the students run the simulation first and then increase the electrical load (using the slider) to a certain value (e.g. 3.6 MW as shown in Figure 6), so that the current seen by the relay (4.963 A) is less than the pick-up value (5.5 A) of the relay. They should therefore observe that the relay does not trip and the load current continues to be supplied, as shown in Figure 6. Hence, TRIPF1 remains at '0', indicating that the relay has not generated the trip signal, and BRKF1 remains at '1', indicating that Breaker 1 remains closed.
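The trip logic of this first case can be captured in a few lines. The following is a minimal illustrative sketch in Python (not RSCAD component code); the 4.963 A reading and the 5.5 A pick-up are the values quoted above, while the post-increase current is a hypothetical value, since the text only states that the pick-up is exceeded.

```python
# Minimal sketch of a phase instantaneous overcurrent element (Relay F1):
# trip as soon as the measured current exceeds the pick-up value, with no
# intentional time delay. Illustrative Python, not RSCAD component code.

def instantaneous_trip(i_measured_a: float, i_pickup_a: float) -> bool:
    """Return True (trip) when the measured current exceeds the pick-up value."""
    return i_measured_a > i_pickup_a

PICKUP_A = 5.5  # pick-up value used in this case study

# 3.6 MW load: relay current 4.963 A (from the text); no trip expected.
print(instantaneous_trip(4.963, PICKUP_A))  # False -> TRIPF1 = 0, BRKF1 = 1
# After the load increase: any current above 5.5 A trips instantaneously.
# 5.8 A is a hypothetical value; the text states only that 5.5 A is exceeded.
print(instantaneous_trip(5.8, PICKUP_A))    # True  -> TRIPF1 = 1, breaker opens
```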
The students should observe the pick-up value of the relay and the current seen by the relay during operation. In the next step, the load is increased to a value (e.g. 4.1 MW) that causes the current through the relay coil to rise from the previous value of 4.963 A and exceed the pick-up value (5.5 A), causing the relay to trip instantaneously, as shown in Figure 7. The students also need to understand and explain that there is no time delay between the occurrence of the overcurrent, the generation of the trip signal, and the opening of the circuit breaker, as the relay is of the instantaneous type.

2) WORKING OF PHASE TIME OVERCURRENT RELAY (F2)
Relay F2 is a phase time overcurrent relay, which compares the actual value of the current with the pick-up value set by the user and generates a trip signal for the associated circuit breaker to open and isolate the faulty part of the system once the actual current exceeds the pick-up value, with an intentional time delay determined by the magnitude of the current seen by the relay, the pick-up value, the time multiplier setting, and the chosen inverse characteristic of the relay (e.g. standard inverse or very inverse). In this case study, the students are expected to understand how a phase time overcurrent relay can be set in a realistic manner, to increase the circuit current and hence the current seen by the relay so that it generates the trip signal upon exceeding the pick-up value, and to verify the operating time of the relay. To work with Relay F2, the students need to open the relay box RelayF2 and follow the instructions. BrkF2 is the switch for blocking/de-blocking the relay, CLOSEF2 closes the CB after tripping, LoadF2 changes the load on the feeder, IBURF2 shows the r.m.s. value of the CT secondary current, the curve type dial shows the selected characteristic curve (e.g. very inverse) of the relay, and IpF2 and TmF2 are the pick-up value and time multiplier settings of the relay. The settings completed by the students are shown in Figure 8.

FIGURE 8. Settings of Relay F2

In the next step, the students should be instructed to change LoadF2 to 3.0 MW from its initial value of 2.3 MW, observe the phenomenon, and explain it. Verification of the operating time of the relay, showing both the values obtained from calculations and from simulations, should be completed at this stage. The simulation results for the change of LoadF2 to 3.0 MW are shown in Figure 9, where IBURF2A, IBURF2B, and IBURF2C are the instantaneous values of the currents seen by Relay F2 and IRF2rms is the r.m.s. value of the current seen by the relay. It also shows that BRKF2 is initially at '1', indicating the 'closed' condition, and goes to the 'open' condition due to the initiated overcurrent condition. The trip signal TRIPF2 generated by RelayF2 is shown in Figure 9. Figure 10 illustrates the determination of the operating time of Relay F2, which is compared with the calculated value for verification purposes. Using the formula of the chosen inverse characteristic, in this case the very inverse characteristic (the equation is shown in Table 1), the student would verify the operating time of the relay calculated from the current (5.52 A) and the pick-up value of 5.0 A (found to be 1.298 seconds) against that obtained from the simulation results as shown in Figure 10 (found to be 1.2926 seconds). In the following step, the students are supposed to understand the significance of the time multiplier setting (TMS), which is increased to 0.02 from the initial value of 0.01.
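Before turning to the simulation results, the expected operating times can be checked by hand. The sketch below evaluates the IEC inverse-time characteristic t = TMS * k / (M^alpha - 1), with M = I/I_pickup; since Table 1 is not reproduced here, the constants used are the usual IEC 60255 values, which reproduce the 1.298-second figure quoted above.

```python
# Sketch of the IEC inverse-time characteristic t = TMS * k / (M**alpha - 1),
# where M = I / I_pickup. Constants are the usual IEC 60255 values, assumed
# here because Table 1 is not reproduced in the text.

IEC_CURVES = {
    "standard_inverse": (0.14, 0.02),  # (k, alpha)
    "very_inverse": (13.5, 1.0),
}

def operating_time(i_a: float, i_pickup_a: float, tms: float, curve: str) -> float:
    """Relay operating time in seconds for a given current and settings."""
    k, alpha = IEC_CURVES[curve]
    m = i_a / i_pickup_a
    return tms * k / (m ** alpha - 1.0)

# Relay F2 verification from this case study (very inverse, I = 5.52 A, Ip = 5.0 A):
print(operating_time(5.52, 5.0, tms=0.01, curve="very_inverse"))  # ~1.298 s
print(operating_time(5.52, 5.0, tms=0.02, curve="very_inverse"))  # ~2.596 s, the next step
```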
The results are shown in Figure 11, and the operating times (now larger than before) are found to be very close to each other: the calculated value, using the very inverse characteristic equation shown in Table 1, is 2.596 seconds, whereas the simulation showed 2.511 seconds, as seen in Figure 11. This clearly demonstrates how the students can effectively obtain the results and understand the significance of the TMS and its impact on the operating time of the relay at a constant load current. The verifications through the simulation results shown in Figures 12 and 13, respectively, for the operation of Relay F2 and the determination of its operating time (found to be exactly 2 seconds), should be done by the students, as they should also be instructed to understand the effect of the load current on the operating time. Therefore, they have to demonstrate it with another value of load and the same TMS. Furthermore, the students are supposed to set Relay F2 to operate at 2 seconds with a 3 MW load on the feeder. For this, the students need to calculate the TMS (using the equation shown in Table 1) for the relay and demonstrate that the relay takes close to 2 seconds to operate.

FIGURE 13. Relay F2 operation with calculated TMS

3) DEMONSTRATION OF SETTING RELAYS 1-4 FOR FAULTS AT BUSES (B AND C) AND ACHIEVING PROPER PROTECTION COORDINATION BETWEEN THEM
Relays 1-4 are phase time overcurrent relays for which all the necessary settings are required to be set by the user, as was explained for Relay F2. This case study is for the students to understand the requirements, implementation, and achievement of proper coordination of the associated relays. The first instruction in this case study is to demonstrate the tripping of Relays 1 and 2 at 't' seconds with a pick-up value of 1 A and a three-phase fault at BUS C ('t' is supplied to the students during the practical, and different students are given different values of 't'). For this, the students are supposed to find the TMS of Relays 1 and 2 first, which they should obtain using the fault current, the pick-up value, and the inverse characteristic formula. For example, for a value of t = 2 seconds, the fault current should be 10 A (IBUR1), as shown in Figure 14, and the corresponding calculated TMS is 0.6733 (using the equation shown in Table 1). IBUR3 and IBUR4 are the currents seen by Relay 3 and Relay 4, respectively.

FIGURE 14. Fault currents for a fault at BUS C

The results shown in Figures 15 and 16 should be obtained from the simulation once a fault is applied with the calculated value of the TMS for Relays 1 and 2. Figure 15 shows that the signal FLTB is at '0' while FLTC is at '1', indicating that there is no fault at BUS B but there is one at BUS C. Figure 16 shows the trip signals TRIP1 and TRIP2 from Relay 1 and Relay 2, respectively, at the same instant. The students should also note that Relays 1 and 2 are on parallel feeders connected to the faulty bus and hence must operate at the same time to open the respective circuit breakers (BRK1 and BRK2), as shown in Figure 17. The following step is to achieve protection coordination for the same fault at the same location in case Relays 1 and 2 and/or the respective circuit breakers (CBs) fail to operate for various reasons (failures in the CB, tripping mechanism, tripping voltage, protective relay, etc.). In such a case, Relay 3 has to operate with a grading margin of 0.4 seconds.
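The TMS values used in this case study can be reproduced by solving the same characteristic equation for TMS. The sketch below does this under the assumption that Relays 1-4 use the IEC standard inverse curve (k = 0.14, alpha = 0.02), an assumption adopted because it matches the figures quoted in the text; the 2.4-second case corresponds to the Relay 3 grading discussed next.

```python
# Solving the IEC characteristic t = TMS * k / (M**alpha - 1) for the time
# multiplier setting: TMS = t_target * (M**alpha - 1) / k. The IEC standard
# inverse constants (k = 0.14, alpha = 0.02) are assumed for Relays 1-4,
# as they reproduce the TMS values quoted in this case study.

def tms_for_target_time(t_target_s: float, i_fault_a: float,
                        i_pickup_a: float, k: float = 0.14,
                        alpha: float = 0.02) -> float:
    """TMS needed for the relay to operate in t_target_s at the given fault current."""
    m = i_fault_a / i_pickup_a
    return t_target_s * (m ** alpha - 1.0) / k

# Relays 1 and 2: trip at t = 2 s for a 10 A fault with a 1 A pick-up.
print(round(tms_for_target_time(2.0, 10.0, 1.0), 4))  # 0.6733
# Relay 3 backup: same fault, graded 0.4 s slower (2.4 s in total).
print(round(tms_for_target_time(2.4, 10.0, 1.0), 4))  # 0.8079, i.e. ~0.808
```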
Hence, the TMS for Relay 3 had to be calculated (using the formula shown in Table 1 and the grading margin) and was found to be 0.808. Figures 18 and 19 show the non-operation of Relays 1 and 2 and the respective breakers, and the operation of Relay 3 and breaker 3, for the same fault at BUS C. The students are also supposed to note the time when breaker 3 opens, as it includes the grading margin (and to do the calculation using the equation shown in Table 1 for verification). Finally, the students are required to grade Relay 4 for a fault at BUS B with a grading margin of 0.4 seconds. To achieve this, the operating time of Relay 3 is required for a fault (FLTB) at BUS B only, with no fault at BUS C, as shown in Figure 20. Figure 21 shows the trip signal (TRIP3) from Relay 3 for a fault applied at 0.5 seconds, giving an operating time of 2.23 seconds. Hence, the operating time of Relay 4 should be 2.63 seconds. This assists in determining the TMS (using the equation shown in Table 1). The results illustrate that no trip signal was issued from Relay 3, and hence Relay 4 had to pick up to cause circuit breaker 4 to open and isolate the fault. Again, the students are required to take note of the operating time of the relay and the grading margin included in it. In this example case, the grading margin obtained from the simulation was found to be 0.385 seconds, which is very close to 0.4 seconds (an error of 3.75%). All these case studies, implemented on real-time industry-standard application software with components featuring all the practical aspects of real equipment, and covering the settings and operation of phase instantaneous and phase time overcurrent relays, their testing for correct operation including operating time, and the proper coordination of a number of associated relays, provide the students with a great deal of the knowledge and skill-sets required to work as professional electrical engineers, especially in the niche area of power systems protection, which is of utmost importance.

V. MODEL OF ASSESSMENT METHOD
To ensure that the students are tested effectively across all applied levels of Bloom's domains, by aligning the assessment method with the exercise and techniques, the following assessment criteria should be considered to evaluate the students on completion of the practical:
• The extent to which the students are able to correctly conduct the different sections of the study under carefully-controlled changes in operating condition, assessed via the correctness of the results obtained.
• The extent of the students' ability to explain the reasons for the particular findings in each section of the investigation in terms of the theoretical material covered in the course.
• The extent to which the students are able to analyse data and results and document them in the post-practical report, supported by the necessary theoretical background.
• The extent to which the students are able to select and use appropriate equipment, meters, and features of the software in use.
• The extent to which the students are capable of drawing appropriate conclusions based on the evidence obtained or reasonable responses.
• The extent to which the students are able to communicate technically, in written form, the purpose, process and outcomes of the study.

All the above assessments should be done internally by the academic staff involved and externally examined to meet the ELO or GA requirements.
VI. EVALUATION AND ASSESSMENT - FROM MODULE FEEDBACK
Examination and evaluation of the ELOs or GAs could be conducted using traditional methods. The performance of this proposed approach, using the software-based simulation practical, could be compared with a similar traditional one. Most importantly, comments made by the students on completion-of-course questionnaires would provide feedback on the design of the practical and its suitability for final year electrical engineering students. A standard method is used to prepare a feedback form for the students, with a logically developed, structured set of questions on the practical work they attended, as shown in Table 2. The reason is that this is a typical and natural way to reach out to the students, as it is easier, simpler, less time-consuming and less expensive. Further, the data collected could be stored in a database for expert evaluation through relevant quantitative analyses. Table 2 would show the responses (the numbers in the rows showing the number of student responses) corresponding to various aspects on a five-point scale, following the standard procedure with credits of 1 = strongly disagree, 2 = disagree, 3 = neutral, 4 = agree, and 5 = strongly agree. The mean for each aspect would be shown together with the overall mean for the practical (as a standard: a mean of 4.5-5.0 exceeds expectations, 4.0-4.45 meets expectations, 3.5-3.95 requires monitoring, while 0-3.45 requires action). From Table 2, it would be easy to measure whether the performance meets its prospects and expectations, including the respective scores.

VII. ACHIEVEMENTS, IMPACTS AND FUTURE ASPIRATIONS
The introduction of the presented practical has attracted an immense response from the student body, both internally and externally (beyond the university, a number of relevant industrial organizations came forward in a philanthropic manner to donate various related equipment and imparted state-of-the-art advanced knowledge in the process to further such activities), serving as a springboard for students to learn and become specialists in the area of power systems protection. A number of final year electrical engineering capstone projects were formulated based on this practical, and students gathered knowledge not only on overcurrent protection, but also on different types of generator protection, transmission line protection, and distribution system protection along with distributed energy resources and batteries. Many of these projects were awarded prizes in the electrical engineering programme. A number of motivated MSc students have graduated in the area of power systems protection. In addition, an inspired candidate pursued a PhD and graduated in the power systems protection area with a number of top-tier journal publications. Currently, there is also a student pursuing a PhD, after completing an MSc in electrical engineering in this field, who initially started with overcurrent protection at the undergraduate level. Therefore, the presented practical has shown its educational benefits and significant impact on engineering education, including higher degrees, as it improves the teaching and learning of the fundamental aspects of power systems protection in electrical engineering along with skill development in niche areas.
Based on these experiences, the following can be considered as future aspirations:
• Development of an MSc degree course in power systems protection encompassing the fundamentals and advanced aspects of protection systems in all areas of power generation, transmission and distribution.
• Development of MSc research topics around concepts such as: optimization of overcurrent protection coordination of a distribution system using various optimization algorithms, e.g. genetic algorithms, particle swarm optimization, the firefly algorithm; communication-assisted protection schemes using overcurrent relays for a distribution system with renewable energy resources, focusing on the selectivity and speed of the protection system; application of superconducting fault current limiters in a distribution system with appropriate protection coordination ensuring the reliability of the protection system; use of directional overcurrent relays for protection of a distribution system with distributed energy resources and in a microgrid environment; protection coordination issues with directional overcurrent relays in various network configurations; protection and control of renewable energy based microgrids with different energy storage devices such as batteries and supercapacitors; protection coordination of a distribution system with PV-based generation; reducing miscoordination due to the injection of distributed energy resources using adaptive and optimization techniques; arc furnace protection using overcurrent relays; development of protection schemes for a distribution system charging electric vehicles; and distribution management systems with information sharing for enhanced operation, control and protection coordination.
• Further development of undergraduate practicals and/or final year capstone projects based on concepts such as ring main protection, parallel feeder protection, differential protection of generators and transformers, automatic synchronization of a generator, development of supervisory control and data acquisition systems along with protection of a distribution system, distance protection of transmission lines, and motor protection.

VIII. CONCLUSION
The presented work illustrates the effectiveness of simulation-based practicals in electrical engineering, with emphasis on the process of preparing one such electrical power systems practical. The structure, teaching and learning approach, and assessment method for the developed practical are demonstrated and analyzed. The considered approach of doing practicals using real-time software, which resembles hands-on experience on a real power system network, would be very insightful for the students. This practical would encourage academic staff and students to take up various other subject matters in electrical engineering practicals that are highly related to industry applications. This exercise can undoubtedly improve the process of developing the practical itself, including the teaching and learning approach and exit level outcomes, and meeting the requirements of the professional bodies and institutes. In future, this endeavour would also allow academic staff and students to develop and implement other practical sessions in electrical engineering. A number of capstone projects developed in electrical engineering based on similar concepts were found to be very useful and exciting to the student body in terms of gathering knowledge and developing skill-sets.
Additionally, a number of final year students progressed to MSc/PhD level in electrical engineering in the same area as the practical, and this would be a continuous process of improvement, since they would learn how to study and gain experience in niche areas. An MSc degree in Power Systems Protection also remains in prospect as a cherry on top. The exercise serves as a true springboard of inspiration for continuous future developments.
Stool Microbiome and Metabolome Differences between Colorectal Cancer Patients and Healthy Adults

In this study we used stool profiling to identify intestinal bacteria and metabolites that are differentially represented in humans with colorectal cancer (CRC) compared to healthy controls to identify how microbial functions may influence CRC development. Stool samples were collected from healthy adults (n = 10) and colorectal cancer patients (n = 11) prior to colon resection surgery at the University of Colorado Health-Poudre Valley Hospital in Fort Collins, CO. The V4 region of the 16S rRNA gene was pyrosequenced and both short chain fatty acids and global stool metabolites were extracted and analyzed utilizing Gas Chromatography-Mass Spectrometry (GC-MS). There were no significant differences in the overall microbial community structure associated with the disease state, but several bacterial genera, particularly butyrate-producing species, were under-represented in the CRC samples, while a mucin-degrading species, Akkermansia muciniphila, was about 4-fold higher in CRC (p<0.01). Proportionately higher amounts of butyrate were seen in stool of healthy individuals while relative concentrations of acetate were higher in stools of CRC patients. GC-MS profiling revealed higher concentrations of amino acids in stool samples from CRC patients and higher poly and monounsaturated fatty acids and ursodeoxycholic acid, a conjugated bile acid, in stool samples from healthy adults (p<0.01). Correlative analysis between the combined datasets revealed some potential relationships between stool metabolites and certain bacterial species. These associations could provide insight into microbial functions occurring in a cancer environment and will help direct future mechanistic studies. Using integrated "omics" approaches may prove a useful tool in identifying functional groups of gastrointestinal bacteria and their associated metabolites as novel therapeutic and chemopreventive targets.

Introduction
A healthy gastrointestinal system relies on a balanced commensal biota to regulate processes such as dietary energy harvest [1], metabolism of microbial and host derived chemicals [2], and immune modulation [3]. Accumulating evidence suggests that the presence of microbial pathogens or an imbalance in the native bacterial community contributes to the development of certain gastrointestinal cancers. A causal relationship between gastric cancer and Helicobacter pylori has been established [4], leading to the hypothesis that other host-associated organisms are involved in cancer etiology. An association between colorectal cancer (CRC) and commensal bacteria has been suspected for decades. For example, Streptococcus infantarius (formerly S. bovis) became diagnostically important after it was recognized that bacteremia due to this organism was often associated with colorectal neoplastic disease [5,6]. However, early studies associating genera of bacteria with colon cancer risk were limited to culture-based methods that did not reflect the complexity of the gastrointestinal microbiota [7][8][9]. Development of high-throughput sequencing has facilitated detailed surveys of the gut microbiota, and a more thorough and complex colorectal cancer (CRC)-associated microbiome is emerging. Sobhani et al. [10] found that the Bacteroides/Prevotella group was over-represented in both stool and mucosa samples from individuals with colon cancer compared to their cancer-free counterparts.
They also found that Bifidobacterium longum, Clostridium clostridioforme, and Ruminococcus bromii were underrepresented in samples from these individuals and concluded that a lack of correlation between tumor stage/size and the over-represented populations suggested a contributory role of the bacteria in tumor development. Two additional studies, published concurrently, examined the microbiota present in the tumor mucosa and adjacent healthy tissue of individuals with colon cancer, and both studies revealed an overrepresentation of Fusobacterium spp. [11,12], while others have revealed an abundance of Coriobacteria and other probiotic species [13,14]. The question remains whether over-representation of particular microbial species in stool and mucosal samples is indicative of a contributory role in the development of CRC or a consequence of the tumor environment. Although a causal role of intestinal biota in CRC development has not been demonstrated, there is evidence to suggest that induction of pro-inflammatory responses by commensals contributes to tumor initiation and development [10,14]. Production of genotoxins and DNA-damaging superoxide radicals are also mechanisms by which commensals can contribute to CRC development [15]. Alternatively, it has been hypothesized that certain probiotic bacteria act as tumor foragers, taking advantage of an ecological niche created by the physiological and metabolic changes in the tumor microenvironment [14]. To clarify the role of intestinal biota in the development of CRC, it will be necessary to move beyond taxonomic overrepresentation and examine changes in the CRC-associated microbiome in a more functional context. One important functional parameter is how commensal organisms contribute to the flux of metabolites and the breakdown of dietary components. Thus, metabonomics, the study of global changes in metabolites in response to biological stimuli [16], is being applied to identify and characterize the functional microbiome that drives metabolic changes associated with different diets, genotypes, and disease states [17][18][19]. Stool metabolite profiles have been validated as a means of assessing gut microbial activity [20], and the current study not only contributes to the growing list of gut microbes in the CRC microbiome but also utilizes a metabonomics approach to identify potential microbiome-metabolome interactions.

Ethics Statement
All individuals provided written informed consent prior to participating in the study. All study protocols were approved by Colorado State University (Protocol numbers 10-1670H and 9-1520H) and Poudre Valley Hospital-University of Colorado Health System's Institutional Review Boards (Protocol numbers 10-1038 and 10-1006).

Sample Collection and DNA Extraction
Stool samples were collected from healthy individuals (n = 11) and recently diagnosed colon cancer patients (n = 10) prior to surgery for colonic resection (Table 1; note that not all samples were subjected to all analyses, see the Table 1 footnote). Exclusion criteria for all participants included use of antibiotics within two months of study participation, and regular use of NSAIDs, statins, or probiotics. Individuals that reported chronic bowel disorders or food allergies/dietary restrictions were also excluded from the study. An additional exclusion criterion for CRC patients was chemotherapy or radiation treatment prior to surgery. Stool samples were provided for analyses prior to administration of any preoperative antibiotics or bowel preparation.
Samples were transported to the laboratory within 24 hours after collection by study participants. Stool samples were homogenized, and three subsamples were collected with sterile cotton swabs. DNA was extracted from all samples using MoBio Powersoil DNA extraction kits (MoBio, Carlsbad, CA) according to the manufacturer's instructions and stored at -20°C prior to amplification steps.

Pyrosequencing Analysis
Amplification of the V4 region of the bacterial 16S rRNA gene was performed in triplicate using primers 515F and 806R labeled with 12-bp error-correcting Golay barcodes [21]. Twenty-microliter reactions were used for amplification, and sequences were processed in Mothur [22] using the default settings unless otherwise noted. Briefly, sequence reads were (i) trimmed (bdiff = 0, pdiff = 0, qaverage = 25, minlength = 100, maxambig = 0, maxhomop = 10); (ii) aligned to the bacterial-subset SILVA alignment available at the Mothur website (http://www.mothur.org); (iii) filtered to remove vertical gaps; (iv) screened for potential chimeras using the uchime method; (v) classified using the Green Genes database (http://www.mothur.org) and the naïve Bayesian classifier [23] embedded in Mothur, with all sequences identified as chloroplast removed; (vi) screened (optimize = minlength-end, criteria = 95) and filtered (vertical = T, trump = .) so that all sequences covered the same genetic space; and (vii) pre-clustered (diff = 2) to remove potential pyrosequencing noise and clustered (calc = onegap, countends = F, method = nearest) into OTUs [24]. To remove the effect of sample size on community composition metrics, sub-samples of 1250 reads were randomly selected from each stool sample. After clustering sequence reads into OTUs (i.e., nearest-neighbors at 3% genetic distance) or phylotypes (i.e., sequences matching a common genus in the Green Genes database), the replicate sub-samples were averaged to yield a single community profile for each sample. Sample-size-independent values for alpha diversity community descriptors, such as observed species richness (S_obs), Chao1 estimates of total species richness (S_Chao), Shannon's diversity (H') and evenness (E_H), and Simpson's diversity (1-D) and evenness (E_D), were determined by fitting a 3-parameter exponential curve [y = y0 + a(1 - e^(-bx))] to rarefied parameters over a range of 100 to 1250 sequence reads, where the asymptotic maximum is equal to the sum of y0 and a. Effective numbers of species were calculated as S_H = exp(H') for Shannon's index and S_D = 1/D for Simpson's. All sequence data are publicly available through the Sequence Read Archive (SRA) under study accession number ERP002217, which is available at the following link: http://www.ebi.ac.uk/ena/data/view/ERP002217.

Nontargeted Metabolite Profiling and Data Processing Methods
One hundred milligrams of lyophilized stool sample were extracted twice with 1 ml of 3:2:2 isopropanol:acetonitrile:water, spun at 14,000 rpm for 5 minutes, and the supernatants were combined. The extract was dried using a speedvac, resuspended in 50 μl of pyridine containing 15 mg/ml of methoxyamine hydrochloride, incubated at 60°C for 45 min, sonicated for 10 min, and incubated for an additional 45 min at 60°C. Next, 50 μl of N-methyl-N-trimethylsilyltrifluoroacetamide with 1% trimethylchlorosilane (MSTFA + 1% TMCS, Thermo Scientific) was added and samples were incubated at 60°C for 30 min, centrifuged at 3000 × g for 5 min, cooled to room temperature, and 80 μl of the supernatant was transferred to a 150 μl glass insert in a GC-MS autosampler vial.
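Referring back to the diversity descriptors in the pyrosequencing subsection above, the sketch below shows how the 3-parameter exponential curve can be fit to rarefied diversity values and how effective species numbers follow from the asymptote. This is a minimal illustration using NumPy/SciPy with hypothetical rarefaction values, not the study's actual data or code.

```python
# Sketch of the sample-size-independent diversity estimates described above:
# rarefied diversity values are fit to y = y0 + a*(1 - exp(-b*x)), the
# asymptote (y0 + a) is taken as the size-independent estimate, and the
# effective number of species follows as S_H = exp(H'). Values are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

def saturating(x, y0, a, b):
    """Three-parameter exponential rise to an asymptote of y0 + a."""
    return y0 + a * (1.0 - np.exp(-b * x))

reads = np.array([100, 250, 500, 750, 1000, 1250])     # rarefaction depths
shannon = np.array([2.9, 3.3, 3.6, 3.75, 3.83, 3.88])  # hypothetical H' values

(y0, a, b), _ = curve_fit(saturating, reads, shannon, p0=[1.0, 3.0, 0.01])
h_asymptote = y0 + a                       # size-independent Shannon estimate
effective_species = np.exp(h_asymptote)    # S_H = exp(H')
print(round(h_asymptote, 2), round(effective_species, 1))
```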
Metabolites were detected using a Trace GC Ultra coupled to a Thermo DSQ II (Thermo Scientific). Samples were injected at a 1:10 split ratio twice in discrete randomized blocks. Separation occurred on a 30 m TG-5MS column (Thermo Scientific, 0.25 mm i.d., 0.25 μm film thickness) with a 1.2 ml/min helium gas flow rate, and the program consisted of 80°C for 30 sec, a ramp of 15°C per min to 330°C, and an 8 min hold. Masses between 50-650 m/z were scanned at 5 scans/sec after electron impact ionization. For each sample, a matrix of molecular features, as defined by retention time and mass (m/z), was generated using XCMS software [25]. Features were normalized to total ion current, and the relative quantity of each molecular feature was determined by the mean area of the chromatographic peak among replicate injections (n = 2). Molecular features were formed into peak groups using AMDIS software [26], and spectra were screened against the National Institute of Standards and Technology (www.nist.gov) and Golm (http://gmd.mpimp-golm.mpg.de/) metabolite databases for identifications.

SCFA determination. Stool samples were extracted for short chain fatty acids by mixing 1 g of frozen feces with acidified water (pH 2.5) and sonicating for 10 min. Samples were centrifuged, filtered through 0.45 μm nylon filters, and stored at -80°C prior to analysis. The samples were analyzed using a Trace GC Ultra coupled to a Thermo DSQ II scanning from m/z 50-300 at a rate of 5 scans/second in electron impact mode. Samples were injected at a 10:1 split ratio, the inlet was held at 220°C, and the transfer line was held at 230°C. Separation was achieved on a 30 m TG-WAX-A column (Thermo Scientific, 0.25 mm i.d., 0.25 μm film thickness) using a temperature program of 100°C for 1 min, ramped at 8°C per minute to 180°C, held at 180°C for one minute, ramped to 200°C at 20°C/minute, and held at 200°C for 5 minutes. Helium carrier flow was held at 1.2 ml per minute. Peak areas were integrated by Thermo Quan software using selected ions for each of the short chain fatty acids, and areas were normalized to total signal.

Statistical Analysis
Differences in bacterial phylotypes and global metabolites between samples from healthy individuals and colon cancer patients were determined using AMOVA and Student's t-tests with a significance cutoff of < 0.01. Phylotypes and metabolites that were significantly different between groups were further refined by removing markers that had fewer than 25 total reads (bacteria) or borderline background signals (metabolites), or that were present in fewer than 3 individual samples. Short chain fatty acid concentrations were determined in two separate chromatographic runs, so a weighted mean was calculated for each quantified compound, and statistical differences between stool samples from healthy individuals and colon cancer patients were determined using a mixed model ANOVA with experiment representing a random effect and disease status a fixed effect (XLSTAT 2011.1, Addinsoft Corp, Paris, France). Correlations between metabolites and bacteria were determined using Pearson's r, with a moderate correlation denoted by r ≥ 0.50 and a strong correlation denoted by r ≥ 0.70.

Alpha and Beta Diversity in Stool Biota
Typical community descriptors of alpha diversity for molecular microbial data include actual and estimated OTU richness, and indices of population diversity and evenness. In systems where pathogens are introduced (e.g.
Helicobacter pylori), there are marked decreases in estimates of diversity and evenness [27], suggesting that these indices may be useful predictors of infection. We examined these parameters in stool samples from healthy individuals and those with CRC to see if they could be used as predictors of disease state. We observed no significant differences at the 3% genetic distance in the average diversity or evenness of stool microbial communities from healthy individuals compared to those with CRC (Table S1). The average coverage obtained from 1250 reads per sample was 84% and 86% in healthy and colon cancer samples, respectively. The average effective diversity of each group suggested a trend toward higher bacterial diversity in stool samples of healthy individuals (S_H = 63, S_D = 20) compared to those from CRC patients (S_H = 46, S_D = 15); however, the interindividual variation was too great to achieve statistical significance. Based on these data, we suggest that alpha diversity descriptors of stool microbiota are not indicative of disease state in CRC, although a limitation of this study is that only stool samples, and not tissue mucosa, were analyzed. However, despite inherent differences in stool and mucosal microbial communities, our findings are consistent with other published reports of total bacterial diversity and evenness estimates between CRC and healthy stool and tissue/mucosa samples [10]. This inter-individual variation was also apparent in estimates of beta diversity, where a low degree of similarity in overall microbial community composition between individuals was observed, as determined using the unweighted Jaccard distance (J_class) to compare community membership (Figure 1A) and Yue and Clayton's [28] index (H_YC) to compare community structures (Figure 1B). Because of this variation, no patterns in the overall community composition were noted between stool samples from CRC patients and healthy individuals.

Taxonomic Differences between CRC and Healthy Stool Samples
The disease status of study participants did not drive the overall community structure of the stool microbiota, and the composition and relative abundance of the major phyla were similar, although there was a non-significant trend towards higher Verrucomicrobia in samples from colon cancer patients (Figure 2). There were also higher levels of Synergistetes in the cancer group, but this was driven by a single individual with an extremely high proportion of this phylum and was not representative of the entire sequenced cancer population. However, at the genus/species level there were a number of OTUs that were significantly under-represented in the stool of colon cancer patients compared to healthy individuals (Table 2). These include several Gram-negative Bacteroides and Prevotella spp. that have previously been isolated from human stool, but are not well characterized with regard to their role in intestinal function or general health. Two of the Prevotella species identified were not only under-represented, but were completely absent from the colon cancer samples analyzed. Prevotella was a dominant genus reported in stool from children in a rural community in Burkina Faso but absent from a cohort of Italian children, and the study authors hypothesized that Prevotella helped maximize energy harvest from a plant-based diet [29].
Therefore, it is possible that the higher levels of Prevotella in the healthy cohort may reflect differences in the intake of fiber and other plant compounds compared to the individuals with colon cancer. At the genus level, Shen et al. [30] found Bacteroides spp. to be enriched in colonic tissue from healthy individuals when compared to adenoma tissue. Lachnospiraceae and members of the genera Dorea and Ruminococcus were also previously reported as dominant phylotypes driving differences between healthy and cancerous tissue samples [13]. The other OTUs that we identified, such as the Dialister spp. and Megamonas spp., have not previously been reported in association with colon cancer; however, decreased populations of Dialister invisus have been reported in Crohn's disease [31]. There were fewer identifiable bacteria that were overrepresented in the colon cancer population (Table 3). Most notably, we observed that the mucin-degrading bacterium, Akkermansia muciniphila, which represented a relatively large percentage of the total sequences, was present in a significantly greater proportion in the feces of colon cancer patients. This bacterium is a common member of the colonic microbiota and was recently shown to be reduced in irritable bowel syndrome and Crohn's Disease [32]; however, a more recent report showed increased A. muciniphila in ulcerative colitis-associated pouchitis [33]. Two types of mucins, MUC1 and MUC5AC, are reportedly overexpressed in colon cancers [34], suggesting that our observed CRC-related increases in A. muciniphila populations may be due to increased substrate availability. Citrobacter farmeri, which can utilize citrate as a sole carbon source, was also higher in samples from colon cancer patients, but represented a much smaller proportion of the total bacterial sequences. Citrobacter farmeri is among a group of gut bacteria that includes multiple pathogenic species like Salmonella and Shigella, and which has arylamine N-acetyltransferase activity that may be involved in the activation of carcinogens and xenobiotic metabolism [35]. Age and BMI represent other factors that play a role in shaping the intestinal microbial communities. Several reports have demonstrated a correlation between the ratio of Bacteroidetes to Firmicutes and obesity [1]. We conducted linear regressions between the relative abundance of each of the taxa that significantly differed between CRC and healthy stools (see Tables 2 and 3) and BMI and saw no significant correlations (Table S2). In addition, aging has been associated with a decrease in protective commensal anaerobes, such as Faecalibacterium prausnitzii, and an increase in E. coli [36]. We did find a negative correlation between the age of participants and Dorea formicigenerans (R² = 0.354; p = 0.041) and Ruminococcus obeum (R² = 0.434; p = 0.020), both members of the Clostridium XIVa group, suggesting that differences between cohorts with regard to these two species may be a result of differences in the mean age of participants in each group rather than CRC disease status. To our knowledge, a decline in the population of Clostridium XIVa group members has not been previously associated with aging, but has been associated with dysbiosis related to intestinal inflammatory conditions such as Crohn's disease [37]. None of the other bacterial taxa identified were correlated with age (Table S3).
Therefore, we conclude that the majority of taxa that significantly differed in stool samples between healthy and CRC cohorts were a result of disease status and not of differences in age or BMI.

Short Chain Fatty Acid Analysis
Short chain fatty acids (SCFAs), particularly butyrate, are widely studied microbial metabolites reported to have anti-tumorigenic effects [38]. SCFAs are readily absorbed and utilized in host tissues, so detection in stool is typically considered an indication of production in excess of that which can be utilized by the host [29]. We and others [10,13] have observed that species of butyrate-producing bacteria, such as Ruminococcus spp. and Pseudobutyrivibrio ruminis, were lower in stool samples from CRC patients compared to healthy controls. Therefore, we quantified several short chain fatty acids from frozen stool samples. The three major SCFAs produced as microbial metabolites, acetate, propionate, and butyrate, were all detected, as were valeric, isobutyric, isovaleric, caproic, and heptanoic acids. Among these, acetic and valeric acids were significantly higher in stool samples from CRC patients (p < 0.0001 and p = 0.024, respectively), while butyric acid was significantly higher in the feces of healthy individuals (p < 0.0001; Figure 3). No differences in propionic acid were detected between the two groups. Butyrate is regarded as one of the most important nutrients for normal colonocytes, and alone or in combination with propionate it has been shown to reduce proliferation and induce apoptosis in human colon carcinomas [39]. Although acetate is an important SCFA for maintaining colonic health and serves as a precursor molecule for endogenous cholesterol production, elevated levels of this metabolite have previously been associated with CRC in humans [40]. Acetate can be used to produce butyrate, and proportional differences in these metabolites between CRC and healthy samples may reflect a depletion, in CRC samples, of colonic microbes that can carry out this reaction, or may be a result of the degradation of butyrate to acetate under the low colonic pH associated with CRC. We also observed significantly higher relative concentrations of isobutyric (p < 0.0001) and isovaleric acid (p = 0.002) in samples from individuals with CRC (Figure 3). These two SCFAs result from bacterial metabolism of the branched chain amino acids valine and leucine, which were also higher in CRC stool samples (Table 4) and may account for the significant increases observed in these two SCFAs in the CRC population.

Global Stool Metabolites
Stool samples allow for evaluation of bacteria residing in the intestinal lumen, and therefore stool small molecules are considered to result from co-metabolism or metabolic exchange between microbes and host cells [13]. Global metabolite profiling performed herein on lyophilized stool samples provided insights into the relationship between microbial populations and metabolites, and lent itself to the identification of novel CRC metabolic biomarkers. The supervised multivariate analysis technique Orthogonal Projection to Latent Structures-Discriminant Analysis (OPLS-DA), which facilitates interpretation by separately modeling predictive and orthogonal (non-predictive) variables, was used to determine if non-targeted GC-MS profiles were predictive of the disease state of the donor.
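For readers unfamiliar with the approach, the sketch below illustrates the idea with a plain PLS-DA: scikit-learn has no OPLS-DA implementation, so ordinary PLS regression against a binary class label is used here as a conceptual stand-in, on random placeholder data with the same sample sizes as this study rather than the real GC-MS feature matrix.

```python
# Conceptual stand-in for the OPLS-DA step: a plain PLS-DA (PLS regression
# against a binary class label). Random placeholder data are used in place
# of the real GC-MS feature matrix; only the sample sizes match the study.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(21, 200))      # 21 stool samples x 200 metabolite features
y = np.array([0] * 10 + [1] * 11)   # 0 = healthy (n = 10), 1 = CRC (n = 11)

pls = PLSRegression(n_components=2)
pls.fit(X, y)
scores = pls.x_scores_  # sample scores, typically plotted to look for group separation
r2y = pls.score(X, y)   # in-sample R2Y; Q2Y would come from cross-validation
print(scores.shape, round(r2y, 3))
```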
The OPLS-DA demonstrated satisfactory modeling and predictive capabilities for this dataset (R²Y = 0.986; Q²Y = 0.927), revealing a distinct separation between the stool metabolic features of the two groups (Figure 4) and suggesting that the presence or absence of CRC is an important factor driving the variability in stool metabolites. Compared to healthy controls, stool metabolome analysis revealed 11 amino acids that showed a 41-80% increase in stool samples of individuals with CRC (Table 4). Reasons that could account for this CRC-associated increase in amino acid concentrations may include, but are not limited to, differences in protein consumption patterns, inflammation-induced reduction in nutrient absorption, and increased autophagy associated with tumor cells resulting in accumulation of free amino acid pools [41]. Microbial degradation of dietary proteins in the distal colon is a putrefactive process that results in the production of toxic amines, and may account for the increased free amino acids we observed in CRC stool samples. An increased concentration of all amino acids except glutamine was previously reported in stomach and colon tumor tissues compared to healthy tissue [42]. The authors hypothesized that tumor cells may exhibit increased glutaminase activity, resulting in glutamine conversion to glutamate. Consistent with these findings, we also saw a large increase, approximately 76%, in glutamate without a corresponding increase in glutamine in stool samples from colon cancer patients. Another recent study, using NMR to identify and detect metabolites from stool water extracts from healthy and CRC samples, showed that the CRC samples had approximately 1.5-fold higher levels of cysteine, proline, and leucine [43]. The increased concentrations of proline, serine, and threonine that were observed in CRC samples could also result from degradation of intestinal mucins, which are primarily composed of glycoproteins rich in these amino acids [44]. This is consistent with the enrichment of Akkermansia muciniphila, a mucin-degrading bacterium, observed in CRC stool samples, although we saw no strong correlations between the relative proportion of these bacteria and specific amino acid concentrations. There were higher levels of glycerol, as well as several unsaturated fatty acids, detected in the stool samples of healthy individuals. Human cancer cells have a known transport system for the uptake of glycerol, suggesting stool glycerol may be lower in CRC because it is being taken up by the tumor cells. Alternatively, bacterial lipases present in healthy individuals may facilitate the metabolism of dietary and endogenously produced triacylglycerols, resulting in the final degradation products of glycerol and free fatty acids. In addition to glycerol, fatty acids most closely matching metabolomic signatures for linoleic acid and stereoisomers of oleic acid were also higher in controls (Table 4). Finally, ursodeoxycholic acid (UDCA), a secondary bile acid produced by intestinal bacteria, was approximately 63% higher in healthy individuals compared to CRC. While several bile acids, such as lithocholic acid and deoxycholic acid, have been associated with tumorigenesis, UDCA has shown chemopreventive effects in preclinical and animal models of CRC [45]. Correlation analysis of the microbiome and metabolome data revealed strong associations between some members of the stool microbiota and candidate metabolites. Bacteroides finegoldii, two Dialister spp., and P.
ruminis were strongly correlated, and Bacteroides intestinalis and Ruminococcus obeum were moderately correlated, with increased stool free fatty acids and glycerol (Figure 5). These same bacteria were inversely associated with a cholesterol derivative and one or more of the amino acids that were overrepresented in stool samples from CRC patients. The two Ruminococcus spp. also showed a strong positive correlation with the presence of UDCA, in concurrence with previous reports that Ruminococcus species exhibit 7α- and 7β-hydroxysteroid dehydrogenase activities to produce this metabolite [46]. Two of the bacterial genera overrepresented in CRC, Phascolarctobacterium and Acidaminobacter, showed a strong positive association with the amino acids phenylalanine and glutamate, and were moderately correlated with increased serine and threonine (Figure 5). Glutamate can be utilized by these bacteria as a substrate, but their association with serine and threonine could also be indicative of involvement in mucin degradation or putrefactive processes in the colon, and warrants further study. Extensive attempts to characterize the CRC microbiota have led to new hypotheses as to how the gut microbiota influences CRC development. One hypothesis suggests that there are "driver bacteria" with pro-carcinogenic features that contribute to tumor development and "passenger bacteria" that may outcompete drivers to flourish in the tumor environment as the cancer progresses [47]. The available metabolites, those produced by bacteria and those that they utilize as substrates, will largely drive these host-microbiome interactions. Integrating metabolome and microbiome datasets is a novel approach for functionally characterizing the microbiota in terms of their metabolic activity relative to cancer, and will greatly assist our understanding of this complex host-microbe interaction.

Supporting Information
Table S1. Comparison of observed and estimated OTU richness and diversity and evenness indices between microbial communities from stool of CRC patients and healthy adults.
Noninvasive Mechanical Ventilation: Models to Assess Air and Particle Dispersion

Respiratory failure is a major complication of viral infections such as severe acute respiratory syndrome (SARS) [1], avian influenza H5N1 infection [2], and the 2009 pandemic influenza (H1N1) infection [3]. The course may progress rapidly to acute respiratory distress syndrome (ARDS) and multi-organ failure, requiring intensive care. Noninvasive ventilation (NIV) may play a supportive role in patients with severe viral pneumonia and early ARDS/acute lung injury. It can act as a bridge to invasive mechanical ventilation, although it is contraindicated in critically ill patients with hemodynamic instability and multi-organ dysfunction syndrome [4]. Transmission of some of these viral infections can convert from a droplet to an airborne route during respiratory therapy. Because NIV augments the dispersion of particles generated during tidal breathing [8], it may disperse potentially infected aerosols, especially when patients cough and sneeze frequently, contributing to nosocomial transmission of influenza. Pulmonary tuberculosis (TB) is well known to spread by the airborne route, and a recent study showed that a minority of patients with pulmonary TB (28%) produced culturable cough aerosols [9]. Thus, it is important to examine exhaled air directions and dispersion distances during application of NIV to patients with respiratory failure via commonly used face masks. Such data can improve our understanding of infection control and facilitate the development of preventive measures to reduce the risk of nosocomial transmission during application of NIV to high-risk patients with respiratory infections.

Methods

As there is no reliable, safe marker that can be introduced into human lungs for experimental purposes, the laser smoke visualization method and the human patient simulator (HPS) model have been adopted for studying exhaled air dispersion during application of various types of respiratory therapy in hospital medical wards, including the negative-pressure isolation room [10-13].

NIV and Lung Model

The HPS represents a 70-kg adult man sitting on a 45°-inclined hospital bed (Fig. 2.1). The HPS contains a realistic airway and is programmed to remove oxygen and inject carbon dioxide into the system according to a preset respiratory exchange ratio and oxygen consumption. The lung compliance can also be changed to simulate different degrees of lung injury during chest infection. By varying the oxygen consumption (200, 300, and 500 ml/min) and lung compliance (70, 35, and 10 ml/cmH2O), these sets of values produce a range of tidal volumes, respiratory rates, and peak inspiratory flows similar to those of patients with minimal (essentially normal lung function), moderate, or severe lung injury, respectively. For example, lung compliance is set at 35 ml/cmH2O and oxygen consumption at 300 ml/min to mimic moderate lung injury. Tidal volume and respiratory rate are regulated so that a respiratory exchange ratio of 0.8 is maintained during measurements. Typically, this is achieved with a tidal volume of 300 ml and a respiratory rate of 25 breaths/min [10-13]. Lung compliance and airway resistance also respond in a realistic manner to relevant respiratory challenges. The HPS produces an airflow pattern that is close to the in vivo situation. It has been applied in previous studies to simulate human respiration [14-17].
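To make the simulator settings concrete, the short sketch below works out the arithmetic they imply: the CO2 injection rate that follows from a respiratory exchange ratio (RER = VCO2/VO2) of 0.8, and the minute ventilation given the quoted tidal volume and respiratory rate. This is a minimal illustration using only the numbers stated above; the function names are ours, not part of the HPS software.

# Ventilation arithmetic implied by the HPS settings quoted in the text.
# Function names are illustrative; only the stated numbers are used.

def minute_ventilation_l_min(tidal_volume_ml, resp_rate_per_min):
    # Minute ventilation = tidal volume x respiratory rate.
    return tidal_volume_ml * resp_rate_per_min / 1000.0

def co2_injection_ml_min(o2_consumption_ml_min, rer=0.8):
    # Respiratory exchange ratio RER = VCO2 / VO2, so VCO2 = RER * VO2.
    return rer * o2_consumption_ml_min

# Moderate lung injury scenario: VT = 300 ml, RR = 25 breaths/min, VO2 = 300 ml/min.
print(minute_ventilation_l_min(300.0, 25.0))  # 7.5 L/min
print(co2_injection_ml_min(300.0))            # 240.0 ml/min of CO2 injected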
Deliberate leakage from the exhalation ports of the Mirage mask (ResMed, Bella Vista, NSW, Australia) [10], ComfortFull 2, and Image 3 masks (Respironics, Murrysville, PA, USA) [11] firmly attached to a high-fidelity HPS (HPS 6.1; Medical Education Technologies, Sarasota, FL, USA) has been evaluated. NIV was applied using a bilevel positive airway pressure device (VPAP III ST; ResMed) via each mask. The inspiratory positive airway pressure (IPAP) was initially set at 10 cmH2O and gradually increased to 18 cmH2O. The expiratory positive airway pressure (EPAP) was maintained at 4 cmH2O throughout the study [10, 11].

Flow Visualization

Visualization of airflow around each NIV face mask was facilitated by marking the air with smoke particles produced by an M-6000 smoke generator (N19; DS Electronics, Sydney, Australia), as in our previous studies [10-13]. The oil-based smoke particles, measuring less than 1 μm in diameter, are known to follow the airflow pattern precisely with negligible slip [18]. The smoke was introduced continuously to the right main bronchus of the HPS. It mixed with alveolar gas and then was exhaled through the airway. Sections through the leakage jet plume were then revealed by a thin, green laser light sheet (532 nm wavelength, continuous-wave mode) created by a diode-pumped solid-state laser (OEM UGH-800 mW; Lambda Pro Technologies, Shanghai, China) with custom cylindrical optics to generate a two-dimensional laser light sheet [10-13]. The light sheet was initially positioned in the median sagittal plane of the HPS and subsequently shifted to paramedian sagittal planes. This allowed us to investigate the regions directly above and lateral to the mask and the patient [10-13]. All leakage jet plume images revealed by the laser light sheet were captured by a high-definition video camera (a Sony high-definition digital video camcorder, HDR-SR8E; Sony, Tokyo, Japan), equipped with a ClearVid complementary metal oxide semiconductor sensor (Sony) and a Carl Zeiss Vario-Sonnar T* lens (Carl Zeiss, Jena, Germany), with an optical resolution of 1,440 × 1,080 pixels per video frame. The normalized smoke concentration in the plume was estimated from the light intensity scattered by the smoke particles [10-13].

[Figure caption: A laser beam located on the right side of the bed, lateral to the human patient simulator, illuminates the exhaled air particles leaking from the exhalation ports of the face mask in the coronal plane. A camera was positioned along the sagittal plane at the end of the bed to capture lateral dispersion of exhaled air illuminated by the laser device. Positions of the camera and the laser device were exchanged when the exhaled air dispersion from the face mask was examined along the sagittal plane.]

Image Analysis

The normalized smoke concentration in the mask leakage air was estimated from the light scattered by the particles. The analysis was based on scattered light intensity being proportional to the particle concentration under the special conditions of constant-intensity laser light sheet illumination and monodispersion of small (submicron) particles [18]. In short, the thin laser light sheet of near-constant intensity illuminated the smoke particle markers in the mask airflow leakage. Smoke particles scattered laser light perpendicular to the light sheet. The pictures were then collected and integrated by the video camera element and lens [10-13].
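The intensity-averaging and normalization steps detailed in the following subsections amount to a short array computation. The sketch below is a minimal NumPy rendering of that pipeline (background subtraction, ensemble averaging over breaths, normalization by the plume maximum); it is our own illustration, not the MathCad program used in the original studies.

import numpy as np

def normalized_concentration(frames, background):
    # frames:     stack of gray-scale frames, shape (n_frames, H, W), captured
    #             at the same instant of the respiratory cycle over many breaths.
    # background: gray-scale image taken with the laser switched off, shape (H, W).
    # Subtract stray background light pixel by pixel, clipping at zero.
    corrected = np.clip(frames.astype(float) - background.astype(float), 0.0, None)
    # Ensemble-average over breaths; scattered intensity is proportional to
    # particle concentration under the stated illumination assumptions.
    mean_intensity = corrected.mean(axis=0)
    # Normalize against the highest intensity in the leakage jet plume so that
    # 1 = air entirely exhaled by the patient and ~0 = no measurable leakage.
    return mean_intensity / mean_intensity.max()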
Image Capture and Frame Extraction

A motion video of at least 20 breathing cycles for each NIV setting was captured, and individual frames were extracted as gray-scale bitmaps for intensity analysis. Frames were extracted at time points starting from the beginning of each inspiration to generate an ensemble average for the corresponding instant of the respiratory cycle [10-13]. The time at which the normalized concentration contours spread over the widest region from the NIV mask was chosen for the ensemble average to estimate the greatest dispersion distance. This was found to be approximately at the mid-respiratory cycle [10, 11].

Intensity Averaging and Concentration Normalization

All gray-scale frames were read into a program specifically developed for these studies [10-13] (MathCad 8.0; MathSoft, Cambridge, MA, USA) [19], along with the background intensity images obtained with the laser switched off. The background intensity image was subtracted from each frame, pixel by pixel, to remove any stray background light. The pixel intensity values were averaged over all frames to determine the average intensity. The resulting image was the total intensity of light scattered perpendicular to the light sheet by the smoke particles. It was directly proportional to the smoke concentration under the conditions mentioned above. The image was normalized against the highest intensity found within the leakage jet plume to generate normalized particle concentration contours [10-13]. As the smoke particles marked air that originated from the HPS's airways before leaking from the mask, the concentration contours effectively represent the probability of encountering air around the patient that has come from within the mask and the patient's respiratory system. The normalized concentration contours are made up of data collected from at least 20 breaths. A contour value of 1 indicates a region that consists entirely of air exhaled by the patient, where there is a high chance of exposure to the exhaled air, such as at the mask exhaust vents. A value near 0 indicates no measurable air leakage in the region and a small chance of exposure to the exhaled air [10-13].

Results

The results are presented with reference to the median sagittal plane.

Noninvasive Positive-Pressure Ventilation Applied via the ResMed Mirage Mask

With the ResMed Mirage mask, a jet plume of air escaped through the exhaust holes to a distance of approximately 0.25 m radially during application of IPAP 10 cmH2O, with some leakage from the nasal bridge. The leakage jet probability was highest about 60-80 mm lateral to the sagittal plane of the HPS. Without nasal bridge leakage, the plume jet from the exhaust holes increased to a 0.40-m-radius circle, and exposure probability was highest about 0.28 m above the patient. When IPAP was increased to 18 cmH2O, the vertical plume extended to about 0.5 m above the patient and the mask, with some horizontal spread along the ward roof [10].

Noninvasive Positive-Pressure Ventilation Applied via the ComfortFull 2 Mask

With the ComfortFull 2 mask, a vertical, cone-shaped plume leaked out from the mask exhalation diffuser and propagated well above and almost perpendicular to the patient at an IPAP and an EPAP of 10 and 4 cmH2O, respectively.
The maximum dispersion distance of smoke particles, defined as the boundary with a region encountering <5% normalized concentration of exhaled air (light blue contour on the smoke concentration scale), was 0.65 m, whereas that of a high concentration (containing >75% normalized concentration of exhaled air, red zone and above) was 0.36 m. There was no significant room contamination by exhaled air (as reflected by the blue background in the isolation room) other than the exhalation jet plume [11]. When the IPAP was increased from 10 to 14 cmH2O, the maximum dispersion distance of low-concentration exhaled air was similar at 0.65 m, but that of high-concentration exhaled air increased to 0.40 m, with contamination of the isolation room. Also, there was some exhaled air concentration outside the exhalation jet plume. When IPAP was increased to 18 cmH2O, the dispersion distance of low-concentration exhaled air was 0.85 m, whereas that of high-concentration exhaled air increased to 0.51 m along the median sagittal plane. More background contamination of the isolation room by smoke particles was noted at higher IPAPs owing to interaction between the downstream ceiling-mounted ventilation vent and the upstream exhaled air from the HPS (images at left in Fig. 2.2) [11].

Noninvasive Positive-Pressure Ventilation Applied via the Image 3 Mask Connected to the Whisper Swivel

The Image 3 mask required an additional exhalation device (whisper swivel) to prevent carbon dioxide rebreathing. The exhaled air leakage was much more diffuse than that with the ComfortFull 2 mask because of the downstream leakage of exhaled air through the whisper swivel exhalation port. At an IPAP of 10 cmH2O, the maximum dispersion distance of a low concentration of exhaled air (light blue zone on the smoke concentration scale) was 0.95 m toward the end of the bed, whereas that of a medium concentration (containing >50% of the normalized concentration of exhaled air, green zone and above) was about 0.6 m along the median sagittal plane. As the IPAP was increased from 10 to 14 cmH2O, the exhaled air with a medium concentration increased to 0.95 m toward the end of the bed along the median sagittal plane of the HPS [11]. When the IPAP was increased to 18 cmH2O, the exhaled air with a low concentration dispersed diffusely to fill most of the isolation room (i.e., beyond 0.95 m, as captured by the camera), whereas that with a medium concentration, occupying a wider air space, was noted to spread 0.8 m toward the end of the bed, with accumulation of a high concentration of exhaled air (red zone on the scale) within 0.34 m from the center of the mask, along the median sagittal plane of the HPS (images on the right in Fig. 2.2) [11].

Discussion

There is no reliable, safe marker that can be introduced into human lungs for experimental purposes. Hence, the maximum distribution of exhaled air, marked by very fine smoke particles, from the HPS during application of NIV using three face masks was examined by the laser smoke visualization method on a high-fidelity HPS model. The studies showed that the maximum distance of exhaled air particle dispersion from patients undergoing NIV with the ResMed Ultra Mirage mask was 0.5 m along the exhalation port [10].
In contrast, the dispersion distances of a low normalized concentration of exhaled air through the ComfortFull 2 mask exhalation diffuser increased from 0.65 to 0.85 m in a direction perpendicular to the head of the HPS along the sagittal plane when IPAP was increased from 10 to 18 cmH2O. There was also more background contamination of the isolation room at the higher IPAP [11]. Even when a low IPAP of 10 cmH2O was applied to the HPS via the Image 3 mask connected to the whisper swivel exhalation port, the exhaled air leaked far more diffusely than from the ComfortFull 2 mask, dispersing a low normalized concentration to 0.95 m along the median sagittal plane of the HPS. The higher IPAP resulted in wider spread of a higher normalized concentration of smoke around the HPS in the isolation room with negative pressure [11]. Simonds et al. [20] applied the laser visualization method with an optical particle sizer (Aerotrak 8220; TSI Instruments, High Wycombe, UK) to assess droplet dispersion during application of NIV in humans and showed NIV to be a droplet- (not aerosol-) generating procedure, producing droplets measuring >10 μm. Most of them fell onto local surfaces within 1 m of the patient. Noninvasive ventilation is an effective treatment for patients with respiratory failure due to COPD, acute cardiogenic pulmonary edema, or pneumonia in immunocompromised patients. However, evidence supporting its use in patients with pneumonia is limited. NIV was applied to patients with severe pneumonia caused by the 2009 pandemic influenza (H1N1) infection with a success rate of about 41%. Although there were no reported nosocomial infections [21], there is a potential risk in applying NIV to patients hospitalized with viral pneumonia on a crowded medical ward with inadequate air changes [7]. In this regard, deliberate leakage via the exhalation ports may generate droplet nuclei and disperse infective aerosols through evaporation of the water content of respiratory droplets, resulting in a superspreading event. Nonetheless, NIV was applied using a single circuit to treat patients effectively with respiratory failure due to SARS in hospitals with good infection control measures (including installation of powerful exhaust fans to improve the room air change rate and good personal protective equipment at a level suitable against airborne infection). There were no nosocomial infections among the health care workers involved [22, 23]. In contrast, a case-control study involving patients in 124 medical wards of 26 hospitals in Guangzhou and Hong Kong identified the need for oxygen therapy and use of NIV as independent risk factors for superspreading of nosocomial SARS outbreaks [6]. Similarly, a systematic review has shown a strong association between ventilation, air movement in buildings, and airborne transmission of infectious diseases such as measles, tuberculosis, chickenpox, influenza, smallpox, and SARS [24]. These studies with the HPS model [10, 11] and in humans [20] have important clinical implications for preventing future nosocomial outbreaks of SARS and other highly infectious conditions such as pandemic influenza when NIV is provided. NIV should be applied in patients with severe community-acquired pneumonia only if there is adequate protection for health care workers, because of the potential risk of transmission via deliberate or accidental mask interface leakage and flow compensation causing dispersion of a contaminated aerosol [10, 11].
Pressure necrosis may develop in the skin around the nasal bridge if the NIV mask is applied tightly for a prolonged period of time, and many patients loosen the mask strap to relieve discomfort. Air leakage from the nasal bridge is definitely a potential means of transmitting viral infections, so fitting a mask carefully is important for successful, safe application of NIV. Addition of a viral/bacterial filter to the breathing system of NIV, between the mask and the exhalation port, or using a dual-circuit NIV via full face mask or helmet without heated humidification may reduce the risk of nosocomial transmission of a viral infection [11, 25]. In view of the observation that higher ventilator pressures result in wider dispersion of exhaled air and more air leakage [10, 11], it is advisable to start NIV with a low IPAP (8-10 cmH2O) and increase it gradually as necessary. The whisper swivel is an efficient exhalation device to prevent carbon dioxide rebreathing, but it would not be advisable to use such an exhalation port in patients with febrile respiratory illness of unknown etiology. This is especially true in the setting of an influenza pandemic with a high potential of human-to-human transmission, for fear of causing a major outbreak of nosocomial infections. It is also important to avoid the use of high IPAP, which could lead to wider distribution of exhaled air and substantial room contamination [11]. There are some limitations regarding the use of smoke particles as markers for exhaled air. The inertia and weight of large droplets in an air-droplet two-phase flow would certainly cause them to have less horizontal dispersion than occurs with the continuous air carrier phase, as the droplets travel with greater inertia and drag. However, evaporation of the water content of some respiratory droplets during coughing or sneezing when exposed to NIV may produce droplet nuclei suspended in air, whereas the large droplets fall to the ground in a trajectory pathway [10-13]. As smoke particles mark the continuous air phase, the data contours described refer to exhaled air. The results would therefore represent "upper bound" estimates for dispersion of the droplets, which would be expected to follow a shorter trajectory than an air jet due to gravitational effects, but they do not fully reflect the risk of large-droplet transmission [10-13]. In summary, the laser visualization technique using smoke particles as a marker in the HPS model is a feasible means of assessing exhaled air dispersion during application of NIV and other modes of respiratory therapy [10-13]. Substantial exposure to exhaled air occurs within 1 m of patients undergoing NIV in an isolation room with negative pressure via the ComfortFull 2 mask and the Image 3 mask connected to the whisper swivel exhalation port. It must be noted that there is far more extensive leakage and room contamination with the Image 3 mask, especially at higher IPAPs [11]. Health care workers should take adequate precautions for infection control. They especially must pay attention to environmental air changes when providing NIV support to patients with severe pneumonia of unknown etiology complicated by respiratory failure.
Brown seaweed extract enhances rooting and roots growth on Passiflora actinia Hook stem cuttings

Passiflora actinia Hook (common name: 'maracujá do mato') is an important medicinal species due to its significant sedative and anxiolytic activities. In order to commercially exploit the plant, however, studies on propagation to improve rooting in stem cuttings are needed. The present study was designed to evaluate the effect of brown seaweed (Ascophyllum nodosum (L.) Le Jol.) extract applied to the bases of P. actinia stem cuttings. Five concentrations of the extract in distilled water were evaluated: 0% (pure distilled water), 10%, 20%, 30% and 40%. The experimental design was completely randomized with 4 replications and 12 cuttings per experimental unit. Cuttings were evaluated 45 days after planting. Data were analyzed through polynomial regression analysis, and Pearson's correlation coefficients were calculated for all the variables. On average, 51.27% rooting was achieved. Rooting percentage increased linearly with the brown seaweed extract concentrations. When compared to the control treatment, about a 10% increase in rooting was observed in the treatment with 40% seaweed extract. The leaf retention response to increasing A. nodosum concentrations was also adjusted to a positive linear model. A 15.6% increase in leaf retention was observed at the 40% seaweed concentration when compared to the control. Positive correlations were observed between leaf retention and rooting percentage and between leaf retention and roots length. The immersion of P. actinia stem cutting bases in A. nodosum extract at a concentration of 40% promotes positive effects on rooting and facilitates the species' propagation.

INTRODUCTION

The Passifloraceae family comprises about 19 genera and 530 species dispersed in tropical and subtropical regions, especially in America and Africa. The genus Passiflora is the most representative of the family, with approximately 400 species (BERNACCI, 2003). Passiflora species are widely used in traditional medicine, mainly as sedatives, anxiolytics and anticonvulsants (DHAWAN et al., 2004). Studies report the presence of diverse phytochemical compounds in Passiflora, such as flavonoids, phenolic acids, coumarins, phytosterols, cyanogenic heterosides, maltol and alkaloids (ZUANAZZI and MONTANHA, 2004). Among the bioactive compounds conferring the calming effects attributed to the genus, the C-glycosyl flavonoids (SANTOS et al., 2016), chrysin (ZANOLI et al., 2000) and maltol (AOYAGI et al., 1974) are some of the most important.

Passiflora actinia Hook (common name: 'maracujá-do-mato') is a Brazilian native species widely distributed in the Southern states (SANTOS et al., 2016). This species stands out due to its significant sedative and anxiolytic activities, mainly related to the leaves' major compound, isovitexin, which acts on benzodiazepine gamma-aminobutyric acid-A receptors (SANTOS et al., 2016; LOLLI et al., 2007). In addition to the secondary metabolites of medicinal interest, the species is extensively appreciated for human and animal feeding (LIMA et al., 2007) and is also widely recognized for its use as a rootstock for commercial passionfruit farming, due to its tolerance to low temperatures (PIRES et al., 2009).
The study of suitable methods for propagation is one of the fundamental aspects of the agronomic exploration of plant species (NUNES GOMES and KRINSKI, 2016b). Vegetative propagation by stem cuttings is one of the most widely used propagation methods in the commercial production of various medicinal, fruit and ornamental crops. Among the advantages of this type of propagation, the reproduction of stock plant characteristics, the uniformity of populations and the operational ease stand out (HARTMANN et al., 2011). Several factors can influence vegetative reproduction success. Types of cuttings, substrates, environmental humidity, hormonal balance, stock plant health, physiological conditions, and genetic characteristics are some of the main aspects pointed out as influential for stem cutting adventitious rooting (ZUFFELLATO-RIBAS and RODRIGUES, 2001; NUNES GOMES and KRINSKI, 2016a; BISCHOFF et al., 2017; PIGATTO et al., 2018).

In addition, exogenously applied plant regulators may have positive effects on stem cutting rooting. In a previous study on P. actinia vegetative propagation, the use of ethanol or indolebutyric acid (IBA) increased neither rooting nor root development (KOCH et al., 2004). In this context, the study of different plant growth regulators with auxin-like effects can provide important tools to improve this species' propagation. Some biostimulants are reported to have these effects, promoting rooting and root growth in several plant species, and represent a growing tendency for use in sustainable agriculture (NARDI et al., 2016).

The extract of the brown seaweed Ascophyllum nodosum (L.) Le Jol. is classified as a plant growth regulator, and its effects on plant growth and development vary with the extract concentration, mode of application and plant species (CRAIGIE, 2011). In terms of composition, the brown seaweed extract is a natural source of macro- and micronutrients (N, P, K, Ca, Mg, S, B, Fe, Mn, Cu and Zn), amino acids (alanine, aspartic and glutamic acid, glycine, isoleucine, leucine, lysine, methionine, phenylalanine, proline, tyrosine, tryptophan and valine), cytokinins, auxins, and abscisic acid (KOYAMA et al., 2012). Some of these components, mainly auxins and tryptophan (an auxin precursor) and micronutrients like B and Zn, are largely recognized as important tools to improve adventitious rhizogenesis in stem cuttings (ZUFFELLATO-RIBAS and RODRIGUES, 2001; TAIZ and ZEIGER, 2013).

Considering this context, the present study was conducted to evaluate the effects of treating the bases of Passiflora actinia stem cuttings with different concentrations of the brown seaweed extract.

MATERIAL AND METHODS

Passiflora actinia plant material (branches with leaves) was collected during the morning on September 18, 2016 in Curitiba, state of Paraná, Brazil (25°24'53"S, 49°18'12"W, 934 m altitude). The region's climate is classified as Cfb according to the Köppen classification, characterized by mild summers, cold and dry winters, rains evenly distributed throughout the year and frequent occurrence of frosts.
The plant material was moistened and carefully placed in black plastic bags to be transported to the greenhouse where the stem cuttings were made. Branches 2.3 ± 0.3 mm in diameter were selected to prepare cuttings 10 cm long, with a straight cut at the apex and a bevel (diagonal) cut at the base. One leaf reduced to half of its original area was kept on the apex of each cutting (Figures 1A and 1C). Subsequently, the propagules underwent phytosanitary treatment in a 0.5% sodium hypochlorite solution for 10 minutes and were then washed in running water for 5 minutes.

After the sanitary treatment, the stem cuttings had their bases immersed for 2 minutes in solutions of brown seaweed extract diluted in distilled water at the following concentrations: 0%, 10%, 20%, 30% and 40% (e.g., the 40% solution = 0.4 mL seaweed extract diluted in 0.6 mL distilled water). The experimental design was completely randomized, with 5 treatments, 4 replications and 12 cuttings per experimental unit, totaling 240 stem cuttings. The extract used in this experiment was a commercial concentrated water-soluble liquid extract, manufactured by Acadian Seaplants® from the brown alga Ascophyllum nodosum (L.) Le Jol.

Following the treatments, stem cuttings were planted in 120 cm³ polypropylene containers filled with previously moistened commercial substrate Tropstrato HT® (Vida Verde - Tecnologia em Substratos™, Brazil). After planting, cuttings were kept in a greenhouse with intermittent misting until evaluation. Greenhouse air relative humidity was kept above 80% and the temperature ranged between 20 and 30 °C during the period of the experiment.

At 45 days after planting, plants were evaluated for the following variables: rooting percentage (cuttings with roots longer than 0.1 cm), average number of roots per cutting, average length of roots per cutting (cm), calli formation percentage (cuttings with an undifferentiated mass of cells at the base, as seen in Figures 1B and 1D), leaf retention (cuttings that kept the original apical leaf) and, ultimately, survival percentage. Treatment variances were evaluated for homogeneity by Bartlett's test at 5% probability and, when homogeneous, data were submitted to polynomial regression analysis (5% and 1% probability). Assistat 7.7 (SILVA and AZEVEDO, 2016) statistical software was used to perform these analyses. Pearson's correlation coefficients were calculated for all the variables using R software (R CORE TEAM, 2016).

RESULTS AND DISCUSSION

According to the variance analysis, there was a significant dose-dependent effect for all analyzed variables in Passiflora actinia Hook stem cuttings treated with the brown seaweed extract (Figure 2). Products derived from marine algae represent a relatively recent technology in Brazil and are a potential alternative for agronomic applications, mainly through the promotion of plant growth and development (DURAND et al., 2003). According to the scientific literature, P.
actinia stem cuttings root easily, achieving rooting rates of at least 40% without exogenously applied plant growth regulators (CHAVES et al., 2004; KOCH et al., 2004; ALBUQUERQUE JUNIOR et al., 2013). The present study corroborates these findings, with an average rooting percentage of 51.27% (Figures 1A and 2A). Despite the good performance in the control treatment, rooting percentage increased linearly with the brown seaweed extract concentrations. When compared to the control treatment, about a 10% increase in rooting was observed in the treatment with 40% seaweed extract (Figure 2A). The same linear responses, according to polynomial regression analysis, were verified for average roots length and number of roots per cutting (Figures 2B and 2C, respectively). The positive effects of Ascophyllum nodosum extracts on stem cutting rooting and root growth can be primarily attributed to the presence of auxins, since these extracts are known to contain considerable amounts of indole-3-acetic acid (SANDERSON et al., 1987). It is a well-known fact that exogenously applied auxins act on the activation of vascular cambial cells, promoting adventitious root emission and growth in stem cuttings (HARTMANN et al., 2011).

In addition to the exogenous auxin source, according to the data reviewed by Koyama et al. (2012), Ascophyllum nodosum extracts contain significant amounts of amino acids such as aspartic and glutamic acids, glycine, tyrosine, and tryptophan. These amino acids have been reported to enhance rooting and root number in plant species when applied in culture media and/or are correlated with better rooting performance in stem cuttings (ORLIKOWSKA et al., 1992; DUTRA et al., 2002).

Other important components of the brown seaweed extract are macro- and micronutrients such as B and Zn (KOYAMA et al., 2012), two of the mineral nutrients most used to improve rooting in stem cuttings. Zinc can enhance rooting because it is involved in the biosynthesis of tryptophan, an auxin precursor (SCHWAMBACH et al., 2005). Boron, in turn, plays an important role in cell elongation and is considered a rooting cofactor, since it acts synergistically with endogenous auxin, facilitating its transport through the cell membranes (SANTOS et al., 2010; TAIZ and ZEIGER, 2013). Taken together, these components can explain the better performance in rooting and root growth of P. actinia stem cuttings and also highlight the possibilities of using these extracts as alternative plant growth regulators.

Regarding calli formation, the responses to Ascophyllum nodosum extract concentrations were represented by a quadratic model, with a higher callogenesis percentage (65%) in the control treatment (Figure 2D). Despite the high rates of calli formation, it is important to mention that adventitious roots in P. actinia did not differentiate from calli; that is, the species underwent direct rhizogenesis. In some cuttings, both calli and root formation were observed, with no direct correlation between the two processes (Figures 1B and 1C). According to Hartmann et al. (2011), the processes of calli and adventitious root formation are independent, and their simultaneous occurrence is explained by the fact that both involve intense cell division and depend on favorable environmental conditions. The high callogenesis percentage, in this case, can be indicative of an adequate rooting environment.
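The dose-response analysis reported above can be illustrated in a few lines. In the sketch below, the rooting percentages are hypothetical placeholders (the paper reports only the fitted curves in Figure 2 and the 51.27% overall mean), so the numbers demonstrate the regression procedure, not the published coefficients.

# Illustrative linear dose-response fit of rooting percentage on extract
# concentration. The rooting values below are hypothetical placeholders;
# only the 0-40% concentration levels come from the experiment.
import numpy as np
from scipy import stats

concentration = np.array([0.0, 10.0, 20.0, 30.0, 40.0])   # % A. nodosum extract
rooting_pct = np.array([46.0, 49.0, 51.5, 53.5, 56.0])    # hypothetical means

fit = stats.linregress(concentration, rooting_pct)
print(f"rooting% = {fit.intercept:.2f} + {fit.slope:.3f} * concentration")
print(f"R^2 = {fit.rvalue ** 2:.3f}, p = {fit.pvalue:.4f}")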
The adequate rooting environment can also be observed in the high survival percentages, with an average rate of 89.09%. For this variable, the linear model was, among those tested, the one that presented statistical significance (p ≤ 0.05) (Figure 2F). However, due to the low coefficient of determination (R² = 0.46), the model equation is not reliable for explaining the plant response to the extract, similarly to the results reported by Fragoso et al. (2017) on cherry tree stem cuttings treated with different IBA concentrations. It is possible to affirm for this variable, however, that P. actinia cuttings present high resistance to mortality, that the rooting environment provided adequate conditions for cutting survival and, ultimately, that the seaweed extract did not jeopardize plant material survival.

The leaf retention response to increasing A. nodosum concentrations was adjusted to a positive linear model (Figure 2E). An increase of 15.6% was observed in leaf retention at the 40% seaweed concentration when compared to the control. The physiological response of leaf maintenance in P. actinia cuttings may be attributed to the presence of several natural cytokinins in A. nodosum extract, mainly zeatin, dihydrozeatin, isopentenyl adenine and isopentenyl adenosine (SANDERSON and JAMESON, 1986). It is a well-known fact that cytokinins coming up through the xylem to the leaves play an important role in retarding leaf yellowing, blade abscission, petiole abscission and, to a lesser extent, pod development (GARRISON et al., 1984).

The effects on leaf maintenance can also be a reasonable explanation for the positive effects of A. nodosum on P. actinia stem cutting rooting. The studies from Albuquerque Junior et al. (2013) and Lima et al. (2007) clearly demonstrate a positive correlation between leaf maintenance and rooting in this species. In the present study, a significant positive correlation was also observed between leaf retention and rooting percentage and between leaf retention and roots length (Figure 3). The presence of leaves is an important feature for rooting, especially when using herbaceous or semi-hardwood stem cuttings, because adventitious root initiation and development are dependent on auxins, carbohydrates, and rooting cofactors that are supplied primarily by the leaves in this type of propagule (BONA and BIASI, 2010).

To the best of our knowledge, the present study is one of the first attempts to use Ascophyllum nodosum extract as a plant growth regulator to promote rooting in stem cuttings of a medicinal species, and it demonstrates the versatility of this product for use in several segments of sustainable agriculture.

CONCLUSIONS

Passiflora actinia stem cuttings root easily, have elevated survival rates and present direct rhizogenesis. Immersion of stem cutting bases for 2 minutes in a 40% concentration of Ascophyllum nodosum extract enhances rooting percentage, root number and root length, as well as promoting a higher rate of leaf retention. Leaf retention has a positive correlation with rooting percentage and roots length in P. actinia stem cuttings.

Figure 1. Passiflora actinia Hook stem cuttings at 45 days after planting. A: general overview of the effects of different concentrations of brown seaweed extracts. B: calli and root formation in one stem cutting, evidencing that adventitious roots did not differentiate from calli. C: rooted stem cutting evidencing the leaf retention. D: stem cuttings with calli formation and no rooting.

Figure 2. Polynomial regression analysis and respective equations, coefficients of determination (R²) and coefficients of variation (CV) for the variables rooting percentage (a), number of roots per cutting (b), roots length (c), calli formation (d), leaf retention (e) and survival (f) in Passiflora actinia Hook stem cuttings treated with increasing concentrations of the brown seaweed extract. *significant at 5% probability. **significant at 1% probability.

CONTRIBUTIONS

E.N.G. (0000-0002-7999-070X): conception of the study, implementation and final evaluation of the greenhouse study, data analysis and interpretation, drafting and critical revision of the article, final approval of the version to be published. L.M.V. (0000-0002-9336-860X): conception of the study, implementation and final evaluation of the greenhouse study, material and methods section writing. J.C.T. (0000-0002-4333-3521): implementation and final evaluation of the greenhouse study, correlation analysis, results and discussion section writing. M.M.T. (0000-0002-9946-2701): implementation and final evaluation of the greenhouse study, correlation analysis and introduction section writing. R.L.G. (0000-0002-7493-0732): implementation and final evaluation of the greenhouse study, introduction section writing. C.M.F. (0000-0003-1224-6772): final evaluation of the greenhouse study, correlation analysis, results and discussion section writing. R.C.B.M. (0000-0002-7793-216X): implementation and final evaluation of the greenhouse study.
INHERITANCE OF RESISTANCE TO Fusarium oxysporum f. sp. phaseoli BRAZILIAN RACE 2 IN COMMON BEAN

Aiming to obtain information concerning the genetic control of resistance of the common bean (Phaseolus vulgaris L.) to the fungus Fusarium oxysporum f. sp. phaseoli, six crosses involving three resistant lines (Carioca MG, Esal 583 and Esal 566) and four lines susceptible to the fungus (Carioca, CNFC 10443, Uirapuru and Esal 522) were developed. The parental lines, the controls (Carioca MG and Carioca) and the F1, F2 and F3 generations were evaluated for reaction to Fusarium. For inoculation, the trimmed roots were immersed in a spore suspension. The evaluations were performed at 21 days after inoculation, using a grade scale ranging from 1.0 to 9.0, and genetic and phenotypic parameters were estimated. The heritability ranged from 0.34 to 0.42 in the narrow sense and from 0.76 to 0.97 in the broad sense, showing that selection should be easy, provided that efficient inoculation and selection methods are used. The average degree of dominance was around 1.0, indicating the presence of dominance in the control of the character, although additive effects are also expressive.

INTRODUCTION

Fusarium wilt, caused by the fungus Fusarium oxysporum f. sp. phaseoli Kendrick & Snyder, is a serious vascular disease of common bean (Phaseolus vulgaris L.) in Latin America, in Africa and in the northwestern United States (Buruchara & Camacho, 2000). In Brazil, this disease has attracted special interest in recent years, due to a higher degree of mechanization in the fields, successive plantings and more than one common bean harvest per year. This disease causes vascular wilt, due to vessel colonization. The main reflex symptom is the progressive yellowing from the lower towards the upper leaves.

Information on the number of pathogen races and the variability within races is limited. Seven pathogen races, well distributed across the continents, are documented in the literature. In Brazil, there are indications of the predominance of one race, designated race 2 (Alves-Santos et al., 2002).

The existence of variability in the reaction to this pathogen was reported on several occasions (Salgado & Schwartz, 1993; Woo et al., 1996; Sala et al., 2006; Pereira et al., 2008). There are some reports on the genetic control of resistance; the first was probably the study of Ribeiro & Hagedorn (1979), who identified the presence of only one gene with a dominant allele conferring resistance. More recently, other results were reported for the genetic control of resistance (Salgado et al., 1995; Cross et al., 2000; Brick et al., 2004). No report was found on the genetic control of resistance to this pathogen using exclusively germplasm developed in Brazil, nor any information about the use of estimated mean and variance components to study this trait.

Thus, this study was conducted with the objective of obtaining information on the genetic control of resistance to race 2 of Fusarium oxysporum f. sp. phaseoli.

MATERIAL AND METHODS

The experiments were conducted in an experimental area located in Lavras, State of Minas Gerais, Brazil. Seven common bean lines previously evaluated for the reaction to Fusarium oxysporum f.
sp. phaseoli (Pereira et al., 2008) were chosen for the crosses. The group of resistant lines comprised Carioca MG, Esal 566 and Esal 583, and the group of susceptible lines comprised Carioca, Uirapuru, CNFC 10443 and Esal 522, all with carioca grain type, with the exception of Uirapuru, which has black grain.

F1 seeds were sown in the field in August 2006 to obtain the F2 generation. The F2 seeds and those of the parental lines were sown again in December 2006 to obtain the F3 generation and to maintain the parental lines. Part of the F1 seeds was stored for use in later evaluations.

Five plants of the parents and of the controls (Carioca, susceptible, and Carioca MG, resistant), 10-16 plants of the F1, 100 plants of the F2 and 200 plants of the F3 generation were evaluated. The methodology of immersing the trimmed root system in a spore suspension was used for the inoculations (Pastor-Corrales & Abawi, 1987). Sowing was performed in styrofoam trays with 128 cells filled with a horticultural substrate (Plantmax®) that were placed in a greenhouse for plant germination and growth.

We used an isolate of the pathogen, collected from the cultivar Carioca in the 2005/2006 season. After isolation, the pathogen was maintained in a BOD incubator at a temperature of 24 ± 1°C, under continuous lighting. To promote sporulation, the fungus was conserved in test tubes containing PDA (potato dextrose agar) culture medium, immersed in mineral oil. The spore suspension was prepared minutes before each inoculation.

The plants were inoculated 9-10 days after sowing (first pair of unifoliolate leaves fully expanded). For this purpose, the plants were removed from the trays, the root system was carefully washed in tap water, 1/3 of its length was cut off with a pair of scissors, and the roots were immersed in the spore suspension (10⁶ conidia mL⁻¹) for five minutes. Roots of the control plants were immersed in distilled water. The plants were then transplanted to pots containing substrate. During the evaluations the pots were kept in a climate chamber at 22 ± 2°C, with a photoperiod of 12 hours. The plants were watered every two days and fertilized with 1.0 g of NPK fertilizer (8-28-16) ten days after inoculation.

In all experiments, the plants were evaluated 21 days after inoculation (DAI), based on the disease severity index developed by CIAT (Pastor-Corrales & Abawi, 1987), in which: 1 = no leaf or vascular symptom; 3 = 1% to 10% of symptomatic leaves, slight wilting of plants and vascular hypocotyl discoloration; 5 = 11% to 25% symptomatic leaves, moderate plant wilting and vascular discoloration up to the first node; 7 = 26% to 50% of symptomatic leaves, severe plant wilting and vascular discoloration on the entire stem and petiole; 9 = dead plant. Lines with mean scores between 1.0 and 3.0 were considered resistant, between 3.1 and 6.0 moderate, and between 6.1 and 9.0 susceptible (Pastor-Corrales & Abawi, 1987; Salgado & Schwartz, 1993; Elena & Papas, 2002).

The analyses consisted in (i) estimating the mean components, based on the model without epistasis, using a procedure similar to that proposed by Cruz et al. (2004), and (ii) estimating the components of variance for the crosses where the F3 generation was available, using the method of iterative weighted least squares, according to Cruz et al. (2004). The estimates were obtained using the software Mapgen (Ferreira & Zambalde, 1997).
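The mean-component model referred to here is the standard additive-dominance model for generation means, with expectations P1 = m + a, P2 = m - a, F1 = m + d, F2 = m + d/2 and F3 = m + d/4. The sketch below solves it by ordinary (unweighted) least squares; the published analysis used iterative weighted least squares in Mapgen, and the generation means used here are hypothetical, so this is a simplified illustration of the procedure only.

# Additive-dominance model for generation means:
#   E[P1] = m + a, E[P2] = m - a, E[F1] = m + d, E[F2] = m + d/2, E[F3] = m + d/4
# Unweighted least-squares sketch; the paper used iterative weighted least
# squares, with weights derived from generation variances and sample sizes.
import numpy as np

# Coefficients of (m, a, d) for P1, P2, F1, F2 and F3, in that order.
X = np.array([
    [1.0,  1.0, 0.00],   # P1 (resistant parent)
    [1.0, -1.0, 0.00],   # P2 (susceptible parent)
    [1.0,  0.0, 1.00],   # F1
    [1.0,  0.0, 0.50],   # F2
    [1.0,  0.0, 0.25],   # F3
])

# Hypothetical observed mean severity grades (1-9 scale) for one cross.
y = np.array([2.0, 7.5, 2.5, 3.7, 4.2])

(m, a, d), *_ = np.linalg.lstsq(X, y, rcond=None)
# d < 0 here means dominance toward resistance (lower severity grades).
print(f"m = {m:.2f}, a = {a:.2f}, d = {d:.2f}, "
      f"average degree of dominance |d/a| = {abs(d / a):.2f}")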
RESULTS AND DISCUSSION

The parents used confirmed the resistance or susceptibility reactions detected in previous evaluations (Pereira et al., 2008). The reactions of the two control lines were also confirmed in all experiments evaluated.

The model used to estimate the mean components, containing only m (the average contribution of the homozygous loci), a (the algebraic sum of the effects of the homozygous loci measured as deviations from the mean, i.e., the additive effect) and d (the deviations of the heterozygotes from the mean, i.e., the dominance effect), was sufficient to explain all the variation observed. The estimates of the coefficient of determination (R²) were higher than 99% (Table 1), indicating, as mentioned above, a well-fitting model. No report of mean component estimates was found for traits associated with pathogen resistance evaluated with grade scales in common bean. However, there are reports of mean components estimated for resistance to Phakopsora pachyrhizi (Asian rust) in soybean (Ribeiro et al., 2007). In most cases these authors also found a well-fitting model with only m, a and d, that is, without epistasis. It is important to mention that these authors used a 0% to 100% disease severity grade scale for leaves.

The estimates of m, a and d for the crosses involving the parents contrasting for resistance were very similar. The estimate of m, for example, varied from 4.57 (cross Carioca MG × Carioca) to 4.92 (Carioca MG × CNFC 10443). The errors associated with the estimate of m were also of small magnitude, with values below 10% of the estimate in all cases (Table 1).

In general, the same observations as for m are valid for the estimate of a, i.e., the additive effect. The estimates were also similar across the crosses, although the error variation associated with the estimate of a was greater. Even then, the associated error may also be considered small (below 20% in all cases).

The estimated contributions of the loci in heterozygosis, indicating the presence of dominance (d), were also similar to the estimates of a (Table 1). The average degree of dominance varied from 0.61 (cross Esal 583 × Uirapuru) to 1.01 (Esal 566 × Esal 522), indicating dominance in the control of the trait (Cruz et al., 2004). Since the estimate of d was negative, it was inferred that the dominance acts in the sense of conferring resistance, since, by the criterion of grade scales, plants with fewer symptoms have lower grades.

For some crosses where the F3 generation was available, the components of genetic and phenotypic variance were estimated (Table 2). In all cases, the model fit well, with R² higher than 0.92. The estimated additive variance (σ²A) was similar in the crosses: σ²A = 4.72 in the cross Carioca MG × Carioca and σ²A = 5.69 for Carioca MG × Uirapuru. The confidence interval (CI) was of small magnitude and the lower limit of the interval was always positive, indicating that σ²A differed from zero.
The estimated dominance variance (σ²D) for the cross Carioca MG × Carioca was nearly twice as high as σ²A. In the cross Carioca MG × Uirapuru the value was similar to σ²A. The CI of σ²D was also of small magnitude and the lower limit positive, indicating that σ²D differed from zero. The estimates of environmental variance (σ²E) were lower than the components of genetic variance, allowing the conclusion that the environmental influence on the trait is small. It is important to mention that the plants to be inoculated were rigorously controlled, in the inoculation and post-inoculation systems, and that plant development in the pots occurred under controlled environmental conditions.

The average degree of dominance estimated from the variance components was, in all cases, higher than that obtained from the mean components. Estimates from variance components are normally associated with a more pronounced error than those from mean components (Bernardo, 2002). This is most likely the reason why the estimates of the average degree of dominance, especially in the cross Carioca MG × Carioca, were higher than 1.0. However, as stated above for the mean components, dominance may be inferred in the control of the trait.

The heritability estimates in the narrow sense (h²n) were higher than 34% (Table 2). Considering that the evaluation was performed on individual plants, this heritability can be considered of medium magnitude. Unfortunately, no report was found of h²n estimates on individual plants in the pathosystem Fusarium oxysporum f. sp. phaseoli - common bean.

Table 1 - Estimated mean components for the trait severity grades of Fusarium oxysporum f. sp. phaseoli, the standard error associated with each estimate, the average degree of dominance and the coefficient of determination (R²). 1/ significant estimates by the t test at 5% probability; 2/ significant estimates by the t test at 10% probability; NS/ non-significant.

Table 2 - Estimates of additive (σ²A), dominance (σ²D) and environmental (σ²E) variance, with their lower (LL) and upper (UL) limits, heritability in the narrow (h²n) and broad sense (h²b), coefficient of determination (R²) and average degree of dominance for the severity grades (1 to 9) of Fusarium oxysporum f. sp. phaseoli in crosses between common bean lines.
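The derived statistics in Table 2 follow directly from the variance components: h²n = σ²A / (σ²A + σ²D + σ²E), h²b = (σ²A + σ²D) / (σ²A + σ²D + σ²E), and the variance-based average degree of dominance, commonly estimated as sqrt(2σ²D/σ²A). The sketch below encodes these textbook definitions; only σ²A = 4.72 is taken from the text, the other inputs are illustrative placeholders, and the paper's F3-based estimators may use generation-specific coefficients, so treat this as a simplified illustration.

# Textbook definitions behind Table 2. Only sigma2_A = 4.72 comes from the
# text; sigma2_D and sigma2_E below are illustrative placeholders.
import math

def narrow_sense_h2(s2a, s2d, s2e):
    return s2a / (s2a + s2d + s2e)

def broad_sense_h2(s2a, s2d, s2e):
    return (s2a + s2d) / (s2a + s2d + s2e)

def avg_degree_of_dominance(s2a, s2d):
    # Common variance-based estimator: sqrt(2 * sigma2_D / sigma2_A).
    return math.sqrt(2.0 * s2d / s2a)

s2a, s2d, s2e = 4.72, 9.0, 1.5
print(round(narrow_sense_h2(s2a, s2d, s2e), 2))      # ~0.31
print(round(broad_sense_h2(s2a, s2d, s2e), 2))       # ~0.90
print(round(avg_degree_of_dominance(s2a, s2d), 2))   # ~1.95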
Ultrasound versus fluoroscopy-guided medial branch block for the treatment of lower lumbar facet joint pain

Abstract

The aim of this study was to compare the mid-term effects and benefits of ultrasound (US)-guided and fluoroscopy (FL)-guided medial branch blocks (MBBs) for chronic lower lumbar facet joint pain through evaluation of pain relief, functional improvement, and injection efficiency. Patients with chronic lumbar facet joint pain who received US (n = 68) or FL-guided MBBs (n = 78) were included in this retrospective study. All procedures were performed under FL or US guidance. Complication frequency, therapeutic effects, functional improvement, and the injection efficiency of MBBs were compared at 1, 3, and 6 months after the last injection. Both the Oswestry Disability Index (ODI) and the verbal numeric pain scale (VNS) improved at 1, 3, and 6 months after the last injections in both groups. Statistical differences were not observed in ODI and VNS between the groups (P > .05). The proportion of patients who reported successful treatment outcomes showed no significant differences between the groups at the different time points. Logistic regression analysis showed that sex, pain duration, injection method, number of injections, analgesic use, and age were not independent predictors of a successful outcome. US guidance was associated with a significantly shorter performance time. US-guided MBBs did not show significant differences in analgesic effect and functional improvement compared with the FL-guided approach. Therefore, considering the data from this retrospective study, US-guided MBBs warrant consideration in the conservative management of lower lumbar facet joint pain.

Introduction

Lumbar facet joints have been implicated as the cause of chronic pain in 15% to 45% of patients with chronic low back pain (LBP), [1][2][3][4][5] based upon their responses to controlled, diagnostic blocks according to the criteria set by the International Association for the Study of Pain. [6] Treatment effects for facet joint pain have been reported for 3 types of intervention: intraarticular injection, medial branch nerve block (MBB), and neurolysis using radiofrequency. [5] Appropriate management methods for facet joint pain are still a matter of debate. [7][8][9][10][11][12] The long-term therapeutic effects of intra-articular injections for the facet joints have not been satisfactory compared with neurolysis using radiofrequency. [7,13] However, these studies showed that MBBs can be used as an alternative to neurolysis using radiofrequency. [5,7,13] MBBs using computed tomography (CT) or fluoroscopy (FL) have been performed for the diagnosis and treatment of clinical facet joint pain. [14] Patients may be exposed to a considerable radiation dose to identify the symptomatic joint or to rule out facet joint pain during MBBs using CT or FL. However, a US-guided approach can be safe and reliable without radiation exposure or special rooms for radiation devices. [15] Recently, US-guided MBBs demonstrated high success rates, cost-effectiveness, and fewer complications compared with conventional methods. [15][16][17][18] However, in previous studies, only needle location, safety, and short-term therapeutic effects were observed. Hence, this
retrospective study aimed to evaluate mid-term pain and functional improvement in patients undergoing US-guided MBBs in comparison with FL-guided MBBs. In addition, the incidence of adverse events, treatment outcomes, and the efficiency of US guidance (decreased performance time) were also evaluated as secondary outcomes.

Study design

This retrospective, comparative study of chart data maintained patient privacy and data confidentiality throughout the research process. Approval from the institutional review board of the corresponding author's affiliated university was obtained. This study did not involve direct contact with the study group, and all patient identifiers were discarded from the data set at the time of initial collection; thus, a waiver of informed consent was granted.

Subjects

Potential participants were those who received US or FL-guided MBBs at the outpatient clinic of the rehabilitation center from January 2013 to December 2014. Their baseline information was checked by self-assessment questionnaires about pain level and functional status. Electronic clinical records and questionnaire responses were retrospectively reviewed to determine data compliance and inclusion criteria. We selected patients 18 years of age or older who had received US or FL-guided MBBs for the treatment of chronic lumbar spinal pain, with a pain duration of at least 3 months. A local anesthetic block was used to diagnose lumbar facet joint pain. [3,5] Patients who did not respond to conservative care, including analgesic medication and physical therapy, for at least 4 weeks were included in this study. They all reported at least 5 points on the verbal numeric pain scale (VNS) and experienced pain on most days for more than 3 months. Patients with a previous history of spinal stenosis, herniated lumbar disc, psychological problems, any active infectious or inflammatory diseases, rheumatoid disorders, or neurological diseases such as lumbar radiculopathies, Parkinson disease, and stroke were excluded. Patients who had received lumbar-related surgical treatment were also excluded.

Injection methods

FL or US-guided MBBs used in the treatment of symptomatic lower lumbar facet joint pain were common practice in our service. Before the provision of consent, the patient received detailed information regarding the procedure, expected benefits, and risks. The procedures were performed by a physician (Y. Park) with more than 7 years of experience with US and FL-guided procedures. All treatments were performed as outpatient procedures. An Accuvix XQ (Samsung Medison, Seoul, Korea) with a linear probe at 6 to 12 MHz was used as the US device. In accordance with standard practice, we performed a US examination to identify all important structures before skin disinfection and wrapping of the US transducer in a sterile cover. Starting at the sacrum, the sonographic long-axis view begins with the L5 spinous process and median sacral crest (Fig. 1A). [19] Once the long-axis midline is obtained, the transducer is gently shifted laterally until a "saw-tooth" hyperechoic line is seen; this bony structure represents the superior and inferior articular processes of the lumbar laminae (Fig. 1B). [19] As the probe is moved further in the lateral direction, a hyperechoic dotted line representing the transverse processes (TPs) appears, with hyperechoic soft tissue between them (Fig. 1C). The TPs were counted caudally from the highest mark to the 5th lumbar vertebra and sacrum. After completion of the long-axis view scan, the transducer was rotated into a short-axis view to delineate the sacrum as a bony landmark, distinguished by the first distinct midline bony protuberance at the level of the S1 median sacral crest (Fig. 1D). [15,19] From the aforementioned landmark, the transducer was then moved cephalad to visualize the junction of the S1 superior articular process (SAP) with the sacral ala as the anatomical target for the L5 dorsal ramus block (Fig. 1F) and the angle between the SAP and TP as the target for the L3/L4 MBBs (Fig. 1E).

[Figure 1 caption: (A) Long-axis midline view showing the L5 spinous process and median S1 crest (SC). (B) Long-axis view of the lumbar spine with L4/L5 and L5/S1 facet joint contours and the S1 dorsal foramen (arrow). (C) Long-axis view showing the L4 and L5 transverse processes and the sacral ala (SA); the upper edge of the transverse process, or the sacral ala, immediately lateral to the superior articular process is the correct anatomical target (arrow). (D) Short-axis view of the sacrum showing the S1 median crest (arrowhead) and the surface of the sacrum (arrow). (E) Short-axis view of the L4/L5 segment showing the interspinous ligament (ISL), L5 superior articular process (SAP), and L4 transverse process (TP); the target point lies between the L5 superior articular process and the L4 transverse process for an approach toward the right-sided L4 medial branch (arrow). (F) Short-axis view of the lumbosacral segment showing the interspinous ligament (ISL), S1 superior articular process (SAP), the sacral ala (SA), and the iliac crest (IC); the target point is between the S1 superior articular process and the sacral ala for an approach to the right-sided L5 dorsal ramus (arrow).]

After sterile skin preparation, a 23-gauge, 9- or 11-cm spinal needle was directed in the US plane at an angle of about 45° to 60° to the skin, advancing from lateral to medial until the needle tip reached the target and made bony contact (Fig. 2A). [15,16,19] A long-axis paravertebral view was obtained to identify the position of the needle at the cephalad margin of the TP (Fig. 2B). [15,16,18,19] The L5 dorsal ramus block can be technically challenging due to the height of the iliac crest. If the iliac crest covered the field of view, the injection was performed using an out-of-plane (OOP) approach. [19] After identification of the needle position and a negative aspiration test for blood, a volume of 1 mL of a mixture of 1% lidocaine (0.5 mL) and dexamethasone (5 mg/mL at 0.5 mL) was injected under real-time US guidance in the short-axis view. During injection, appropriate needle placement was checked by observing the hypoechoic expansion resulting from the injectate on real-time US. A failure to properly identify hypoechoic expansion may indicate improper placement or intravascular injection; in such cases, the needle was repositioned.

All FL-guided MBBs were performed on prone-positioned patients using a posterior approach. At the L3 to L4 levels, MBBs are done by targeting the junction of the upper border of the TP and the SAP. [20,21] The L5 dorsal ramus is blocked in the groove between the ala of the sacrum and the SAP of S1.
The MBBs were performed on a minimum of 2 nerves to block a single joint. A 22-gauge spinal needle (Spinocan; BRAUN, Melsungen, Germany) was placed on the anatomical target, and 0.2 mL of the nonionic contrast medium Omnipaque 300 (GE Healthcare, Carrigtohill, Co. Cork, Ireland) was injected to test for venous uptake. If venous uptake occurred, the needle was readjusted by 1 to 2 mm and the test was repeated. [21] If there was no venous uptake, a volume of 1 mL of a mixture of 1% lidocaine (0.5 mL) and dexamethasone (5 mg/mL, 0.5 mL) was injected onto the target nerve (Fig. 3A, B).

Patients basically received 2 consecutive therapeutic injections at a 2-week interval. Patient satisfaction was measured on a 5-grade scale 2 weeks after the first injection (0, no effect at all; 1, a bad response; 2, a fair response; 3, a good response; and 4, an excellent response). Each grade referred to the patient's experience of pain alleviation as follows: "excellent" meant that the patient was "satisfied with the treatment result as expected"; "good" indicated that the pain relief was "not as much as expected but willing to try this treatment next time when pain redevelops"; "fair" indicated that the treatment "had some effect but not enough to choose the same treatment next time when pain re-develops"; and "bad" indicated that the treatment had the "same effect as the prior treatment or worse." However, there were some exceptions to the 2-successive-injection protocol. Patients who showed significant improvement in pain (≥50% improvement in the VNS score) did not receive a second injection. A second injection or re-evaluation was also not considered if the pain worsened or did not change, or if the patient satisfaction score was "fair" or lower. If the patient satisfaction score was "good" despite a VNS score improvement of less than 50%, a second injection was scheduled. As none of the patients had shown improvement with treatments such as anti-inflammatory drugs and 4 weeks of physical therapy, there were no restrictions on the duration of previous exercise programs, pharmacotherapy, or return to work. Specific physical therapy, occupational therapy, bracing, or other specific interventions were not utilized.

Review of the clinical data

We used a standardized chart abstraction form to collect demographic data, treatments, pain severity, analgesic use, and functional evaluation. Follow-up interviews were performed by nursing staff who were not involved in the procedure and were conducted during visits at 1, 3, and 6 months postinjection. Outcomes were measured with the Oswestry Disability Index (ODI) and the VNS. The ODI is one of the most commonly used disease-specific measurement tools for patients with LBP. [22] The ODI is calculated from 10 items, each scored from 0 to 5; the item scores are summed and the sum is multiplied by 2, so the ODI ranges from 0 to 100. When using the VNS, the patient was asked to rate the pain from 0 to 10, with 0 and 10 representing "no pain" and "worst pain," respectively; there are a total of 11 integers inclusive of 0 and 10. [23] Successful outcomes were defined as a VNS score improvement of more than 50% together with an ODI improvement of more than 40%.
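The scoring and responder rules above are simple enough to state programmatically. The following minimal Python sketch is our illustration rather than code from the study, and the function names are hypothetical; it merely restates the ODI arithmetic and the success criterion just defined:

```python
# Sketch of the scoring rules described above: ODI = (sum of ten 0-5 item
# scores) * 2, and a "successful outcome" requires >50% VNS improvement
# together with >40% ODI improvement.

def odi_score(items):
    """Compute the Oswestry Disability Index from ten 0-5 item scores."""
    assert len(items) == 10 and all(0 <= s <= 5 for s in items)
    return sum(items) * 2  # ranges from 0 to 100

def is_successful(vns_before, vns_after, odi_before, odi_after):
    """Apply the responder criterion to one patient's scores."""
    vns_improvement = (vns_before - vns_after) / vns_before
    odi_improvement = (odi_before - odi_after) / odi_before
    return vns_improvement > 0.50 and odi_improvement > 0.40

# Example: baseline VNS 8 -> 3 (62.5% improvement), baseline ODI 60 -> 30 (50%).
print(odi_score([3, 4, 2, 3, 3, 4, 2, 3, 3, 3]))  # 60
print(is_successful(8, 3, 60, 30))                # True
```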
Patients who failed to meet these criteria, or who underwent an invasive procedure during the follow-up period after receiving MBBs, were considered to have failed the treatment. Patients with successful treatment outcomes were referred to as responsive, and patients without successful treatment outcomes as nonresponsive. Independent variables such as injection method, number of injections, pain duration, sex, and age were recorded from the medical chart. For the predictive analysis, age was categorized into 5 groups: <39 years, 40 to 49 years, 50 to 59 years, 60 to 69 years, and >70 years. Pain duration was also treated as a potential predictor and was classified as acute or subacute (less than 6 months) or chronic (6 months or more). [24] The performance time and the number of needle passes were recorded. For US guidance, performance time was defined as the interval between the point of contact of the US probe with the patient's skin and the completion of the injection. [25,26] For FL guidance, performance time was defined as the interval between the first radiographic image and the end of the second injection. [25,26] We checked for adverse events such as severe back pain immediately after injection, facial flushing, or signs of a vasovagal reaction. Each patient was handed a questionnaire after the injection, asked to complete it within 48 hours, and asked to return it at the 2-week follow-up visit.

Statistics

Age, body mass index (BMI), pain duration, analgesic use, and the number of injections were compared using the Pearson chi-square and Wilcoxon rank-sum tests. At each time point, VNS and ODI were compared by repeated-measures analysis of variance (ANOVA), with Bonferroni correction for post-hoc comparisons. The Pearson chi-square test was used to test differences in proportions, and the Fisher exact test was used wherever an expected value was less than 5. Univariate analysis was performed to evaluate the relationship between possible outcome predictors and the therapeutic effect using the Pearson chi-square test. Logistic regression analysis was performed to assess whether injection method, number of injections, sex, pain duration, analgesic use, and age were independent predictors of a successful outcome. Statistical analyses were performed with SAS Enterprise Guide 4.1 (SAS Institute Inc., 2006). The level of statistical significance was set at P < .05.

Results

Of the 214 MBBs performed during the study period using either US (n = 94) or FL (n = 120) guidance, 146 (68.2%) met the inclusion criteria. Fifty-four (25.2%) injections were excluded because the patient did not complete the follow-up survey. Fourteen (6.5%) injections were excluded because the patients were withdrawn owing to rheumatoid disorders (n = 6) or stroke (n = 8). Finally, 68 patients in the US group and 78 patients in the FL group were included in this study (Fig. 4). The average age of the patients was 57.5 ± 10.3 years in the FL group and 57.9 ± 10.6 years in the US group, without significant difference. There were no significant differences in general characteristics such as sex, BMI, duration of pain, analgesic use, and number of injections (Table 1). ODI and VNS showed significant improvement at 1, 3, and 6 months after the last injection in both groups.
No significant difference in ODI and VNS between the 2 groups was present at baseline or at 1, 3, and 6 months after the last injections (Table 2). During the first month, 16 patients were reinjected and a single patient underwent surgery in the US-guided group; 52 patients (76.5%) were successfully treated. Meanwhile, 18 patients were reinjected and 2 patients underwent an invasive procedure at 1 month in the FL-guided group; 58 (74.4%) patients showed successful treatment outcomes. At 3 months, 9 patients were reinjected and 43 (63.2%) patients showed successful outcomes in the US-guided group; likewise, 9 patients were reinjected and 49 (62.8%) patients showed successful outcomes in the FL-guided group. At 6 months, 8 patients were reinjected and 35 (51.4%) patients showed successful treatment outcomes in the US-guided group, while 7 patients were reinjected and 1 patient underwent an invasive procedure in the FL-guided group, with a total of 41 (52.5%) patients showing successful treatment outcomes (Fig. 4). There was no significant difference in treatment success rates between the 2 groups at any evaluation period.

Injection method, sex, age, duration of pain, analgesic use, and number of injections were not independent predictors of MBB efficacy in the univariate and multiple logistic regression analyses (P > .05) (Tables 3 and 4). The performance time was significantly shorter with US than with FL (323 vs 430 seconds; P < .001). There was no clinically significant difference in the decrease of analgesic use (nonsteroidal anti-inflammatory drugs, NSAIDs, and opioids) between the 2 groups at 6 months after injection. Immediately after the procedure, a vasovagal reaction was present in 4 patients in the US group and 6 patients in the FL group. Two patients in the US group and 3 in the FL group had a transient headache (P > .05). Overall, 3 patients in the US group and 5 in the FL group reported temporary pain aggravation (in the back or the lower extremity) within 48 hours after injection during the 2-week follow-up. None of the patients reported a headache suggesting post-lumbar-puncture syndrome, decompensated heart disease, or diabetes. No case of infection or hematoma was reported in the 2 weeks after the procedure. Blood aspiration before injection was reported in 7% of the FL group and 0% of the US group. Intravascular contrast spread was observed in 6% of the FL group.

Discussion

This retrospective study showed clinically meaningful and significant improvements in all parameters at the end of the mid-term period in both the FL and US groups. Traditionally, MBBs have been performed with FL or CT guidance. However, these methods require exposure to radiation, have a higher cost relative to other methods, and involve bulky devices. [18] In contrast, US provides an imaging modality that involves no radiation exposure and identifies soft-tissue targets. [18]

[Table 1 footnote, recovered from the text: values are mean ± standard deviation; BMI = body mass index. Table 2 caption, recovered from the text: comparison of verbal numeric pain scale (VNS) and Oswestry Disability Index (ODI) from baseline to 1, 3, and 6 months after the last injection.]

Greher et al [15] reported that 50 bilateral US-guided approaches to the lumbar medial branch were performed in 5 embalmed cadavers, in which 45 of the 50 needle tips were located at the correct target point.
Shim et al [18] reported that, in 20 patients diagnosed with lumbar facet joint-mediated pain, 101 needles were positioned by US toward the lumbar segments under FL control; 96 needles were positioned correctly, and 2 injections showed an intravenous distribution of the contrast agent. The mean pain score on the VNS decreased from 52 to 16 after the block. Greher et al [16] performed 28 US-guided lumbar MBBs in 5 patients. They reported a high success rate, no complications, no sign of a nerve root block, and no other neurologic symptoms. Two of the 5 patients had no pain at the 30-minute evaluation, and 3 had a 50% reduction in pain scores.

Compared with the FL-guided procedure, the US-guided procedure has several limitations. One limitation is that the resolution of US is limited in the deeper layers owing to the physical characteristics of the waves, so it can be difficult to confirm that the needle tip is located at the target point. Several techniques can resolve this problem. First, Marhofer and Chan [27] described a trichotomy of alignment, rotation, and tilting movements of the US transducer while scanning to allow improved placement of the needle tip and shaft for better visualization. Second, correct injection into the target lesion can be identified by the echogenic shadow filling conferred by US contrast agents. [28] Third, "hydrolocalization," a term coined by Bloc et al, [29] is generally performed by observing the movement of the surrounding tissue while moving the inserted needle.

The second limitation is that US-guided MBBs cannot clearly detect intravascular injections or inadvertent foraminal spread. FL-guided MBBs at the lumbar level of the spine are, on average, intravascular in 3.7% of the procedures performed. [30] Severe complications can therefore occur if an intra-arterial corticosteroid injection is performed. Although the location of small vessels can be visualized by color Doppler, it is difficult to detect small and deep vessels in obese people, and even if a small critical vessel is not visualized on US, its presence cannot necessarily be excluded. When a corticosteroid is added, a nonparticulate corticosteroid is recommended for safety: if infiltration of a particulate agent into a microvessel cannot be identified with US alone, it can cause neurological damage.

The L5 dorsal ramus block can be technically challenging because of the elevation of the iliac crest. [19] If the iliac crest obscured the US view, the injection was performed using a short-axis OOP approach. [19] The OOP approach has difficulty in accurately targeting the site of interest, as the on-screen hyperechoic dot may ambiguously represent either the needle tip or the needle shaft. [31] In this study, both the short-axis and long-axis views were used to determine the position of the target lesion and to ensure appropriate needle tip placement. When the L5 dorsal ramus block was performed with the OOP approach, the transducer was located at the L5/S1 level in the short-axis view. [19] The angle between the S1 SAP and the sacral ala was centered in the image, and the needle was inserted directly caudal to the midpoint of the transducer in a caudocephalad direction until the tip of the needle contacted the bone.
Then, the transducer was rotated to obtain the long-axis view and positioned at the sacral ala within the plane of the TP. The hypoechoic expansion generated by the injectate at the needle tip was checked via real-time US; if this phenomenon could not be identified, the needle was guided again to ensure appropriate placement and to avoid unintentional intravascular injection.

In this study, the performance time was significantly shorter with US than with FL, which may be explained by 2 factors. First, fluoroscopic imaging requires anteroposterior (AP), lateral, and oblique views for appropriate placement of the needle, which is a critical step for safe needle placement and for correct identification of the target lesion; this can be time-consuming. Furthermore, because the procedure involves intermittent FL, ample time is required, in addition to the lengthier performance time needed for injection of the contrast media. In contrast, US allowed visualization of the contours of the root of the SAP, which were immediately identifiable in the short- or long-axis view and were less affected by the patient's position. In addition, the procedure was quicker because the injection was performed under real-time US with the needle in view.

There are some limitations to the current study. First, this was a retrospectively designed study. We selected patients using the extensive inclusion and exclusion criteria outlined in the Methods section but could not eliminate patient heterogeneity. In addition, we could not entirely exclude patient participation in other treatments, such as medication or physical therapy, during the follow-up periods. Second, both procedures were performed by 1 physician; thus, this study reflects the experience of a single operator, which limits the generalizability of the results. Third, we could not exclude a placebo effect on the analgesic treatment effect because of the lack of a control group. Fourth, whether the injectate was properly delivered into the targeted area in US-guided MBBs was not verified by FL, which may have affected the results. Lastly, the BMI of the patients included in this study was relatively low, and US may not provide good images in obese patients. Greher et al [15] reported that the quality of the US image was still adequate even in a patient with a BMI of 36 kg/m2; however, Rauch et al [32] recently presented strong evidence to the contrary, arguing that MBBs cannot be reliably performed under US guidance in obese patients. As this study lacked obese participants with a BMI of 30 kg/m2 or greater, and as there were no significant differences in BMI between the groups, the results are unlikely to have been affected.

[Table 3 caption, recovered from the text: univariate analysis of possible outcome predictors for injection effectiveness at follow-up.]

In conclusion, the US-guided procedure did not show significant differences in treatment outcomes for pain reduction and functional improvement compared with the FL-guided procedure, but it lacked the associated risks of radiation exposure. Therefore, considering the data from this retrospective study, US-guided MBBs warrant consideration in the conservative management of lower lumbar facet joint pain.
Transcription blockage by stable H-DNA analogs in vitro

DNA sequences that can form unusual secondary structures are implicated in regulating gene expression and causing genomic instability. H-palindromes are an important class of such DNA sequences that can form an intramolecular triplex structure, H-DNA. Within an H-palindrome, the H-DNA and canonical B-DNA are in a dynamic equilibrium that shifts toward H-DNA with increased negative supercoiling. The interplay between H- and B-DNA, and the fact that the process of transcription affects supercoiling, makes it difficult to elucidate the effects of H-DNA upon transcription. We constructed a stable structural analog of H-DNA that cannot flip into B-DNA, and studied the effects of this structure on transcription by T7 RNA polymerase in vitro. We found multiple transcription blockage sites adjacent to and within sequences engaged in this triplex structure. Triplex-mediated transcription blockage varied significantly with changes in ambient conditions: it was exacerbated in the presence of Mn2+ or by increased concentrations of K+ and Li+. Analysis of the detailed pattern of the blockage suggests that RNA polymerase is sterically hindered by H-DNA and has difficulties in unwinding triplex DNA. The implications of these findings for the biological roles of triple-stranded DNA structures are discussed.

INTRODUCTION

In addition to the canonical double helix described by Watson and Crick over 60 years ago (1), DNA can adopt a number of non-canonical conformations (reviewed in (2)). These unusual DNA structures appear to play important roles in major DNA transactions (reviewed in (3)). Of particular interest are the effects of these structures on transcription, as this can influence gene expression and trigger a number of regulatory responses (reviewed in (4)).

Among the DNA structures that affect transcription are DNA triplexes, formed when the Watson-Crick DNA duplex acquires a third strand via Hoogsteen-type hydrogen bonding. Because only purines are capable of participating in both Watson-Crick and Hoogsteen-type hydrogen bonding, triplex formation requires sequences in which one strand contains a sufficiently long stretch of purines (while the complementary strand contains a long stretch of pyrimidines). These are usually referred to as homopurine-homopyrimidine sequences. There are two major types of triplexes, Hoogsteen and reverse Hoogsteen, in which the third strand interacts with the purine strand within the duplex via Hoogsteen or reverse Hoogsteen base pairing, respectively. The first type usually contains protonated cytosines in the homopyrimidine third strand, which causes it to depend upon acidic pH for stabilization, while the second type usually has a homopurine third strand and is very stable under physiological pH and ionic conditions. Because of this, the reverse Hoogsteen triplex is a more likely candidate for biologically relevant roles (for a detailed review of the various types of triplexes and their properties, see (5)).

One way to form a triplex at a homopurine-homopyrimidine DNA region is to add an appropriate DNA (or DNA-analog) oligonucleotide to serve as the third strand of the triplex. These intermolecular triplexes are widely recognized as potential tools for artificial gene regulation, mutagenesis and other genetic manipulations (reviewed in (6,7)). Specifically, they have been shown to arrest transcription elongation such that the blockage occurs as RNA polymerase (RNAP) attempts to enter the triplex-forming region (8,9).
This observation is easy to explain, given that binding of the third strand additionally stabilizes the Watson-Crick duplex, making it difficult for RNAP to unwind the three-stranded region. H-DNA is an intramolecular DNA triplex, which does not require an 'external' third strand (reviewed in (5)). It is formed by two adjacent homopurine-homopyrimidine DNA stretches that are mirror images of one another (an H-palindrome), where one stretch serves as an acceptor and another as a donor of the third strand (Figure 1A). During H-DNA formation, the donor stretch dissociates into two single strands. One strand winds back down the major groove of the acceptor stretch, forming the triplex via Hoogsteen-type base pairing, while the other remains single-stranded. Both Hoogsteen and reverse Hoogsteen versions of H-DNA have been experimentally detected, and several nomenclatures to distinguish these two isoforms of H-DNA have been suggested (reviewed in (5)). Here we have only studied the reverse-Hoogsteen structure (see below), referring to it as 'H-DNA'.

The net energetic balance of base pairing (i.e. formation of the reverse-Hoogsteen base pairs within the acceptor region versus disruption of the Watson-Crick base pairing within the donor region, together with the base pairing disruptions and distortions in the center and at the borders of the H-palindrome) is in favor of B-DNA. Therefore, in the absence of additional factors facilitating H-DNA formation, the equilibrium strongly shifts toward B-DNA, rendering the B-to-H transition virtually undetectable under these conditions. One factor that strongly shifts the equilibrium in favor of H-DNA is negative supercoiling. This is because the formation of H-DNA is topologically equivalent to unwinding the entire H-palindrome, and this unwinding partially 'absorbs' negative superhelical tension (reviewed in (5)). Therefore, the presence of negative supercoiling favors B-H transitions even in solution conditions that otherwise favor B-DNA.

A priori, one would expect H-DNA to affect transcription similarly to intermolecular triplexes: RNA polymerase pausing upstream of the H-palindrome when it bumps into the triplex. Surprisingly, the actual blockage pattern is just the opposite: blockage occurs downstream of an H-palindrome (10,11). To explain this paradox, it was hypothesized that H-DNA is formed behind the transcribing RNA polymerase owing to an increased level of negative supercoiling upstream of the enzyme (12). As a result, RNA polymerase becomes sterically sequestered at the downstream flank of the H-palindrome (10,11). Since disruption of mirror symmetry in the H-palindrome destabilizes H-DNA, this model predicts that it would reduce transcription blockage as well. This was indeed observed for the short, imperfect H-palindrome from the c-myc gene promoter (13). Surprisingly, for longer non-interrupted homopurine-homopyrimidine sequences, disruption of mirror symmetry does not affect the blockage signal (14). It has been proposed that the principal cause of transcription blockage in these cases is the formation of R-loops behind elongating RNA polymerase, a process that favors homopurine runs in the coding strand but does not require mirror symmetry (14). Thus, the contribution of H-DNA to transcriptional blockage could vary depending on the composition of a homopurine-homopyrimidine sequence.
Note that the R-loop and H-DNA models of transcription blockage are not mutually exclusive: an H-DNA-like structure could serve as an intermediate in the process of R-loop formation (15). The fact that G-rich, homopurine-homopyrimidine sequences can also form quadruplexes during transcription ((16), reviewed in (17)), which are known to inhibit transcription (18,19), additionally complicates these interpretations. Given that H-DNA has been detected in vivo and has been proposed to regulate transcription (reviewed in (20); also see Discussion), it is of prime interest to understand its interference with the transcription machinery.

To clarify the effects of H-DNA on transcription, it is useful to study a model DNA template containing this structure in a 'frozen' state. The formation of B-DNA in the donor region is the main H-DNA-destabilizing factor at an H-palindrome. To preclude this, we designed a stable analog of H-DNA in which the DNA strands comprising the donor region and the center of the H-palindrome are rendered non-complementary (Figure 1B) and cannot form a duplex to compete with triplex formation. Triplex formation in this case does not require negative supercoiling, and the transcribing RNA polymerase encounters a preformed stable triplex. This experimental system allows us to directly determine whether the presence of H-DNA in the transcriptional template blocks RNA polymerase and to learn the mechanisms of triplex-mediated transcription blockage. We found that RNA polymerase is sterically hindered by H-DNA and has severe problems in unwinding the triple-helical part of this structure.

MATERIALS AND METHODS

Preparation of DNA substrates

Supplementary Figure S1 illustrates the general strategy for the preparation of DNA substrates, similar to that described previously (21,22). Briefly, a double-stranded DNA fragment containing the T7 promoter was obtained by EcoRI and BsgI restriction digest of the pG4 plasmid (14), followed by its ligation to various triplex- and duplex-containing templates, which were obtained by annealing of synthetic oligonucleotides (see Supplementary Materials for details).

In vitro T7 transcription assays

The T7 RNA polymerase transcription reaction and its product analysis were performed as previously described (e.g. (21,22)). Briefly, DNA substrates were transcribed in the presence of non-radioactive NTPs mixed with radioactive CTP, and radioactive transcripts were then analyzed on a sequencing gel. The longest transcript (run-off product, RO) results from unperturbed transcription, while transcription blockage produces various truncated transcripts (see Supplementary Materials). Transcription was performed under the standard conditions (6 mM MgCl2, 8.3 mM NaCl, 37.8 mM Tris-HCl (pH 7.9), 1.7 mM spermidine, 4.2 mM DTT) or in special buffers: Mn-buffer contained 1 mM MgCl2 and 2 mM MnCl2 instead of 6 mM MgCl2; K-buffer contained 80 mM KCl instead of 8.3 mM NaCl; and Li-buffer contained 80 mM LiCl instead of 8.3 mM NaCl.

Restriction protection assay

The restriction protection assay was performed as described in (14). It exploits the presence of a BsgI cleavage site inside the triplex-forming sequence, since formation of a triplex inhibits BsgI digestion (Supplementary Figure S2).

RESULTS

T7 transcription is arrested by triplex-forming sequences

We have designed an artificial triplex-forming H-palindrome sequence in which the two DNA strands of the donor half are non-complementary and cannot form a duplex.
This ensures that this intramolecular triplex is thermodynamically stable, even in the absence of negative supercoiling (Figure 1). As a basic triplex-forming motif, we have chosen a sequence comprised of CGG and TAT triads (Figure 2A). These sequences have been previously studied and shown to form a very stable triplex that blocks DNA replication in vitro ((23) and references therein). These templates were obtained by annealing of specific synthetic oligonucleotides in different combinations, followed by their ligation to a DNA fragment containing the promoter for T7 RNA polymerase (Supplementary Figure S1). Figure 2B shows the expected structures for the various experimental and control templates used in our transcription experiments. When both strands are completely complementary, the substrate is predicted to form a perfect Watson-Crick duplex (first column, first row) in the absence of DNA supercoiling. When the homopyrimidine run in the donor part of the sequence is replaced by a random sequence (first column, second row), a stable ('perfect') triplex is predicted to form. We also used templates in which three thymines were replaced by cytosines in the donor region, resulting in a mismatched duplex and triplex (second column, first and second rows, respectively), as well as templates with four guanine-to-thymine substitutions along with the three T-to-C substitutions (third column; the exact sequence is in the scheme at the top). The latter templates (designated as 'duplex' and 'triplex' bulges) contained seven mismatches in a sixteen-bp-long sequence and are not expected to form stable duplexes or triplexes within the mismatched area.

Formation of triplexes in these transcription templates was monitored by protection against BsgI digestion (Supplementary Figure S2). As expected, the perfect triplex, but not the triplex bulge, protects the acceptor duplex against BsgI digestion. In the case of the mismatched triplex, a partial protection from BsgI digestion was observed, suggesting that the mismatches do not abolish triplex formation completely, likely owing to the fact that they do not perturb the stabilizing GGC triads. Because the oligonucleotides within these constructs were not phosphorylated, an unsealed nick remains in the non-template strand between the promoter fragment and the structure-forming sequence. We have previously shown that a nick in this position does not affect the results of our transcription assay (21).

Typical results of in vitro transcription for these templates are shown in Figure 3A. The most intense signal, between 320 and 330 nucleotides, corresponds to the full-length 'run-off' RNA product (designated 'RO'), which forms when the polymerase reaches the end of the template.

[Figure 2 legend, recovered from the text: (A) General scheme of a transcription template containing a perfect YR*R triplex. It consists of the promoter-containing fragment (thin black lines) ligated to a triplex-forming construct pre-assembled from synthetic DNA oligonucleotides (the color-coding is the same as in Figure 1). Modifications of the third strand within the triplex are shown in gray underneath; a sequence with three T-to-C point substitutions is called 'mismatched', and a sequence with seven point substitutions, which efficiently prevents duplex or triplex formation, is referred to as 'bulged'. (B) The various triplex- and duplex-forming constructs studied. T-to-C substitutions in mismatched substrates are shown as gray squares; highly mismatched sequences in 'bulge' substrates are shown in yellow-gray.]
The second most intense signal, at ∼240 nucleotides, corresponds to the 'run-off' from those promoter fragments that did not ligate to the structure-forming oligonucleotide construct (see Supplementary Figure S1). Bands between these two signals correspond to truncated transcription products resulting from transcription blockage and are referred to as 'blockage signals'. In the perfect duplex template (Figure 3A, lane 1), there are low-intensity blockage signals slightly above background. The most prominent of these blockages is the 'm-signal' at the middle of the synthetic sequence. We believe that these signals simply correspond to sequences for which the probability of transcription termination is above average. In the case of the triplex-bearing template (Figure 3A, lane 2), at least two significantly stronger blockage signals appear: one upstream (up) and another downstream (dn) of the m-signal. Furthermore, blockage at the m-signal position becomes more pronounced relative to the run-off product (Figure 3B). For triplex-forming templates overall, truncated transcripts constitute 19% of total transcripts, with roughly half of those corresponding to the upstream signal (see Methods for the calculation). Figure 3C shows the results of mapping the most prominent blockage positions. We found that the up-signal corresponds to the junction between the single-stranded part of the template strand and the triplex, while the m- and dn-signals are within the triplex-forming region. In addition, the triplex seems to exacerbate minor blockage signals over the entire oligonucleotide construct as compared to the duplex, including areas outside the region directly involved in H-DNA formation. This could be caused by extended RNA-DNA hybrid formation and by collisions between stalled and elongating RNA polymerases (see Discussion).

To check whether destabilization of the triplex would reduce transcription blockage, we replaced thymines in the third strand with cytosines, which cannot form Hoogsteen base pairs with adenines. As expected, transcription blockage at this mismatched triplex was significantly weaker (Figure 3A and B, lane 4 versus lane 2). These substitutions, however, did not abolish transcription blockage completely, consistent with only partially impaired triplex formation, as evident from the BsgI protection assay in Supplementary Figure S2. Furthermore, transcription reactions performed under the same ambient conditions as the restriction protection assay resulted in a transcription pattern similar to that observed under our standard conditions (Supplementary Figures S3 and S4). As expected, mismatched duplex constructs produced virtually the same transcription pattern as the respective normal duplex (Figure 3A and B, lane 3 versus lane 1).

[Figure 3 legend fragment, recovered from the text: "...(Figure 3A), while the additional signal 'up' appears only in the presence of Mn2+ ions (see Figure 4). Triangle width roughly corresponds to the resolution of the mapping (about 5 nt)."]

Transcription blockage is sensitive to bivalent cations in the reaction

Bivalent cations are known to differentially affect the stability of pyrimidine-purine-purine DNA triplexes (24,25). Specifically, these triplexes are more stable in the presence of manganese than magnesium (24). To confirm our hypothesis that the strength of transcriptional blockage is correlated with the stability of triplexes, we conducted a T7 RNA polymerase transcription reaction in the presence of 2 mM Mn2+.
As expected, the intensity of triplex-mediated transcriptional blockage increased significantly in the presence of Mn2+ (Figure 3A, lane 2 versus Figure 4A, lane 2). Under these conditions, truncated transcripts accounted for nearly 90% of total RNA products (see quantitation below Figure 4A), as compared to ∼20% in the presence of Mg2+ (see quantitation in Figure 3B). Furthermore, a new arrest band (designated 'up'), which was not observed under standard transcription conditions, appeared approximately 5 bases upstream of the H-DNA boundary (see Figure 3C). This truncated transcript corresponded to 33.5% of the total RNA product. This new blockage signal may have arisen from the collision of an oncoming RNA polymerase with one stalled at the triplex edge. In concordance with our results under standard conditions, the mismatched triplex with T-to-C substitutions causes less transcription blockage than the perfect triplex in the presence of Mn2+ (Figure 4A, lane 4 versus lane 2). In fact, the difference between these two constructs is even more pronounced in manganese transcription buffers. The sequence with seven mismatches in the third strand ('triplex bulge') does not give rise to any strong blockage signals (Figure 4A, lane 6 versus lane 1). This is consistent with the idea that the triplex is completely destabilized by these mismatches. A more detailed analysis shows that the bulge in the substrate exacerbates weak, diffusive blockage signals in the downstream duplex region, which we explain by R-loop formation (see Supplementary Figure S3 and its legend). Note that in the presence of manganese, the run-off product is heterogeneous, i.e. comprised of several bands. This heterogeneity has been occasionally observed for T7 RNA polymerase (26). We suggest that it is caused by the substandard conditions (the presence of Mn2+) in our transcription reaction.

Transcription blockage is exacerbated by monovalent metal cations

When a DNA strand contains guanine runs, another non-B DNA structure, called a G-quadruplex, can form (reviewed in (27)). In our transcription reaction, G-quadruplexes could form in either the non-template DNA strand or the nascent RNA transcript, both of which might contribute to transcription arrest (reviewed in (4)). To evaluate the role of G-quadruplexes in our system, we capitalized upon their dependence on monovalent cations. G-quadruplexes are more readily formed in the presence of potassium (K+) than lithium (Li+) ions (28). If G-quadruplexes contributed to transcription blockage in our system, we would expect to observe more truncated transcripts in the presence of K+ than Li+ ions. We conducted T7 transcription reactions on triplex-forming DNA templates in two different buffers, containing either 80 mM K+ or 80 mM Li+ ions. In both conditions, we observed the same pattern of triplex-mediated blockage comprising the three well-pronounced up-, m-, and dn-signals (Figure 4B, lanes 3 and 4). The intensities of the blockage signals relative to run-off products were similar in the presence of potassium or lithium (Figure 4B, lower panel). These data suggest that the transcription blockage observed in our experiments is not caused by G-quadruplexes.
Though we did not observe significant differences between the effects of Li+ and K+ ions on transcription blockage, both caused a strong increase in transcription blockage compared to standard conditions (Figure 4B, lanes 3 and 4 versus Figure 3A, lane 2), in which the concentration of monovalent metal cations was about 10-fold lower. This effect could be due to non-specific stabilization of DNA triplexes by increased salt concentration. Alternatively, it may be the result of destabilization of the transcription complex by increased salt concentration ((29) and references therein), which could make RNA polymerase more 'sensitive' to obstacles in general. To check whether our triplex is additionally stabilized by increased salt concentration, we performed DNA melting experiments using the self-folding oligonucleotides described in (23), which form triplexes of the same base composition as our transcription substrates (Supplementary Figure S5). It was previously shown that these triplexes melt in one step, i.e. from a triplex to three separate strands (23). Consequently, the presence of a triplex manifests itself as an increase in the melting temperature compared to that of the corresponding duplex DNA. The melting curves in Supplementary Figure S5 confirm triplex formation under our standard transcription conditions. In fact, the triplexes were so stable under these conditions that the midpoints of their melting curves (Tm values) were technically impossible to reach. We did observe, however, an increase in absorbance with increasing temperature, indicative of partial triplex dissociation. The rates of these absorbance increases were similar in both standard and high-salt conditions, implying that our triplex structures are not additionally stabilized by increased salt concentration. That might seem surprising, given that an increase in salt concentration is expected to stabilize nucleic acid complexes by screening the repulsion between negatively charged phosphates. However, in our case, the triplexes are pre-formed in the presence of bivalent (magnesium) cations, which are much more potent DNA binders than monovalent cations. Therefore, the presence of additional monovalent ions does not produce an apparent increase in triplex stability. Also note that in certain cases, triplex formation can be suppressed by monovalent cations. For example, usual H-DNA formation requires higher negative supercoiling at higher concentrations of sodium salt (30), and intermolecular pyrimidine-purine-purine triplex formation is inhibited by potassium (31-33). Both these effects are likely due to stronger stabilization of the structure competing with triplex formation: B-DNA in the donor half of the H-palindrome in the first case, and a G-quadruplex in the triplex-forming oligonucleotide in the second case (33). In our case, competition with B-DNA is excluded by design of the system (Figure 1), and quadruplex formation in the Hoogsteen strand is avoided by pre-forming the triplex in the absence of monovalent cations. Therefore, high-salt conditions do not have an apparent effect upon the triplex in our system; thus, we believe that the increase in transcription blockage observed in high-salt conditions is likely due to destabilization of the transcription complex (29), rendering it particularly prone to blockage by local DNA structures.
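As an aside for readers reproducing this kind of UV-melting analysis, the sketch below is our illustration and not the authors' code; the input arrays are assumed to come from a spectrophotometer export. It estimates an apparent Tm as the temperature at which dA/dT is maximal; for very stable triplexes such as those described above, that maximum may lie beyond the accessible temperature range, so the returned value is at best a lower bound.

```python
# Hedged sketch of a standard UV-melting analysis (not from the paper):
# estimate an apparent Tm as the temperature where dA/dT is maximal.
import numpy as np

def apparent_tm(temps_c, absorbance):
    """Return (Tm, max slope) from a UV melting curve.

    temps_c and absorbance are equal-length 1-D arrays; for triplexes
    that do not fully melt, the result is only a lower bound on Tm.
    """
    temps = np.asarray(temps_c, dtype=float)
    a = np.asarray(absorbance, dtype=float)
    a = a / a[0]                     # normalize to compare runs at different salt
    dadt = np.gradient(a, temps)     # numerical derivative dA/dT
    i = int(np.argmax(dadt))
    return temps[i], dadt[i]

# Synthetic example: a sigmoidal melting transition centered near 70 C.
t = np.linspace(20, 95, 151)
curve = 1.0 + 0.25 / (1.0 + np.exp(-(t - 70.0) / 2.5))
tm, slope = apparent_tm(t, curve)
print(f"apparent Tm = {tm:.1f} C (max dA/dT = {slope:.4f})")
```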
Orientation of a DNA triplex changes the pattern of transcription blockage

In our initial transcription experiments, elongating T7 RNA polymerase transcribes the strand that does not participate in the formation of reverse Hoogsteen base pairs. To further investigate the mechanism of transcriptional arrest in DNA triplexes, we created a 'reverse triplex' template. In this template, the orientation of the promoter was reversed such that RNA polymerase transcribes the central strand of the triplex, which is involved in both Watson-Crick and Hoogsteen base pairing (Figure 5A). Since this would require T7 RNA polymerase to unwind the DNA triplex as a whole, this configuration was expected to result in a stronger triplex-mediated blockage of transcription. To ensure maximum stability of the reverse triplex, transcription assays with this template were performed in the presence of Mn2+. Figure 5B (lane 4) shows a well-pronounced blockage (up_rev) that occurs at a position adjacent to the edge of the three-stranded region in the reverse triplex (Figure 5A). The remaining blockage signals appear to be superficial for this construct. These data suggest that when RNA polymerase bumps into the reverse triplex, it dismantles it via an 'all-or-none' mechanism. This is in contrast to the more subtle process of dismantling the triplex in the opposite orientation (Figure 5B, lane 3), in which RNA polymerase can progress inside the triplex zone more easily. Surprisingly, the blockage caused by the reverse triplex was unexpectedly small: about 12% of the total run-off product, as compared to the 30% blockage observed for the m-signal alone in the original orientation (see Discussion).

Removal of steric strain in a triplex simplifies the pattern of transcription blockage

In addition to the difficult-to-unwind triplex, H-DNA may also inhibit transcription by sterically sequestering RNAP in sharply bent elements of the structure (see Discussion). This type of steric sequestration was invoked to explain transcription blockage at the end of homopurine-homopyrimidine runs (10,11). To test whether steric blocks contribute to triplex-mediated transcription blockage, we designed a structure called a 'cut triplex', which contains a break at the duplex-to-triplex junction in the non-template strand of our original triplex structure. This break relieves the steric strain associated with the bulky H-DNA structure. Upon transcription of the cut triplex, the most promoter-proximal up-band disappears (Figure 6B, lane 3 versus lane 2). This suggests that this band could be due to a steric obstacle to transcription (see Discussion).

DISCUSSION

To elucidate the effect of H-DNA on transcription, we created synthetic analogs that fold into an intramolecular triplex configuration regardless of supercoiling and studied transcription by T7 RNA polymerase through these structures. Depending on the specific design, RNA polymerase could first encounter either the donor or the acceptor part of the H-palindrome (designated (d→a) or (a→d), respectively), with either the pyrimidine (y) or purine (r) strand as the transcriptional template. In total, one can imagine four different transcriptional substrates (Figure 7), of which we studied two, (d→a)-y and (a→d)-r, in this work. Note that the (d→a)-y configuration is similar to the one responsible for 'suicidal replication' (34), as well as to H-DNA that might be induced by transcription (10,12,14). It also mimics the orientation of many natural H-palindromes, including that found in the c-myc promoter (e.g. (13) and references therein).
Furthermore, H-DNA formation in this orientation could compete with the formation of R-loops and/or G-quadruplexes in the non-template strand during transcription.

The simplest pattern of transcription blockage was observed for the (a→d)-r construct (Figure 8A): a well-pronounced blockage signal at the duplex-to-triplex junction. This pattern is likely due to RNA polymerase 'bumping' into the triplex and being unable to unwind it (scheme in the dashed-bordered box in Figure 8). In this orientation, the template strand for transcription is involved in both Watson-Crick and reverse Hoogsteen base pairing, requiring RNA polymerase to disrupt the entire triplex in order to pass through. This presents a much higher energetic barrier for the RNAP as compared to a construct in which the template strand participates in Watson-Crick base pairing only ((d→a)-y). That being said, the two displaced strands do not pair in the (a→d)-r structure, resulting in a larger entropic barrier for triplex re-formation than in the (d→a)-y structure (Figure 8B), where the displaced strands could remain base-paired via reverse Hoogsteen interactions (35,36). Thus, when the template strand participates in both Watson-Crick and reverse Hoogsteen base pairing, it is more difficult to unwind the triplex, but it is also more difficult to reverse the unwinding once it is initiated. That could explain efficient RNAP passage through the (a→d)-r structure once the triplex is unwound.

The pattern of transcription blockage is more complex in the (d→a)-y construct (Figure 8B). RNAP could be sterically sequestered at the bent parts of this H-DNA structure (10,11) even before it faces the triplex per se. Furthermore, the nascent RNA could re-hybridize with the single-stranded part of the template strand, forming an extended RNA-DNA hybrid (R-loop), which could destabilize the transcription complex (37). Transcription through this construct generates three blockage signals (Figure 3A): the most upstream one is at the junction between the single- and three-stranded DNA, while the two other signals are inside the triplex-forming region. The upstream and downstream bands are only evident in triplex-forming constructs; the middle band is seen in all constructs but is much more intense in triplex-forming constructs. We conclude that these blockages are either caused by or exacerbated by H-DNA. To test whether the blockages are due to the triplex per se or to a steric impediment to transcription by bent elements of H-DNA, we produced a construct called the 'cut triplex', in which the connection between the flanking duplex and the third strand of H-DNA is severed (white block arrow in Figure 8). In this construct, steric constraints are removed while the overall H-DNA structure is maintained. Transcription through this structure did not produce the most upstream blockage signal, while the middle and downstream signals remained intact (Figure 6B). Thus, the upstream signal appears to be due to steric constraints, while the other two signals are caused by the triplex per se.

[Figure 7 legend, recovered from the text: RNA polymerase (shown as a gray triangle on the template strand, pointing in the direction of elongation) can proceed either from the donor part of the H-palindrome toward its acceptor part (d→a), or from the acceptor half to the donor (a→d), using either the homopyrimidine (y) or homopurine (r) strand as the template. This results in four possible combinations of transcription substrates. Only substrates from the first column were studied in the current work.]
Supporting this conclusion, transcription blockage is additionally enhanced in the presence of manganese ions, which are known to be potent stabilizers of DNA triplexes (24). Judging from the results for the triplex-bulge and duplex-bulge constructs, the presence of long single-stranded regions does not cause any strong blockages, indicating that the single-stranded DNA present in our triplex-forming substrates is not a major cause of transcription blockage. However, in the case of the bulge substrates we did observe some increase in weak, diffusive blockage signals localized in the downstream duplex region (Supplementary Figure S3). Previously, we suggested that the exacerbation of these weak downstream blockages could be due to R-loop formation (37). R-loops can easily be formed during transcription of our triplex-forming templates, as they contain single-stranded DNA regions, which might enhance triplex-mediated blockage. In addition, transcription blockage does not appear to be sensitive to the type of monovalent metal cation (potassium versus lithium), arguing against a role of G-quadruplexes. Rather unexpectedly, transcription blockage was exacerbated by increased concentrations of monovalent metal cations regardless of their type.

[Figure 8 legend, recovered from the text: (A) The simplest mechanism of blockage is expected for the (a→d)-r configuration, in which RNA polymerase (RNAP, gray oval) bumps into the triplex and is unable to unwind it. In this case, the template strand (red) is involved in Watson-Crick and Hoogsteen interactions, and RNA polymerase has to disrupt the triplex as a whole (see the scheme in the dashed-border box below). At the same time, the two displaced strands do not pair with each other, resulting in a high entropic barrier for triplex re-formation. (B) The mechanism of blockage in the (d→a)-y configuration is more complex: on one hand, the energetic barrier for dismantling the triplex is lower, since only Watson-Crick interactions have to be disrupted to make the template available; on the other hand, Hoogsteen interactions between the displaced strands could persist, decreasing the entropic barrier for triplex re-formation (see the scheme in the dashed-border box below). Altogether, it is easier for an elongating RNAP to unwind the triplex in the (d→a)-y than in the (a→d)-r construct (A), but the probability that it would be 'pushed back' by triplex re-formation is greater in the (d→a)-y construct. In the (d→a)-y configuration, RNAP could be sequestered by steric constraints even before it starts to unwind the triplex; this sequestration is gone if the H-DNA-like structure is cut at its triplex-duplex junction (bottom). Finally, hybridization between the nascent RNA and the template strand (bottom) could also contribute to the blockage in the (d→a)-y configuration.]

Our DNA melting experiments show that triplex stability is not increased by the increased concentrations of monovalent cations. We believe that the monovalent cation-induced blockage is likely due to destabilization of the transcription complex (29), making it more sensitive to inherent obstacles in DNA templates. The latter observation could be of high biological relevance, as the intracellular concentration of monovalent metal ions (∼150 mM) is comparable to that used in our experiments.

Sequences that can form intramolecular triplexes are unusually frequent in genomic DNA. Bioinformatic analyses of the human genome have revealed that long homopurine-homopyrimidine tracts occur once per 50,000 bp (38).
Moreover, mirror repeats are overrepresented among these homopurine-homopyrimidine sequences (39). Since H-DNA is the only known structure that requires its sequence motifs to be both homopurine-homopyrimidine and mirror-symmetrical, the overrepresentation of mirror repeats among homopurine-homopyrimidine sequences implies some biological function of H-DNA. Clusters of H-palindromes are found in the pseudoautosomal region of the sex chromosomes, which is essential for meiotic segregation and recombination, as well as in genes involved in cell communication in the brain (40). H-palindromes are also common elements of eukaryotic promoters, including c-myc, and have been implicated in the expression of many disease-linked genes ((41) and references therein). Finally, formation of H-DNA has been implicated in transcription blockage by expanded (GAA)n repeats responsible for the human genetic disease Friedreich's ataxia (11).

Transcription blockage mediated by H-DNA-like structures may lead to various biological consequences, including transcription-related genome instability (reviewed in (4)). Of particular interest is the model of 'gratuitous' transcription-coupled repair (gratuitous TCR) ((42), reviewed in (43)), which proposes that these structures may be recognized as lesions by the TCR machinery. This could trigger futile cycles of excision and repair replication, eventually causing mutagenesis. Collisions between blocked transcription complexes and the DNA replication machinery may further contribute to genomic instability. We can also envision positive biological effects of triplex-mediated transcription arrest. A recent report suggests that DNA lesions that do not block transcription could still be repaired by TCR when a stalled RNA polymerase is present in the vicinity (44). Thus, it is possible that unusual DNA structures arresting transcription could sensitize the TCR machinery to nearby DNA lesions that do not block transcription per se.
Analysis of Decimation on Finite Frames with Sigma-Delta Quantization

In Analog-to-digital (A/D) conversion, signal decimation has been proven to greatly improve the efficiency of data storage while maintaining high accuracy. When one couples signal decimation with the $\Sigma\Delta$ quantization scheme, the reconstruction error decays exponentially with respect to the bit-rate. In this study, similar results are proven for finite unitarily generated frames. We have devised a process called alternative decimation on finite frames that is compatible with $\Sigma\Delta$ quantization up to the second order. In both cases, decimation results in exponential error decay with respect to the bit usage.

1.1. Background and Motivation. Analog-to-digital (A/D) conversion is a process where bandlimited signals, e.g., audio signals, are digitized for storage and transmission, which is feasible thanks to the classical sampling theorem. In particular, the theorem indicates that discrete sampling is sufficient to capture all features of a given bandlimited signal, provided that the sampling rate is higher than the Nyquist rate. Given a function $f \in L^1(\mathbb{R})$, its Fourier transform $\hat f$ is defined as
$$\hat f(\xi) = \int_{\mathbb{R}} f(t)\, e^{-2\pi i \xi t}\, dt.$$
The Fourier transform can also be uniquely extended to $L^2(\mathbb{R})$ as a unitary transformation. However, the discrete nature of digital data storage makes it impossible to store the samples $\{f(nT)\}_{n\in\mathbb{Z}}$ exactly. Instead, quantized samples $\{q_n\}_{n\in\mathbb{Z}}$, chosen from a pre-determined finite alphabet $A$, are stored. This results in the reconstructed signal
$$\tilde f(t) = T \sum_{n\in\mathbb{Z}} q_n\, g(t - nT).$$
As for the choice of the quantized samples $\{q_n\}_n$, we shall discuss the following two schemes:

• Pulse Code Modulation (PCM): Quantized samples are taken as the direct round-off of the current sample, i.e.,
(2) $q_n = Q_0(f(nT)) := \arg\min_{q\in A} |q - f(nT)|$.

• $\Sigma\Delta$ Quantization: A sequence of auxiliary variables $\{u_n\}_{n\in\mathbb{Z}}$ is introduced for this scheme, and $\{q_n\}_{n\in\mathbb{Z}}$ is defined recursively as
$$q_n = Q_0(u_{n-1} + f(nT)), \qquad u_n = u_{n-1} + f(nT) - q_n.$$

$\Sigma\Delta$ quantization was introduced [20] in 1963, and it is still widely used due to some of its advantages over PCM. Specifically, $\Sigma\Delta$ quantization is robust against hardware imperfection [11], a decisive weakness of PCM. For $\Sigma\Delta$ quantization, and the more general noise shaping schemes explained below, the boundedness of $\{u_n\}_{n\in\mathbb{Z}}$ turns out to be essential. Quantization schemes with $\|u\|_\infty < \infty$ are said to be stable.
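To make the two schemes concrete, here is a small self-contained Python sketch of PCM and stable first-order $\Sigma\Delta$ quantization against a mid-rise alphabet; the test signal, alphabet size, and step size $\delta$ are our illustrative choices, not parameters from the paper.

```python
import numpy as np

def midrise_alphabet(L=4, delta=0.25):
    """2L-level mid-rise alphabet {±(2l-1)δ/2 : l = 1, ..., L}."""
    pos = (2 * np.arange(1, L + 1) - 1) * delta / 2.0
    return np.concatenate([-pos[::-1], pos])

def Q0(x, alphabet):
    """Direct round-off: the alphabet element nearest to x."""
    return alphabet[np.argmin(np.abs(alphabet - x))]

def pcm(y, alphabet):
    return np.array([Q0(s, alphabet) for s in y])

def sigma_delta(y, alphabet):
    """First-order greedy scheme: q_n = Q0(u_{n-1} + y_n), u_n = u_{n-1} + y_n - q_n."""
    u, q = 0.0, np.zeros_like(y)
    for n, yn in enumerate(y):
        q[n] = Q0(u + yn, alphabet)
        u += yn - q[n]          # state update; stability means |u| stays bounded
    return q

t = np.linspace(0, 1, 512)
y = 0.7 * np.sin(2 * np.pi * t)     # oversampled, slowly varying stand-in signal
A = midrise_alphabet()
e_pcm = y - pcm(y, A)
e_sd = y - sigma_delta(y, A)
box = np.ones(9) / 9                # local averaging kernel
for name, e in [("PCM", e_pcm), ("SigmaDelta", e_sd)]:
    print(f"{name}: raw max |e| = {np.abs(e).max():.4f}, "
          f"box-averaged max = {np.abs(np.convolve(e, box, 'same')).max():.4f}")
```

The printout illustrates why the bounded state $u$ matters: the raw $\Sigma\Delta$ error per sample is no smaller than PCM's, but it is pushed to high frequencies and nearly cancels under local averaging, which is exactly what reconstruction with a smooth kernel (and, later, decimation) exploits.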
Despite its merits over PCM, $\Sigma\Delta$ quantization merely yields linear error decay with respect to the bit-rate, as opposed to the exponential error decay of its counterpart PCM. Thus, it is desirable to generalize $\Sigma\Delta$ quantization for better error decay rates. As a direct generalization, given $r \in \mathbb{N}$, one can consider an $r$-th order $\Sigma\Delta$ quantization scheme:

Theorem 1.3 (Higher Order $\Sigma\Delta$ Quantization, [10]). Given $f \in PW_{1/2}$ and $T < 1$, consider a stable quantization scheme with $f(nT) - q_n = (\Delta^r u)_n$, where $\{q_n\}$ and $\{u_n\}$ are the quantized samples and auxiliary variables, respectively. Then, for all $t \in \mathbb{R}$, the reconstruction error satisfies $|f(t) - \tilde f(t)| = O(T^r)$.

Higher order $\Sigma\Delta$ quantization has been known for a long time [9,15], and the $r$-th order $\Sigma\Delta$ quantization improves the error decay rate from linear to polynomial of degree $r$ while preserving the advantages of a first order $\Sigma\Delta$ quantization scheme. From here, a natural question arises: is it possible to generalize $\Sigma\Delta$ quantization further so that the reconstruction error decay matches the exponential decay of PCM? Two solutions have been proposed for this question. The first one is to adopt different quantization schemes. Many of the proposed schemes, including higher order $\Sigma\Delta$ quantization, can be categorized as noise shaping quantization schemes, and a brief summary of such schemes will be provided in Section 2. The other possibility is to enhance data storage efficiency while maintaining the same level of reconstruction accuracy, and signal decimation belongs in this category.

Signal decimation is implemented as follows: given an $r$-th order $\Sigma\Delta$ quantization scheme, there exist $\{q^T_n\}$ and $\{u_n\}$ such that $f^{(T)}_n - q^T_n = (\Delta^r u)_n$, where $\|u\|_\infty < \infty$ and $\{f^{(T)}_n\}_n = \{f(nT)\}_n$. Then, consider $q^{T_0}_n := (S^r_\rho q^T)_{(2\rho+1)n}$, a sub-sampled sequence of $S^r_\rho q^T$, where
$$(S_\rho h)_n := \frac{1}{2\rho+1}\sum_{m=-\rho}^{\rho} h_{n+m}.$$
Signal decimation is the process with which one converts the quantized samples $\{q^T_n\}$ to $\{q^{T_0}_n\}$. See Figure 1 for an illustration. Decimation has been known in the engineering community [5], and it was observed that decimation results in exponential error decay with respect to the bit-rate, even though the observation remained a conjecture until 2015 [12], when Daubechies and Saab proved the following theorem:

Theorem 1.4 (Signal Decimation for Bandlimited Functions, [12]). Given $f \in PW_{1/2}$, $T < 1$, and $T_0 = (2\rho+1)T < 1$, there exists a function $\tilde g$ such that the reconstruction error from the decimated samples satisfies the polynomial bound (4). Moreover, the number of bits needed for each unit interval is given by (5), of order $O(\log(1/T))$. Consequently, the reconstruction error decays exponentially with respect to the number of bits used.

From (4) and (5), we can see that the reconstruction error after decimation still decays polynomially with respect to the sampling rate. As for the data storage, the number of bits needed changes from $O(T^{-1})$ to $O(\log(1/T))$. Thus, the reconstruction error decays exponentially with respect to the bits used.

Figure 1. Illustration of the first order decimation scheme. After obtaining the quantized samples $\{q_n\}_n$ in the first step, decimation takes the average of quantized samples within disjoint blocks in the second step. The outputs are used as the decimated sub-samples $\{q^\rho_n\}$ in the third step. The effect on the reconstruction (replacing $q_n$ with $y_n - q_n$) is illustrated in parentheses.
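The averaging-and-subsampling step depicted in Figure 1 takes only a few lines; the sketch below (with an illustrative $\rho$ and a stand-in quantized stream) shows how the decimated sub-samples are formed, including the $(2\rho+1)$-fold reduction in stored values. The wrap-around boundary handling is our simplification.

```python
import numpy as np

def S_rho(h, rho):
    """Centered running average (S_rho h)_n = (1/(2rho+1)) sum_{m=-rho}^{rho} h_{n+m};
    the ends are wrapped purely for convenience of illustration."""
    kernel = np.ones(2 * rho + 1) / (2 * rho + 1)
    return np.convolve(np.pad(h, rho, mode="wrap"), kernel, mode="valid")

def decimate(q, rho, r=1):
    """Apply S_rho r times, then keep every (2rho+1)-th entry:
    q^{T0}_n = (S_rho^r q^T)_{(2rho+1) n}."""
    out = np.asarray(q, dtype=float)
    for _ in range(r):
        out = S_rho(out, rho)
    return out[::2 * rho + 1]

rho = 3
q = np.sign(np.sin(np.linspace(0, 16 * np.pi, 700)))   # stand-in quantized stream
q_dec = decimate(q, rho, r=1)
print(f"stored values: {len(q)} -> {len(q_dec)} (reduction factor {2 * rho + 1})")
```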
1.2. Outline and Results. In this paper, we formulate and prove Theorems 3.5 and 3.8, which extend Theorem 1.4 to finite frames. In particular, using our notion of alternative decimation, which will be defined in Section 3, we prove exponential error decay with respect to the total number of bits used. To provide the necessary background, we include preliminaries on signal quantization theory for finite frames in Section 2. We first define $\Sigma\Delta$ quantization on finite frames in Section 2.1. Then, we give a formal definition of noise shaping schemes, which are more general than $\Sigma\Delta$ quantization, in Section 2.2. Section 2.3 is devoted to perspective and prior works, and our notation is defined in Section 2.4. In Section 3, we define alternative decimation and state our main results. Theorem 3.4 is a special case of Theorem 3.5, where we restrict ourselves to finite harmonic frames, a subclass of unitarily generated frames. The same result for unitarily generated frames satisfying certain mild conditions is proven in Theorem 3.5, and it is further extended to the second order in Theorem 3.8. The multiplicative structure of decimation is proven in Theorem 3.7, and this enables us to perform decimation iteratively. We prove Theorems 3.4, 3.5, 3.7, and 3.8 in Sections 4, 5, 6, and 7, respectively. Generalization to orders greater than two is not possible with our current construction, and we illustrate the main obstacle in Appendix A. Decimation for arbitrary orders can be achieved with a different approach and will be introduced in a sequel. Numerical experiments are given in Appendix B.

2. Preliminaries on Finite Frame Quantization

Signal quantization theory on finite frames is well motivated by the need to deal with data corruption or erasure [18,17]. There, the authors considered the PCM quantization scheme described above and modeled the quantization error as random noise. In [3], a deterministic analysis of $\Sigma\Delta$ quantization for finite frames showed that a linear error decay rate is obtained with respect to the oversampling ratio; moreover, if the frame satisfies certain smoothness conditions, the decay rate can be super-linear for first order $\Sigma\Delta$ quantization. Noise shaping schemes for finite frames have also been investigated, some of which yield an exponential error decay rate [7,6,8]. In this section, we provide the necessary background on quantization for finite frames before stating our results in Section 3.

2.1. $\Sigma\Delta$ Quantization on Finite Frames. Fix a separable Hilbert space $H$ along with a set of vectors $T = \{e_j\}_{j\in\mathbb{Z}} \subset H$. The collection $T$ forms a frame for $H$ if there exist $A, B > 0$ such that for any $v \in H$,
$$A\|v\|^2 \le \sum_j |\langle v, e_j\rangle|^2 \le B\|v\|^2.$$
The concept of frames is a generalization of orthonormal bases in a vector space. Unlike bases, frames are usually over-complete: the vectors form a linearly dependent spanning set. The over-completeness of frames is particularly useful for noise reduction, and consequently frames are more robust against data corruption than orthonormal bases. Let us restrict ourselves to the case where $H = \mathbb{C}^k$ is a finite-dimensional Euclidean space and the frame consists of a finite number of vectors. Given a finite frame $T = \{e_j\}_{j=1}^m$, the linear operator $E: \mathbb{C}^k \to \mathbb{C}^m$ satisfying $Ev = \{\langle v, e_j\rangle\}_{j=1}^m$ is called the analysis operator. Its adjoint $E^*: \mathbb{C}^m \to \mathbb{C}^k$ satisfies $E^*c = \sum_{j=1}^m c_j e_j$ and is called the synthesis operator. The frame operator $S$ is defined by $S = E^*E: \mathbb{C}^k \to \mathbb{C}^k$. Under this framework, one considers quantized samples $q$ of $y = Ex$ and reconstructs $\tilde x = Fq$ for a suitable dual $F$.

The frame-theoretic greedy $\Sigma\Delta$ quantization is defined as follows: given a finite alphabet $A \subset \mathbb{C}$, consider auxiliary variables $\{u_n\}_{n=0}^m$ with $u_0 = 0$. For $n = 1, \ldots, m$, we calculate $\{q_n\}_n$ and $\{u_n\}_n$ as
(6) $q_n = Q_0(u_{n-1} + y_n)$, $u_n = u_{n-1} + y_n - q_n$,
where $Q_0$ is defined in (2). In matrix form, we have
(7) $y - q = \Delta u$,
where $\Delta \in \mathbb{Z}^{m\times m}$ is the backward difference matrix, i.e., $\Delta_{i,i} = 1$ for all $1 \le i \le m$ and $\Delta_{i,i-1} = -1$ for $2 \le i \le m$. For an $r$-th order $\Sigma\Delta$ quantization, we have instead $y - q = \Delta^r u$. In practice, the quantization alphabet $A$ is often chosen to be $A_0$, which is uniformly spaced and symmetric around the origin: given $\delta > 0$, the mid-rise uniform quantizer with $2L$ levels is $A_0 = \{\pm(2l-1)\delta/2 : l = 1, \ldots, L\}$. For complex Euclidean spaces, we define $A = A_0 + \imath A_0$. In both cases, $A$ is called a mid-rise uniform quantizer. Throughout this paper we always use $A$ as our quantization alphabet.
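Since everything in (6) and (7) is finite-dimensional, the identity $y - q = \Delta u$ can be verified directly in a few lines. The following Python sketch does so on a random unit-norm frame and a random signal (all sizes and random inputs are illustrative stand-ins, not the paper's experimental setup), then reconstructs with the canonical dual.

```python
import numpy as np

rng = np.random.default_rng(0)
m, k, delta, L = 60, 5, 0.5, 100

# Complex mid-rise alphabet A = A_0 + i*A_0 with A_0 = {(2s+1)δ/2 : -L <= s <= L-1}.
A0 = (2 * np.arange(-L, L) + 1) * delta / 2
def Q0(z):
    re = A0[np.argmin(np.abs(A0 - z.real))]
    im = A0[np.argmin(np.abs(A0 - z.imag))]
    return re + 1j * im

E = rng.standard_normal((m, k)) + 1j * rng.standard_normal((m, k))
E /= np.linalg.norm(E, axis=1, keepdims=True)        # unit-norm frame vectors
x = rng.standard_normal(k) + 1j * rng.standard_normal(k)
y = E @ x

# Greedy first-order scheme (6): q_n = Q0(u_{n-1} + y_n), u_n = u_{n-1} + y_n - q_n.
u = np.zeros(m + 1, dtype=complex)
q = np.zeros(m, dtype=complex)
for n in range(m):
    q[n] = Q0(u[n] + y[n])
    u[n + 1] = u[n] + y[n] - q[n]

Delta = np.eye(m) - np.eye(m, k=-1)                  # backward difference matrix
assert np.allclose(y - q, Delta @ u[1:])             # matrix identity (7)
x_hat = np.linalg.pinv(E) @ q                        # canonical dual reconstruction
print("reconstruction error:", np.linalg.norm(x - x_hat))
```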
2.2. Noise Shaping Schemes and the Choice of Dual Frames. $\Sigma\Delta$ quantization is a subclass of the more general noise shaping quantization, where the quantization scheme is designed so that the reconstruction error is easily separated from the true signal in the frequency domain. For instance, it is pointed out in [8] that the reconstruction error of $\Sigma\Delta$ quantization for bandlimited functions is concentrated in high frequency ranges. Since audio signals have finite bandwidth, it is then possible to separate the signal from the error using low-pass filters. Noise shaping quantization has been well established for A/D conversion since the mid-20th century [23], and in terms of finite frames, noise shaping schemes generalize the $\Sigma\Delta$ scheme in the following way:
$$y - q = Hu,$$
where $y$, $q$, and $u$ are the samples, quantized samples, and auxiliary variable, respectively, while the transfer matrix $H$ is lower-triangular. Now, given an analysis operator $E$, a transfer matrix $H$, and a dual $F$ to $E$, i.e., $FE = I_k$, the reconstruction error in this setting is
$$\|x - Fq\| = \|FHu\| \le \|FH\|_{\infty,2}\,\|u\|_\infty,$$
where $\|\cdot\|_{\infty,2}$ is the operator norm between $\ell^\infty$ and $\ell^2$, i.e., $\|M\|_{\infty,2} := \sup_{\|z\|_\infty \le 1}\|Mz\|_2$. The choice of the dual frame $F$ plays a role in the reconstruction error. For instance, [4] proved that
$$\arg\min_{FE=I_k}\|FH\|_2 = (H^{-1}E)^\dagger H^{-1},$$
where, for any matrix $A$, $A^\dagger$ is defined as the canonical dual $(A^*A)^{-1}A^*$. More generally, one can consider a $V$-dual, namely $(VE)^\dagger V$, provided that $VE$ is still a frame. With this terminology, decimation can be viewed as a special case of $V$-duals, and conversely every $V$-dual can be associated with a corresponding post-processing of the quantized sample $q$.

2.3. Perspective and Prior Works.

2.3.1. Quantization for Bandlimited Functions. Despite its simple form and robustness, $\Sigma\Delta$ quantization only results in linear error decay with respect to the sampling period $T$ as $T \to 0$. It was shown [10,9,15] that a generalization of $\Sigma\Delta$ quantization, namely the $r$-th order $\Sigma\Delta$ quantization, has an error decay rate of polynomial order $r$. Leveraging the different constants for this family of quantization schemes, sub-exponential decay can also be achieved. A different family of quantization schemes was proven [19] to yield exponential error decay with a small exponent ($c \approx 0.07$). In [13], the exponent was improved to $c \approx 0.102$.

2.3.2. Finite Frames. $\Sigma\Delta$ quantization can also be applied to finite frames. It was proven [3] that for any family of finite frames with bounded frame variation, the reconstruction error decays linearly with respect to the oversampling ratio $m/k$, where the corresponding analysis operator $E$ is an $m\times k$ matrix. With different choices of dual frames, [4] proved that the so-called Sobolev dual achieves the minimum induced matrix 2-norm for reconstruction. By carefully matching the dual frame to the quantization scheme, [8] proved that using the $\beta$-dual for random frames results in exponential error decay with near-optimal exponent with high probability.

2.3.3. Decimation. In [5], under the assumption that the noise in $\Sigma\Delta$ quantization is random, it was asserted that decimation greatly reduces the number of bits needed while maintaining the reconstruction accuracy. In [12], a rigorous proof was given to show that the assertion is indeed valid, and the reduction of bits used improves the linear error decay into exponential error decay with respect to the bit-rate.

2.3.4. Beta Dual of Distributed Noise Shaping. Chou and Günturk [8,6] proposed a distributed noise shaping quantization scheme with beta duals as an example. The definition of a beta dual is as follows:

Definition 2.1 (Beta Dual). Let $E \in \mathbb{R}^{m\times k}$ be an analysis operator with $k \mid m$. The $\beta$-dual is the $V$-dual whose $V$ is block-diagonal with $k$ diagonal blocks, each equal to the row vector $(\beta^{-1}, \beta^{-2}, \ldots, \beta^{-m/k})$. In this case, the transfer matrix $H$ is an $m$-by-$m$ block matrix where each block $h$ is an $m/k$-by-$m/k$ matrix with unit diagonal entries and $-\beta$ as sub-diagonal entries.

Under this setting, it is proven that the reconstruction error decays exponentially. One may notice the similarity between the beta dual and decimation. Indeed, if one chooses $\beta = 1$ and normalizes $V$ by $k/m$, the same result as decimation can be obtained, achieving linear error decay with respect to the oversampling ratio and exponential decay with respect to the bit usage. Nonetheless, its generalization to higher order error decay with respect to the oversampling ratio is lacking, whereas the alternative decimation we propose can be extended to the second order. In particular, the raw performance of second order decimation is superior to the $1$-dual under the same oversampling ratio.
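The $V$-dual machinery is easy to exercise numerically. The sketch below, under illustrative sizes and a random real frame, compares the canonical dual with the block-averaging $V$-dual corresponding to the $\beta = 1$ case just discussed; the helper names are ours.

```python
import numpy as np

rng = np.random.default_rng(1)
m, k = 60, 5                           # k | m; block length m//k = 12

E = rng.standard_normal((m, k))
Delta = np.eye(m) - np.eye(m, k=-1)
e = Delta @ rng.uniform(-0.5, 0.5, m)  # first-order shaped error, y - q = Δu

def v_dual(E, V):
    """F = (VE)^† V with A^† = (A*A)^{-1} A*, as in the text."""
    VE = V @ E
    return np.linalg.inv(VE.T @ VE) @ VE.T @ V

# Block-averaging V (the β = 1 case, normalized by k/m): each of the k rows
# computes the mean of one disjoint block of length m/k.
V = np.kron(np.eye(k), np.full((1, m // k), k / m))

for name, F in [("canonical dual", np.linalg.pinv(E)),
                ("block-average V-dual", v_dual(E, V))]:
    print(f"{name}: ||F(y - q)|| = {np.linalg.norm(F @ e):.4f}")
```

Within each block the averaging telescopes the differenced noise, which is the mechanism that decimation formalizes.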
2.4. Notation. The following notation is used in this paper:
• $x \in \mathbb{C}^k$: the signal of interest.
• $E \in \mathbb{C}^{m\times k}$: a fixed frame.
• $y = Ex \in \mathbb{C}^m$: the sample.
• $\rho \in \mathbb{N}$: the block size of the decimation.
• $q \in \mathbb{C}^m$: the quantized sample obtained from the greedy $\Sigma\Delta$ quantization defined in (6).
• $u \in \mathbb{C}^m$: the auxiliary variable of $\Sigma\Delta$ quantization.
• $F \in \mathbb{C}^{k\times m}$: a dual to the analysis operator $E$, i.e., $FE = I_k$.
• $R$: total number of bits used to record the quantized sample.
• $\Omega \in \mathbb{C}^{k\times k}$: a Hermitian matrix with eigenvalues $\{\lambda_j\}_{j=1}^k \subset \mathbb{R}$ and corresponding orthonormal eigenvectors $\{v_j\}_{j=1}^k$.
• $\Phi \in \mathbb{C}^{m\times k}$: the analysis operator of the unitarily generated frame (UGF) with generator $\Omega$ and base vector $\phi_0 \in \mathbb{C}^k$.

3. Main Results

For the rest of the paper, we assume that our $\Sigma\Delta$ quantization scheme is stable, i.e., $\|u\|_\infty$ remains bounded as the dimension $m \to \infty$. Before we state our results, we define the notion of a unitarily generated frame.

3.1. Unitarily Generated Frames. A unitarily generated frame $T_u$ is generated by a cyclic group: given a unit base vector $\phi_0 \in \mathbb{C}^k$ and a Hermitian matrix $\Omega \in \mathbb{C}^{k\times k}$, the frame elements of $T_u$ are defined as $\phi_j = U_{j/m}\,\phi_0$ for $j = 1, \ldots, m$, where $U_t = e^{2\pi\imath t\Omega}$. The analysis operator $\Phi$ of $T_u$ has $\{\phi_j^*\}_j$ as its rows. As symmetry occurs naturally in many applications, it is not surprising that unitarily generated frames have received serious attention, and their applications in signal processing abound [16,14,6,8]. One particular application comes from dynamical sampling, which records the spatiotemporal samples of a signal of interest. Mathematically speaking, one tries to recover a signal from samples of its evolution under repeated applications of an operator, which aligns with frame reconstruction problems [1,2]. In particular, Lu and Vetterli [21,22] investigated the reconstruction from spatiotemporal samples for a diffusion process. They noted that one can compensate under-sampled spatial information with sufficiently over-sampled temporal data. Unitarily generated frames represent the cases where the evolution process is unitary and the spatial information is one-dimensional. It should be noted that unitarily generated frames are group frames with generator $G = U_{1/m}$ provided that $U_1 = G^m = I_k$, while harmonic frames are tight unitarily generated frames. Here, a frame is called tight if its frame operator $S = E^*E$ is a constant multiple of the identity. A special class of harmonic frames that we shall discuss is the exponential frame, with generator $\Omega$ a diagonal matrix with integer entries and base vector $\phi_0 = (1, \ldots, 1)^t/\sqrt{k}$.

3.2. Main Theorems. It will be shown that, for unitarily generated frames $\Phi$ satisfying the conditions specified in Theorem 3.5, $\Sigma\Delta$ quantization coupled with alternative decimation still has a linear reconstruction error decay rate with respect to the oversampling ratio $\rho$. As for data storage, decimation allows for highly efficient storage, and the error decays exponentially with respect to the number of bits used. Here, the cyclic convention is adopted: for any $s \in \mathbb{Z}$, $s \equiv s + m$.
Definition 3.1 (Alternative Decimation). The alternative decimation operator is $D_\rho S_\rho$ (more generally, $D_\rho S^r_\rho$ for order $r$), where:
• $S_\rho \in \mathbb{R}^{m\times m}$ is the averaging operator of block size $\rho$: each row of $S_\rho$ averages $\rho$ consecutive entries of its input (its precise form is displayed in (12));
• $D_\rho \in \mathbb{N}^{\eta\times m}$ is the sub-sampling operator satisfying $(D_\rho v)_t = v_{t\rho}$ for $t = 1, \ldots, \eta$; that is, $D_\rho$ retains only the $t\rho$-th rows for $t \in [\eta]$.

Remark 3.2 (Canonical Decimation $D_\rho\tilde S_\rho$ and Alternative Decimation $D_\rho S_\rho$). It is tempting to consider a closely related circulant matrix $\tilde S_\rho$ that satisfies $S_\rho = \tilde S_\rho - L$, where $L$ is constant on the first $(\rho-1)$ rows and zero otherwise. Visually, $\tilde S_\rho$ and $S_\rho$ have the forms displayed in (12). Indeed, $D_\rho\tilde S_\rho = D_\rho S_\rho$, so there is no difference between alternative and canonical decimation at first order. However, we will show in Appendix B.2 that $D_\rho\tilde S^2_\rho \ne D_\rho S^2_\rho$, and it is necessary to consider $D_\rho S^2_\rho$ instead of $D_\rho\tilde S^2_\rho$ for second order decimation.

Definition 3.3 (Frame variation). Given a frame $\{f_j\}_{j=1}^m$, its frame variation is $\sigma(\{f_j\}) := \sum_{j=1}^{m-1}\|f_j - f_{j+1}\|$.

Theorem 3.4 (Decimation for Finite Harmonic Frames). Let $E = E_{m,k}$ be a finite harmonic frame and $\eta = m/\rho$. Then the following statements are true:
(a) Signal reconstruction: the matrix $D_\rho S_\rho E \in \mathbb{C}^{\eta\times k}$ has rank $k$.
(b) Error estimate: the reconstruction error decays linearly with respect to the oversampling ratio $m/k$; if $m, k$ are even and the $n_j$'s are nonzero, a sharper estimate is available.
(c) Efficient data storage: suppose the length of the quantization alphabet $A$ is $2L$; then the decimated samples $D_\rho S_\rho q$ can be encoded by a total of $R = 2(m/\rho)\log(2L\rho) = 2\eta\log(2L\rho)$ bits. Furthermore, suppose $\eta$ is fixed as $m \to \infty$; then, as a function of the total number of bits used, the reconstruction error $\mathcal{E}$ is of order $2^{-R/(2\eta)}$. For $\rho \mid m$ we have a better estimate, and the optimal exponent $\frac{1}{2k}$ is achieved in the case $\rho = m/k \in \mathbb{N}$.

The more general result is as follows:

Theorem 3.5 (Decimation for Unitarily Generated Frames (UGF)). Given $\Omega$, $\phi_0$, $\{\lambda_j\}_j$, $\{v_j\}_j$, and $\Phi = \Phi_{m,k}$ as the generator, base vector, eigenvalues, eigenvectors, and analysis operator of the corresponding UGF, respectively, suppose the rank conditions made precise in Section 5 hold, and let $\eta = m/\rho$. Then the following statements are true:
(a) Signal reconstruction: $D_\rho S_\rho\Phi_{m,k} \in \mathbb{C}^{\eta\times k}$ has rank $k$.
(b) Error estimate: for the dual frame $F = (D_\rho S_\rho\Phi_{m,k})^\dagger D_\rho S_\rho$, the reconstruction error $\mathcal{E}_{m,\rho}$ decays linearly with respect to the oversampling ratio $\rho$.
(c) Efficient data storage: suppose the length of the quantization alphabet is $2L$; then the total number of bits used to record the quantized samples is $R = 2\eta\log(2L\rho)$ bits. Furthermore, suppose $\eta = m/\rho$ is fixed as $m \to \infty$; then, as a function of the total number of bits used, $\mathcal{E}_{m,\rho}$ decays exponentially, of order $2^{-R/(2\eta)}$.

Remark 3.6. For Theorems 3.4 and 3.5, if both the signal and the frame are real, then the total number of bits used will be $R = \eta\log(2L\rho)$ bits, half the amount needed for the complex case.

One additional property of decimation is its multiplicative structure (Theorem 3.7), which makes it possible to perform decimation iteratively. Besides the first order alternative decimation in Theorem 3.5, it is also possible to generalize the result to second order decimation. For such a decimation process, the reconstruction error decays quadratically (as opposed to linearly in Theorem 3.5) with respect to the oversampling ratio $\rho$ and exponentially with respect to the bit usage.

Theorem 3.8 (Second Order Decimation for UGF). With the same assumptions as Theorem 3.5 and the additional requirement that the eigenvalues are nonzero, the following statements are true:
(a) Signal reconstruction: $D_\rho S^2_\rho\Phi_{m,k} \in \mathbb{C}^{\eta\times k}$ has rank $k$.
(b) Error estimate: for the dual frame $F = (D_\rho S^2_\rho\Phi_{m,k})^\dagger D_\rho S^2_\rho$, the reconstruction error $\mathcal{E}_{m,\rho,r}$ has quadratic decay with respect to the oversampling ratio $\rho$: $\mathcal{E}_{m,\rho,r} = O(\rho^{-2})$.
(c) Efficient data storage: suppose the length of the quantization alphabet is $2L$; then the total number of bits used to record the quantized samples is $R = 4\eta\log(2Lm)$ bits. Furthermore, suppose $\eta = m/\rho$ is fixed as $m \to \infty$; then, as a function of the total number of bits used, $\mathcal{E}_{m,\rho}$ decays exponentially.

To better demonstrate the ideas in the proof, Theorem 3.4 will be proven separately in Section 4 even though it is essentially a special case of Theorem 3.5. Theorem 3.5 will be proven in Section 5, and Theorem 3.7 in Section 6. The proof of Theorem 3.8 is given in Section 7.
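Since $D_\rho$, $S_\rho$, and $E$ are explicit matrices, the rank statement in Theorem 3.4(a) can be spot-checked numerically. The sketch below uses the exponential frame from Appendix B together with the circulant variant $\tilde S_\rho$, which by Remark 3.2 agrees with $S_\rho$ after sub-sampling at first order; the sizes are illustrative.

```python
import numpy as np

m, k, rho = 60, 5, 6
eta = m // rho

# Harmonic frame E_{l,j} = exp(-2πi(l+1)(j+1)/m)/sqrt(k), as in Appendix B.
l, j = np.meshgrid(np.arange(m), np.arange(k), indexing="ij")
E = np.exp(-2j * np.pi * (l + 1) * (j + 1) / m) / np.sqrt(k)

# Circulant backward averaging S~_rho: each row averages rho consecutive
# entries (cyclically). Since D_rho S_rho = D_rho S~_rho at first order
# (Remark 3.2), the rank check is insensitive to the boundary rows that
# distinguish alternative from canonical decimation.
S = np.zeros((m, m))
for i in range(m):
    S[i, (i - np.arange(rho)) % m] = 1 / rho

D = np.eye(m)[rho - 1::rho]          # sub-sampling: keep every rho-th row

A = D @ S @ E
print("shape of D S E:", A.shape, "| rank:", np.linalg.matrix_rank(A), "| k =", k)
```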
4. Decimation for Finite Harmonic Frames

To prove Theorem 3.4, we break the proof into the following steps: first, we investigate properties of $D_\rho S_\rho E$, the decimated version of the frame $E$; then, we examine the effect of $D_\rho S_\rho\Delta$, which is essential for our error estimate. A direct computation shows that $S_\rho E = EC + K$, where $C$ is diagonal and $K$ vanishes except possibly in the $j_0$-th column. In either case, $D_\rho S_\rho E = D_\rho EC$, as $D_\rho K = 0$.

Remark 4.2. In (12), one observes that $S_\rho$ differs from an actual circulant matrix $\tilde S_\rho$ by a matrix $L$ with $1/\rho$ on entries of the first $\rho - 1$ rows and zero otherwise. Since $D_\rho L = 0$, we can conclude that $D_\rho S_\rho = D_\rho\tilde S_\rho$. Thus, it is possible to consider $D_\rho\tilde S_\rho$, which is a more natural formulation of decimation than the alternative decimation.

For $\rho \le l \le m$, the entries $(S_\rho E)_{l,j}$ can be computed directly; if $n_j = 0$, then $(S_\rho E)_{l,j} = \frac{1}{\sqrt{k}} = E_{l,j}$. For $l \le \rho$, we make the analogous observation with the cyclic convention on indices. Then, for $l \le \rho - 1$, the same expression holds, noting that $\exp(-2\pi\imath n_j(s+m)/m) = \exp(-2\pi\imath n_j s/m)$. Now we can give the conditions under which $D_\rho S_\rho E$ has full rank: Proposition 4.3 states equivalent conditions for $D_\rho S_\rho E$ to have full rank.

Proof of Theorem 3.4. Adopting the notation above, we expand the reconstruction error term by term, where the second equality comes from (7), and the third follows from Proposition 4.8 along with the fact that $\tilde F = \tilde C^{-1}F'$, with $F'$ being the canonical dual frame to $D_\rho E$. By Proposition 4.6 and (33), the error is controlled by the frame variation. For the case $\rho \mid m$, we note that $E_{m/\rho,k}$ is a tight frame with frame bound $\frac{m}{k\rho}$; in particular, $(E_{m/\rho,k})^*E_{m/\rho,k} = \frac{m}{k\rho}I_k$. Thus, by Lemma 4.9, we obtain the claimed error bound. Furthermore, by Lemma 4.10, if $m, k$ are even, the $n_j$'s are all nonzero, and $\rho \mid m$, then $u_{\eta\rho} = u_m = 0$, and with that there is a better estimate. Letting $F = \tilde F D_\rho S_\rho$, Theorem 3.4(b) is now proven.

For Theorem 3.4(c), note that for mid-rise uniform quantizers $A = A_0 + \imath A_0$ of length $2L$, each entry $q_j$ of $q \in \mathbb{C}^m$ is of the form $q_j = \frac{(2s_j+1) + \imath(2t_j+1)}{2}\,\delta$. Then each entry in $D_\rho S_\rho q$ is the average of $\rho$ entries of $q$, which has the form $\frac{(2\tilde s_j+\rho) + \imath(2\tilde t_j+\rho)}{2\rho}\,\delta$. There are at most $((2L-1)\rho+1)^2 \le (2L\rho)^2$ choices per entry, with $\eta = m/\rho$ entries in total. Thus, the vector $D_\rho S_\rho q$ can be encoded by $R = 2\eta\log(2L\rho)$ bits. Noting that $\frac{1}{m} \le \frac{1}{\eta}\cdot\frac{1}{\rho}$, any estimate of the above form yields an exponential bound in $R$ for some $C > 0$. Substituting the suitable constant for each case, we obtain a bound with $C_{F,L} \le \pi L(\sigma(F) + \|F_\eta\|_2)$; if $\rho \mid m$, then by (35) and (36) the constant improves to $C_{k,L} \le \frac{\pi kL}{\eta}\left(\frac{2\pi(k+1)}{\sqrt{3}} + 1\right)$, independent of $\rho$.

5. Generalization: Decimation on Unitarily Generated Frames

Upon examining the proof of Theorem 3.4, one can see the following interaction between decimation and the existing sampling scheme:
• Commutativity: fixing the $\Sigma\Delta$ quantization scheme for now, any family of frames satisfying the commutativity condition is compatible with decimation, yielding exponential error decay with respect to the bit usage. One example is the family of unitarily generated frames. The collection of such elements $\{\phi_j\}_{j=1}^m$ is the frame of interest.

Lemma 5.1. For the same $D_\rho$ and $S_\rho$, along with the analysis operator $\Phi \in \mathbb{C}^{m\times k}$ of $T_u$ generated by $(\Omega, \{\lambda_j\}_{j=1}^k, \phi_0)$, the decimated frame factors as $D_\rho S_\rho\Phi = D_\rho\Phi\,\tilde C_{m,\rho}$.

Proof. First, note that $S_\rho = \tilde S_\rho + L$, where $L$ has value $1/\rho$ on the first $\rho - 1$ rows and $0$ otherwise, and $D_\rho L = 0$. Moreover, for any $1 \le t \le m$, the $t$-th row of $\tilde S_\rho\Phi$ can be computed from the group structure.

Now we can find the conditions under which $D_\rho S_\rho\Phi_{m,k}$ has full rank; in particular, the frame operator of the decimated frame can be computed explicitly. Proof.
Suppose the assumptions above hold. Then, given an arbitrary vector, a direct computation establishes the claim, where the second equality follows from the fact that $U_t$ is unitary, the fourth from expanding the sums, and the last one from an explicit summation identity. Also, we need to consider the frame variation of $\Phi^*_{m/\rho,k}$.

Proof. Following the same process as in Lemma 4.9, we see that the frame variation is controlled analogously. Now we are ready to prove Theorem 3.5.

6. The Multiplicative Structure of Decimation

The multiplicative property implies the possibility of conducting decimation in multiple steps, gradually down-sizing the dimension $m$. This can be particularly useful for parallel computation and for transmission of data through multiple devices with scarce storage resources. In particular, at each stage it suffices to choose $\rho_j$ to be a small number dividing $m$. This reduces the waiting time between transmissions, and the amplification of the quantized sample $q$ will not be large after each stage. Moreover, although the case $\rho \nmid m$ does not produce this structure for frames, it is now possible to first reduce $m$ to a number closer to $k$; only at the last stage do we choose a $\rho$ that does not divide $m$. This yields the same result as the direct division $m/k$ by the remark above, while possibly gaining a sharper estimate on the error.

7. Extension to Second Order Decimation

So far, we have only defined decimation for first order $\Sigma\Delta$ quantization, while its counterpart for bandlimited functions, introduced in Section 1, applies to arbitrary orders. Due to the boundary effect in finite-dimensional spaces, it is harder to extend decimation to arbitrary orders. However, there is no issue generalizing the concept to the second order, as stated in Theorem 3.8. To prove the theorem, we shall need the following lemmas. The first provides the factorization $\tilde S^r_\rho\Phi = \Phi\,\tilde C^r_{m,\rho}$, where
$$\tilde C_{m,\rho} = \frac{1}{\rho}\sum_{s=1}^{\rho} U^*_{(s-\rho)/m}$$
has eigenvalues $\left\{e^{\pi\imath(\rho-1)\lambda_j/m}\,\frac{\sin(\rho\lambda_j\pi/m)}{\rho\sin(\lambda_j\pi/m)}\right\}_j$; in particular, the factorization holds for any $r \in \mathbb{N}$. The proof is very similar to that of Lemma 5.1. However, since we are now dealing with $D_\rho S^r_\rho$, we are no longer able to use the fact that $D_\rho L = 0$. Instead, we impose the condition that $U_{1/m}$ has no eigenvalue equal to $1$.

Proof. First, note that if $\mathbf{1} \in \mathbb{C}^m$ is the constant vector with value $1$, then $\mathbf{1}^*\Phi = 0$, since $U_{1/m}$ has no eigenvalue equal to $1$. Thus, for $r \in \mathbb{N}$, the claim follows by induction on $r$.

Proof. For $s \ne m$, a direct computation of the entries gives the claim, where the term $\delta(s+\rho) = \delta(s-(m-\rho))$ comes from the second term in the second-to-last line; when $s + 1 + \rho = m + 1$, the term $\delta(s+1+\rho-j)$ wraps around, producing an additional $-1$. When $s = m$, a similar computation applies. Combining the two equations above, we see that $\Delta^{-1}\tilde\Delta_\rho\Delta = \tilde\Delta_\rho + E$.

Proof. Let $\{v_s\}_s$ be the canonical basis of $\mathbb{C}^\eta$. Then, by Proposition 7.5, we obtain the claimed bound.

Lemma 7.8 (Total Number of Bits Used). Given a mid-rise quantizer $A = A_0 + \imath A_0$ with length $2L$ and $r \in \mathbb{N}$, if $q \in A^m$ is a quantized sample from the alphabet, then $D_\rho S^r_\rho q \in \mathbb{C}^\eta$ can be encoded by $\eta\cdot 2r\log(2Lm)$ bits.

Proof. Given the assumption above, each entry $q_j$ of $q$ is a number of the form $q_j = \frac{(2s_j+1) + \imath(2t_j+1)}{2}\,\delta$. Then, each entry in $S_\rho q$ is the average of $\rho$ entries of $q$, which has the form
$$(S_\rho q)_j = \frac{(2\tilde s_j+\rho) + \imath(2\tilde t_j+\rho)}{2\rho}\,\delta, \qquad -Lm \le \tilde s_j, \tilde t_j \le (L-1)m.$$
There are at most $((2L-1)m+1)^2 \le (2Lm)^2$ choices per entry. Note that there are $(2Lm)^2$ choices instead of $(2L\rho)^2$, as we need to account for the first $\rho - 1$ rows, which sum $m - \rho$ terms. Iterating $r$ times, there are $(2Lm)^{2r}$ choices for each entry of $S^r_\rho q$. Thus, the vector $D_\rho S^r_\rho q$ can be encoded by $R = \eta\cdot 2r\log(2Lm)$ bits.

Proof of Theorem 3.8. To estimate the reconstruction error, we expand it as follows, where $\{v_j\}_j \subset \mathbb{C}^m$ denotes the canonical basis in $\mathbb{C}^m$; the first inequality comes from Proposition 7.7, and the second follows from Lemma 7.6.
Here, we see that the error decays quadratically with respect to the oversampling ratio $\rho$. As for the bits used, note that $\frac{1}{m} = \frac{1}{\eta}\cdot\frac{1}{\rho}$, where $R = \eta\cdot 4\log(2Lm)$ comes from Lemma 7.8. Thus, the error decays exponentially with respect to the bit usage.

Lemma 7.4 shows that $\Delta^{-1}$ and $\tilde\Delta_\rho$ do not commute, and this non-commutativity limits the potential to generalize alternative decimation to higher orders. For the sake of demonstration, we show an explicit calculation in Appendix A which highlights the difficulty in generalizing our results. Thus, to achieve exponential error decay with respect to the bit usage for higher order $\Sigma\Delta$ quantization schemes, we need to employ different approaches. The new scheme we propose will be published in a subsequent manuscript.

Acknowledgement. The author gratefully acknowledges the support of ARO Grant W911NF-17-1-0014, and John Benedetto for his thoughtful advice and insights. Further, the author appreciates the constructive analysis and suggestions of the referees.

Appendix A. Limitation of Alternative Decimation: Third Order Decimation

The non-commutativity between $\tilde\Delta_\rho$ and $\Delta^{-1}$ results in incomplete difference scaling when applying $D_\rho S^r_\rho$ to $\Delta^r$, creating substantial error terms. This phenomenon already occurs for $r = 3$.

Proposition A.1. Given $m, \rho \in \mathbb{N}$ with $\rho \mid m$, the third order decimation operator $D_\rho S^3_\rho$ rescales $\Delta^3$ only incompletely; in particular, $D_\rho S^3_\rho$ only yields quadratic error decay with respect to the oversampling ratio $\rho$.

First, by noting that $\Delta^{-1}\tilde\Delta_\rho\Delta = \tilde\Delta_\rho + E$ as in Lemma 7.4, one obtains the expansion (49). We shall calculate all terms one by one.

Lemma A.2. We have the corresponding equalities for each term of the expansion.

Proof. We first compute each term without the effect of $D_\rho$, since $D_\rho$ is the sub-sampling matrix retaining only the $t\rho$-th rows for $t \in [\eta]$. Finally, as $\Delta^{-1}E\Delta$ only has non-zero entries in the $(m-\rho-1)$-th and $(m-\rho)$-th columns, and the two columns differ by a sign, it suffices to calculate the $(m-\rho)$-th column of $\tilde\Delta^2_\rho(\Delta^{-1}E\Delta)$.

Proof of Proposition A.1. From (49) and Lemma A.2, the claim follows. Even in higher order cases, alternative decimation still only yields quadratic error decay with respect to the oversampling ratio, as can be seen in Figures 2d and 2e. Alternative decimation is limited by this incomplete cancellation, but canonical decimation has even worse error decay: contrary to the quadratic decay of alternative decimation, canonical decimation only has linear decay for higher order $\Sigma\Delta$ quantization. The same applies to plain $\Sigma\Delta$ quantization, as can be seen in Figure 2b.

Appendix B. Numerical Experiments

Here, we present numerical evidence that alternative decimation on frames has linear and quadratic error decay rates for the first and second order, respectively. Moreover, it is shown that canonical decimation, as described in Remark 3.2, is not suitable for our purpose when $r \ge 2$. Recall that given $m, r, \rho$, one can define the canonical decimation operator $D_\rho\tilde S^r_\rho \in \mathbb{R}^{\eta\times m}$, where $\tilde S_\rho \in \mathbb{R}^{m\times m}$ is a circulant matrix.

B.1. Setting. In our experiments, we look at three different quantization schemes: alternative decimation, canonical decimation, and plain $\Sigma\Delta$. Given observed data $y \in \mathbb{C}^m$ from a frame $E \in \mathbb{C}^{m\times k}$ and $r \in \mathbb{N}$, one determines the quantized samples $q \in \mathbb{C}^m$ by $y - q = \Delta^r u$ for some bounded $u$. The three schemes differ in the choice of dual frames:
• Alternative decimation: $\tilde x = (D_\rho S^r_\rho E)^\dagger D_\rho S^r_\rho\, q = F_a q$;
• Canonical decimation: $\tilde x = (D_\rho\tilde S^r_\rho E)^\dagger D_\rho\tilde S^r_\rho\, q$;
• Plain $\Sigma\Delta$: $\tilde x = E^\dagger q$.
For each experiment, we use the mid-rise quantizer $A$ and fix $k = 55$, $\delta = 0.5$, $L = 100$, and $\eta = 65$.
For each $\rho$, we set $m = \rho\eta$ and pick $10$ randomly generated vectors $\{x_j\}_{j=1}^{10} \subset \mathbb{C}^k$. $\Sigma\Delta$ quantization of each signal gives $\{q_j\}_{j=1}^{10} \subset \mathbb{C}^m$. The maximum reconstruction error over the $10$ experiments is recorded, namely $\mathcal{E} = \max_{1\le j\le 10}\|x_j - \tilde x_j\|$. The frame in our experiments is
$$(E_{m,k})_{l,j} = \frac{1}{\sqrt{k}}\exp(-2\pi\imath(l+1)(j+1)/m).$$
First, we compare alternative decimation with plain $\Sigma\Delta$ quantization in Figure 2. For $r = 1$, alternative decimation performs worse than plain $\Sigma\Delta$ quantization, as plain $\Sigma\Delta$ quantization benefits from the smoothness of the frame elements, having decay rate $O((m/k)^{-5/4})$ as proven in [3]. However, for $r \ge 2$, alternative decimation supersedes plain $\Sigma\Delta$ quantization as the better scheme. This can be explained by the boundary effect in finite-dimensional spaces, which results in incomplete cancellation for backward difference matrices. We are interested in the cases $r = 1$ and $2$. As we can see, the theoretical error bound does not have a tight constant, although the decay rate is consistent with our experimental results.

B.2. Necessity of Alternative Decimation. The main difference between the alternative decimation operator $D_\rho S^r_\rho$ and the canonical one $D_\rho\tilde S^r_\rho$ lies in the scaling effect on difference structures. We have $\tilde S^r_\rho = (S_\rho + L)^r$, with $\rho L$ having unit entries on the first $\rho - 1$ rows and $0$ everywhere else. In Figure 2, we can see the performance drop-off when switching from alternative to canonical decimation for $r \ge 2$: canonical decimation incurs a much worse reconstruction error than the alternative one, while generally having a worse decay rate. For demonstration, we show explicitly the difference between the alternative and canonical decimation schemes for $r = 2$:
$$\tilde S^2_\rho\Delta^2 = (S_\rho + L)^2\Delta^2 = S^2_\rho\Delta^2 + (LS_\rho + S_\rho L + L^2)\Delta^2 = S^2_\rho\Delta^2 + L(S_\rho + L)\Delta^2 + S_\rho L\Delta^2.$$

Figure 2. The log-log plot of reconstruction error against the decimation ratio $\rho$ for different quantization schemes. In the case $r = 1$, alternative decimation coincides with canonical decimation. For $r \ge 2$, alternative decimation has a better error decay rate than both canonical decimation and plain $\Sigma\Delta$ quantization.
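A stripped-down version of this comparison is easy to reproduce at the operator level. The sketch below feeds a second-order shaped error $\Delta^2 u$ (with random bounded $u$, standing in for a stable quantizer's state) through the alternative and canonical duals on the exponential frame; the sizes are smaller than the paper's $k = 55$, $\eta = 65$ so it runs quickly, and the no-wrap truncation used for $S_\rho$ is our reading of Remark 3.2.

```python
import numpy as np

def operators(k, eta, rho):
    """Exponential frame plus the averaging and sub-sampling operators."""
    m = rho * eta
    l, j = np.meshgrid(np.arange(m), np.arange(k), indexing="ij")
    E = np.exp(-2j * np.pi * (l + 1) * (j + 1) / m) / np.sqrt(k)
    S_circ = np.zeros((m, m))                 # circulant backward averaging S~
    for i in range(m):
        S_circ[i, (i - np.arange(rho)) % m] = 1 / rho
    S_alt = np.tril(S_circ)                   # drop the wrap on the first rho-1 rows
    D = np.eye(m)[rho - 1::rho]               # keep every rho-th row
    return E, S_alt, S_circ, D, m

def v_dual(E, V):
    return np.linalg.pinv(V @ E) @ V          # F = (VE)^† V

rng = np.random.default_rng(0)
r = 2
for rho in (3, 5, 9, 15):
    E, S_alt, S_circ, D, m = operators(k=8, eta=12, rho=rho)
    Delta = np.eye(m) - np.eye(m, k=-1)
    e = np.linalg.matrix_power(Delta, r) @ rng.uniform(-0.5, 0.5, m)  # y - q = Δ^r u
    for name, S in [("alternative", S_alt), ("canonical  ", S_circ)]:
        F = v_dual(E, D @ np.linalg.matrix_power(S, r))
        print(f"r={r} rho={rho:2d} {name}: error = {np.linalg.norm(F @ e):.3e}")
```

This only probes the operator-level behavior (no actual quantizer is run), but it reflects the qualitative gap between the two schemes for $r \ge 2$.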
Effectiveness and safety of neoadjuvant apatinib in combination with capecitabine and oxaliplatin for the therapy of locally advanced colorectal cancer: A retrospective study

The goal of the present study was to appraise the efficacy and safety of neoadjuvant apatinib in combination with capecitabine and oxaliplatin (XELOX) in patients with locally advanced colorectal cancer (CRC), as relevant data on its usage in this setting are lacking. A retrospective analysis was implemented on 100 patients with locally advanced CRC who received either neoadjuvant apatinib in combination with XELOX (N=50) or neoadjuvant XELOX alone (N=50). Radiological response and pathological complete response rates were evaluated. Furthermore, data pertaining to disease-free survival (DFS), overall survival (OS) and adverse events were obtained. The results of the present study indicated that the neoadjuvant apatinib in combination with XELOX treatment approach yielded higher rates of radiological objective response (86.0 vs. 68.0%, P=0.032) and major pathological response (46.0 vs. 22.0%, P=0.011) compared with XELOX alone. These findings were further confirmed through multivariate logistic regression analyses (P=0.037 and P=0.008, respectively). Interestingly, the neoadjuvant apatinib in combination with XELOX treatment approach significantly prolonged DFS when compared with XELOX alone (P=0.033). In summary, the administration of neoadjuvant apatinib in combination with XELOX demonstrates superiority over the use of XELOX alone in terms of achieving a more favorable pathological response and a longer duration of DFS in patients diagnosed with locally advanced CRC.

Introduction

Colorectal cancer (CRC) is a prevalent malignant tumor worldwide, with significant mortality rates (1,2). Unfortunately, many patients already have locally advanced CRC at the time of initial diagnosis (3,4). In such cases, neoadjuvant therapy plays a crucial role in providing patients with more opportunities for subsequent surgical resection and improving their long-term survival outcomes (5-7). Currently, the primary neoadjuvant regimen for patients with locally advanced CRC involves platinum-based chemotherapy, such as capecitabine plus oxaliplatin (XELOX), and fluorouracil (8,9). Nonetheless, the effectiveness of these treatment protocols is deemed unsatisfactory (10). Consequently, it is imperative to devise alternative neoadjuvant regimens to manage patients with locally advanced CRC.
As an oral inhibitor of vascular endothelial growth factor receptor-2 (VEGFR2), apatinib possesses anti-angiogenic properties that are considered to regulate angiogenesis and β-catenin signaling, thereby inhibiting CRC cell proliferation, migration and invasion (11). Previous studies have established the efficacy and safety of combining apatinib with chemotherapy for the therapy of patients with advanced CRC (12,13). For instance, a meta-analysis demonstrated that the combination of apatinib and chemotherapy yields a favorable objective response rate (ORR), disease control rate (DCR) and survival rate, with manageable adverse reactions, among patients with advanced CRC (12). Furthermore, another study indicated that the combination of apatinib and chemotherapy enhances progression-free survival and exhibits acceptable tolerance in patients with refractory metastatic CRC (13). Nevertheless, there is a dearth of pertinent evidence concerning neoadjuvant apatinib in combination with XELOX in patients with locally advanced CRC.

The purpose of the present study was to investigate radiological response, pathological response, survival outcomes and adverse events in patients diagnosed with locally advanced CRC who underwent neoadjuvant treatment with apatinib and XELOX.

Patients. A retrospective analysis was conducted on a total of 100 patients with locally advanced CRC who received treatment at The Affiliated Hospital of Hebei University (Baoding, China) between January 2017 and January 2019. The inclusion criteria were: i) histologically or cytologically confirmed CRC; ii) a clinical stage of cT3-4b/N+/M0 for patients with rectal cancer or cT4b/N+/M0 for patients with colon cancer, as appraised by computed tomography (CT) or magnetic resonance imaging; iii) age >18 years; iv) an Eastern Cooperative Oncology Group performance status (ECOG PS) score of 0-1; v) receipt of surgical resection; and vi) accessible and available clinical data for study analysis. The exclusion criteria were: i) severe infection; ii) severe dysfunction of the liver or kidney; iii) coagulation disorders; iv) severe heart failure; and v) uncontrollable hypertensive disease. Clinical characteristics of the patients (including sex and age distribution) are included in Table I. The present study was approved (approval no. ChiECRCT20210395) by the Ethics Committee of The Affiliated Hospital of Hebei University (Baoding, China). Written informed consent was provided by each patient or their guardian (if the patient had died).
Data collection and treatment. Patient clinical characteristics, biochemical indices and treatment data were collected, along with poorly differentiated cluster (PDC) and tumor budding (TB) measurements. PDC was categorized as low (0-4) or high (≥5), while TB was classified as low (0-4 buds) or high (≥5 buds) based on the International Consortium on TB recommendations (14,15). Patients were stratified into two groups based on their neoadjuvant regimens: the XELOX group and the apatinib plus XELOX group. Neoadjuvant therapy was administered for three cycles, with each cycle lasting 21 days. The standard regimens were as follows: for the XELOX group, 130 mg/m² oxaliplatin was administered intravenously on day 1, and 1.0 g/m² capecitabine was given orally twice daily for 14 days followed by 7 days off; for the apatinib plus XELOX group, apatinib was given orally at 0.25 g/day in addition to XELOX. The dose of apatinib was determined according to the prescribing information, and the dose of neoadjuvant XELOX was determined according to clinical guidelines (16). Moreover, surgical information (laparoscopic radical resection or radical resection) was collected based on an assessment at 4-5 weeks after discontinuing neoadjuvant therapy.

Assessment. The imaging data obtained from patients after neoadjuvant treatment were utilized to assess clinical response based on the Response Evaluation Criteria in Solid Tumors version 1.1 (RECIST v1.1), and the post-neoadjuvant pathologic tumor-node-metastasis (ypTNM) stage was evaluated (17). Furthermore, Becker's grading system was employed to evaluate tumor regression grade (TRG) based on surgical information, graded as 0, 1, 2 or 3 (18). Major pathological response was defined as TRG 0-1. Additionally, follow-up information was collected to appraise the disease-free survival (DFS) and overall survival (OS) of patients, and adverse events were recorded.

Radiological and pathological comparisons between cohorts. The findings of the present study indicated a significant difference in radiological response between the apatinib plus XELOX group and the XELOX group (P=0.012), with the former exhibiting a more favorable outcome. The proportion of patients achieving ORR was also higher in the apatinib plus XELOX group vs. the XELOX group (86.0 vs. 68.0%) (P=0.032). However, the DCR did not differ between groups (98.0 vs. 86.0%) (P=0.059).

As a whole, based on Dworak's scale, tumor regression was staged as follows: TRG 0 in 5.0% of cases, TRG 1 in 29.0%, TRG 2 in 29.0% and TRG 3 in 37.0%. The TRG was superior in the apatinib plus XELOX group compared with the XELOX group (P<0.001). The apatinib plus XELOX group exhibited an increased rate of major pathological response compared with the XELOX group (46.0 vs. 22.0%) (P=0.011). With regard to the TNM stage following neoadjuvant therapy, no discernible difference was found in ypTNM stage (P=0.200) or TNM stage decline (60.0 vs. 46.0%) (P=0.161) between groups (Table III).

Associated factors with ORR and major pathological responses. A forward-stepwise multivariate logistic regression model was applied to recognize factors associated with ORR and major pathological response. The results indicated that treatment with apatinib plus XELOX, as opposed to XELOX alone, was independently associated with higher ORR rates in patients with locally advanced CRC [odds ratio (OR)=2.891, P=0.037], as depicted in Fig. 1A. Furthermore, treatment with apatinib plus XELOX was independently linked with higher rates of major pathological response (OR=3.431, P=0.008), while the distance of the tumor from the anus
(>5 cm vs. ≤5 cm) was independently related to lower rates of major pathological response (OR=0.354, P=0.032) in patients with locally advanced CRC, as demonstrated in Fig. 1B.

DFS and OS between cohorts. DFS was found to be significantly longer in the apatinib plus XELOX group compared with the XELOX group (P=0.033; Fig. 2A). Nevertheless, no significant difference was revealed in OS between the two groups (P=0.107; Fig. 2B).

Comparison of adverse events between cohorts. The present study revealed that the apatinib plus XELOX group exhibited a higher incidence of leukopenia (72.0 vs. 52.0%; P=0.039), neutropenia (48.0 vs. 12.0%; P<0.001) and anorexia (46.0 vs. 26.0%; P=0.037) compared with the XELOX group. No difference was disclosed in the incidences of other adverse events, including nausea and vomiting, thrombocytopenia, hypertension, proteinuria and decreased hemoglobin (all P>0.05). Notably, all adverse events were grade 1-2, and there were no grade 3-4 adverse events in either group (Table IV).

Discussion

VEGF pathway-mediated angiogenesis plays a crucial role in providing nutrients for tumor growth, thereby contributing to the progression of CRC (19). Apatinib, an oral antiangiogenic agent, has been shown to inhibit tumor angiogenesis by restraining VEGFR-2, which presents a promising treatment strategy for CRC (11,20). The present study demonstrated that neoadjuvant apatinib in combination with XELOX significantly increased the ORR and major pathological response rate compared with XELOX alone. In addition, the results of the present study revealed that apatinib in combination with XELOX improved radiological response compared with XELOX alone, which was attributed to apatinib enhancing the effect of conventional chemotherapy. This effect could be attributed to the ability of apatinib to restrain angiogenesis and the VEGFR2-β-catenin pathway, leading to tumor regression in patients with locally advanced CRC (11). Furthermore, apatinib promotes ferroptosis by targeting elongation of very long chain fatty acids protein 6/acyl-CoA synthetase long-chain family member 4 signaling in CRC cells, which eliminates CRC cells and inhibits CRC growth (21). Therefore, neoadjuvant apatinib in combination with XELOX improved the ORR and major pathological response in patients with locally advanced CRC.

The efficacy of current neoadjuvant chemotherapy for patients with CRC remains suboptimal, as evidenced by previous research (22-24). Specifically, studies have reported a 5-year OS rate of 67-76% in patients with locally advanced CRC who undergo neoadjuvant chemotherapy (23,24). By contrast, the present investigation demonstrated that patients with locally advanced CRC who received neoadjuvant apatinib in combination with XELOX had a 5-year DFS rate of 86.9% and a 5-year OS rate of 81.7%, which surpassed the outcomes of neoadjuvant XELOX alone in the present study.

Table II. Levels of NLR, PLR and CEA.
In addition to efficacy, the safety of neoadjuvant apatinib in combination with XELOX in patients with locally advanced CRC is a noteworthy issue. In the present study, neoadjuvant apatinib in combination with XELOX increased the incidences of leukopenia, neutropenia and anorexia compared with neoadjuvant XELOX alone. The possible reasons are as follows: i) apatinib restrains colony formation in the bone marrow by inhibiting VEGFR-2, causing myelosuppression and thereby decreasing leukocytes and neutrophils (27); and ii) apatinib inhibits VEGFR-2, which might lead to gastrointestinal mucosal injury and gastritis, thus increasing anorexia (28,29). Interestingly, hypertension and proteinuria are considered common adverse events associated with apatinib (30). A previous study reported that the incidences of hypertension and proteinuria in patients with advanced CRC who receive apatinib are 25.9 and 22.2%, respectively (31). Similar to that study, the incidences of hypertension and proteinuria in the present study were both 28% in the apatinib plus XELOX group. Additionally, no new adverse events occurred in the apatinib plus XELOX group. These results support the favorable tolerability of neoadjuvant apatinib in combination with XELOX in patients with locally advanced CRC. The findings of the present study indicate that clinicians need to pay attention to adverse events caused by apatinib and provide timely treatment.

Notably, the present study did not intervene in the neoadjuvant treatment regimens of patients with locally advanced CRC, and all regimens were selected based on the physicians' recommendations or the patients' wishes. In the present study, the majority of clinical characteristics did not differ between the two groups, although there was a lower proportion of patients with high TB in the apatinib plus XELOX group vs. the XELOX group. In detail, TB is a histological characteristic of tumor cells that represents the dissociation of a single cancer cell or clusters of up to four cancer cells from the invasive tumor front (32,33). The 2016 International TB Consensus Conference (ITBCC) indicated that TB is a well-established independent factor for predicting the prognosis of patients with CRC (15). Thus, the difference in TB between the two groups suggests that patients in the apatinib plus XELOX group might have had a better prognosis than those in the XELOX group, which might have influenced the results to some extent. However, the current study used forward-stepwise multivariate Cox's proportional hazard regression models to correct for confounding factors, which found that apatinib in combination with XELOX treatment was independently linked with prolonged DFS in patients with locally advanced CRC.
The present study has several limitations worth noting: i) the present study reviewed as many patients as possible who met the inclusion criteria and did not meet the exclusion criteria; however, the sample size was small, and further studies should include a larger sample size to verify the efficacy and safety of neoadjuvant apatinib in combination with XELOX in patients with locally advanced CRC; ii) the present study was retrospective, which might introduce a degree of bias, so future randomized, controlled studies are required for further verification; iii) neither neoadjuvant apatinib in combination with XELOX nor neoadjuvant XELOX alone was evaluated for quality of life; and iv) the conventional doses of apatinib used in patients with CRC are 0.25 g/day or 0.5 g/day (34), while the present study only used the 0.25 g/day dose; future studies should consider evaluating the clinical efficacy and safety of the 0.5 g/day dose of apatinib for neoadjuvant therapy in patients with locally advanced CRC.

In conclusion, the administration of neoadjuvant apatinib in combination with XELOX was found to enhance radiological and pathological responses, as well as improve DFS, with acceptable tolerance in patients diagnosed with locally advanced CRC. Neoadjuvant apatinib in combination with XELOX thus appears to be an effective and safe option for treating locally advanced CRC.

Figure 1. Independent factors related to ORR and major pathological response in patients with locally advanced CRC: independent predictors of (A) ORR and (B) major pathological response. Statistical methods: forward-stepwise multivariate logistic regression models. ORR, objective response rate; CRC, colorectal cancer.

Figure 2. DFS and OS in the apatinib plus XELOX group and the XELOX group in patients with locally advanced CRC: comparison of (A) DFS and (B) OS between the two groups. Statistical methods: Kaplan-Meier curves with a log-rank test. DFS, disease-free survival; OS, overall survival; XELOX, capecitabine plus oxaliplatin; CRC, colorectal cancer.

Figure 3. Independent factors related to DFS and OS in patients with locally advanced CRC: independent predictors of (A) DFS and (B) OS. Statistical methods: forward-stepwise multivariate Cox's proportional hazard regression models. DFS, disease-free survival; OS, overall survival; CRC, colorectal cancer.

The primary outcome of the present study was DFS. Factors linked with DFS and OS were determined using forward-stepwise and enter-method multivariate Cox's proportional hazard regression models. P<0.05 was considered to indicate a statistically significant difference.

In the apatinib plus XELOX group, there were 45 (90.0%) patients with left-sided lesions and 5 (10.0%) patients with right-sided lesions. In the XELOX group, there were 49 (98.0%) patients with left-sided lesions and 1 (2.0%) patient with a right-sided lesion. The majority of clinical characteristics were similar between the groups (all P>0.05), except that the proportion of patients with high TB was lower in the apatinib plus XELOX group than in the XELOX group (42.0 vs. 64.0%) (P=0.028). A more detailed description of the two groups is presented in Table I.

Table I. Clinical characteristics.
Table III. Radiological and pathological response.

Table IV. Adverse events.
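For readers who wish to reproduce this kind of survival comparison, the Kaplan-Meier and log-rank analyses reported above follow a standard workflow; a minimal sketch using the open-source lifelines package is shown below, with synthetic placeholder data since the patient-level data are not publicly available.

```python
import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Synthetic stand-in data: follow-up time (months), event indicator
# (1 = recurrence or death for DFS), and treatment group.
rng = np.random.default_rng(42)
n = 50
df = pd.DataFrame({
    "months": np.concatenate([rng.exponential(90, n), rng.exponential(60, n)]),
    "event": rng.integers(0, 2, 2 * n),
    "group": ["apatinib+XELOX"] * n + ["XELOX"] * n,
})

kmf = KaplanMeierFitter()
for name, sub in df.groupby("group"):
    kmf.fit(sub["months"], sub["event"], label=name)
    print(name, "median DFS estimate:", kmf.median_survival_time_)

a = df[df["group"] == "apatinib+XELOX"]
b = df[df["group"] == "XELOX"]
result = logrank_test(a["months"], b["months"], a["event"], b["event"])
print("log-rank P =", result.p_value)
```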
A Cyber-Physical Model of Internet of Humans for Productivity and Worker General Welfare

A Cyber-Physical Model of Internet of Humans (CPMIOH) is proposed in this paper. With the great advances in semiconductor technology, many wearable sensors are available nowadays. Their low cost, adequate accuracy, low energy consumption, and portability have made wearable devices popular in recent years. Most applications of wearable devices are in the areas of medical evaluation, clinical diagnosis, healthcare, games and entertainment, and personal fitness; applications in industry, however, remain limited. A CPMIOH is composed of wearable devices worn on the bodies of workers in the factory, wireless data collecting devices, cloud data storage servers, network communication protocols, and algorithms for human movement measurement and recognition. Under this architecture, human motion analysis is not limited to the laboratory but is realized in the practical work environment. In this CPMIOH model, workers can be evaluated through continuous analysis of movement measurement and classification, and the operations of a task can be reviewed repeatedly through the representation of a 3D human motion model in the computer. This model also provides a potential objective analysis to complement traditional subjective evaluation methods in the ergonomic risk evaluation of work-related musculoskeletal disorders. The biofeedback sensors likewise provide long-term data while workers are exposed to the real work environment; these data assist the objective analysis of mental load in real time, so that the work stress evaluation of a job is not limited to post hoc subjective methods. This application would not only help monitor the stress of workers exposed to high-tension work environments but would also provide an opportunity to improve the psychological health of workers through job redesign.

Introduction

In manufacturing practice, people care about the productivity and efficiency of a production line, the safety of operators during work, a good and worker-friendly environment, and the health of employees. The productivity and efficiency of an operation are evaluated using reports generated by computer systems such as enterprise resource planning (ERP) and manufacturing execution systems (MES). Classical management interventions are applied for safety protection on shop floors. In some specific industries, hazardous gas detectors and thermometers are installed in factories to monitor work environments. Ergonomics control and risk assessment are applied for the prevention of work-related musculoskeletal disorders. Methods for assessing human work fatigue and hazard risk can be divided into employee self-assessment, expert subjective observation, and objective risk factor measurement. The disadvantage of self-assessment is that follow-up data analyses require high technical skill, and it is also difficult to interpret the data correctly. Moreover, there can be a great difference between workers' perceived risk exposure and the actual severity of the hazardous condition, so a self-assessment method may fail to produce reliable data (Viikari-Juntura et al., 1996; Balogh et al., 2004). Expert subjective observation and evaluation are carried out by ergonomists, who assess a variety of pre-defined risk factors at different levels of exposure according to the actual conditions of the work.
The advantages of this kind of method are low cost and a wide range of application across varied work types; it is well suited to practical investigation and evaluation and, most importantly, it does not interrupt the work. The disadvantage is that different observers might judge the same risk exposure differently; moreover, this method is mainly suitable for the assessment of static postures and/or highly repetitive movements. The objective risk factor method uses a variety of instruments to measure the magnitudes of various biomedical signals or biomechanical variables of workers at work. The advantage of this method is that accurate, objective measurements can be obtained, but the disadvantages are that the equipment is expensive and that it tends to interfere with the operation under study (Tong et al., 2013). As a result, such methods are hard to use for risk assessments of real work in practice. With the progress of science and technology, the rapid advancement of semiconductor processes has made electronic products extremely small and light. In recent years, Internet of Things (IoT) technology has matured, and low-cost, highly reliable wearable devices are expected to become an economical and dependable new tool for ergonomic risk assessment in the workplace. This technology offers hope of making practical the objective risk factor methods that were previously difficult to implement in the workplace. The purpose of this study is to propose a Cyber-Physical Model of Internet of Humans (CPMIOH), in which electronic sensors worn on the human body send information to a computer server via wireless communication technology. The proposed structure of the CPMIOH is demonstrated in the next section.

CPMIOH Model

The analytic procedure of a CPMIOH model is illustrated in Figure 1. The CPMIOH server plays the integrating role in this model. Each worker carries sensors, and many data receivers are allocated throughout the shop floor. Workers wear inertial and/or biofeedback sensors while working; these sensors are tiny and handy and do not obstruct the handling of operations. The communication between a data receiver and a sensor could use any wireless protocol, such as Wi-Fi, Bluetooth Low Energy (BLE), ZigBee, or any other standardized protocol. When an ergonomics investigation is required, inertial and/or biomedical sensors are worn by the worker. The kinematics and biofeedback data are collected by a collecting device (which could be a mobile phone, or the data may be sent to the server directly). Pre-defined risk factors can then be calculated, and a report can be generated for the shop floor managers. The raw data are also stored in a database, which gives ergonomists the possibility of subsequent study. Simple awkward postures, static postures, highly repetitive movements, extreme heart rate, and highly energy-consuming tasks can be identified by the CPMIOH server. The server then sends a warning message to the worker to prevent injury; the manager receives the same message and can take action to improve the job design to protect the workers.

Figure 2. The data analysis procedure in a CPMIOH model. The collected data from sensors are processed by the CPMIOH server with specific algorithms, and the desired risk indices are generated for WMSDs, physical workload, and mental workload.
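As an illustration of the kind of server-side rule such algorithms might implement, the sketch below flags sustained extreme heart rate against an age-predicted maximum. The 220 minus age formula, the 85% threshold, and the 60-second window are common rules of thumb chosen by us for illustration, not parameters specified by this model.

```python
from collections import deque

def age_predicted_max_hr(age: int) -> int:
    """Rule-of-thumb estimate (an illustrative assumption, not from the paper)."""
    return 220 - age

class HeartRateMonitor:
    """Flags a worker whose heart rate stays above a fraction of the
    age-predicted maximum for a sustained window of samples."""

    def __init__(self, age, threshold=0.85, window_s=60, sample_period_s=5):
        self.limit = threshold * age_predicted_max_hr(age)
        self.window = deque(maxlen=window_s // sample_period_s)

    def update(self, hr_bpm):
        """Feed one sample; return True when a warning should be sent."""
        self.window.append(hr_bpm > self.limit)
        return len(self.window) == self.window.maxlen and all(self.window)

monitor = HeartRateMonitor(age=45)   # limit = 0.85 * 175 ≈ 149 bpm
samples = [120, 155, 160, 158, 162, 159, 161, 163, 160, 158, 162, 161, 164]
for hr in samples:
    if monitor.update(hr):
        print("warning: sustained extreme heart rate, notify worker and manager")
        break
```

The same windowed-threshold pattern generalizes to static-posture duration or repetition counts from the inertial sensors.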
Traditional subjective evaluation methods for work-related musculoskeletal disorders (WMSDs), such as the Baseline Risk Identification of Ergonomic Factors (BRIEF), Rapid Entire Body Assessment (REBA), Rapid Upper Limb Assessment (RULA), and Key Indicator Method (KIM), can thus be implemented from an objective perspective using the data generated by the inertial sensors. Physical workload can be evaluated simply from biofeedback data such as heart rate: when an extreme heart rate is detected during an operation, the CPMIOH server can judge whether the worker is at risk by comparison with the worker's maximal heart rate. Moreover, all of the data are retained so that ergonomists can perform advanced biomechanical and biofeedback analyses; an expert can study the work context and use the data to develop an appropriate biomechanical model for a follow-up study. Heart-rate-based analyses such as heart rate variability (HRV) also make it possible to assess the mental workload of a task. Figure 2 demonstrates this data analysis procedure in the CPMIOH model.

Challenges

The technologies serving productivity and environment monitoring are mature and can be implemented and integrated easily. This is not yet the case for ergonomic risk evaluation services. Many clinical applications of inertial sensors have been reported: the medical community has applied them to evaluate rehabilitation status and in health care, for example for fall detection (Alvarez et al., 2016), and many studies have used inertial sensors to measure the orientations or angles of body segments (Hu et al., 2014; Cockcroft et al., 2014; Lambrecht and Kirsch, 2014; Slajpah et al., 2014; Bergamini et al., 2015; Chen et al., 2015; Ruffaldi et al., 2015). However, no application in industrial practice has been reported. Body movements in industry are complex and rapid, which makes movement identification much harder than for the limited, expected movement patterns in most of the published literature. The need for expertise in advanced biomechanical and biofeedback analysis is another difficulty: the analysis requires substantial mathematical and programming knowledge, which may be a further challenge for the developers of a CPMIOH system.

Conclusion

In this study, we proposed an integrated model based on sensors worn on the person to form an Internet of Humans (IoH). In this way, ergonomic risk assessment services can be provided by an objective method on a real industrial shop floor, and risk assessment in industry is no longer limited to subjective methods. Successful implementation of this model offers hope of improving workers' welfare without losing productivity.
Cost analysis of a growth guidance system compared with traditional and magnetically controlled growing rods for early-onset scoliosis: a US-based integrated health care delivery system perspective

Purpose: Treating early-onset scoliosis (EOS) with traditional growing rods (TGR) is effective but requires periodic surgical lengthening, risking complications. Alternatives include magnetically controlled growing rods (MCGR), which lengthen noninvasively, and the growth guidance system (GGS), which obviates the need for active, distractive lengthenings. Previous studies have reported promising clinical effectiveness for GGS; however, the direct medical costs of GGS compared to TGR and MCGR have not yet been explored.

Methods: To estimate the cost of GGS compared with MCGR and TGR for EOS, an economic model was developed from the perspective of a US integrated health care delivery system. Using dual-rod constructs, the model estimated the cumulative costs associated with initial implantation, rod lengthenings (TGR, MCGR), revisions due to device failure, surgical-site infections, device exchange, and final spinal fusion over a 6-year episode of care. Model parameters were taken from peer-reviewed, published literature. Medicare payments were used as a proxy for provider costs. Costs (2016 US$) were discounted 3% annually.

Results: Over a 6-year episode of care, GGS was associated with fewer invasive surgeries per patient than TGR (GGS: 3.4; TGR: 14.4) and lower cumulative costs than MCGR and TGR, saving $25,226 vs TGR. Sensitivity analyses showed that results were sensitive to changes in construct costs, rod breakage rates, months between lengthenings, and TGR lengthening setting of care.

Conclusion: Within the model, GGS resulted in fewer invasive surgeries and deep surgical site infections than TGR, and lower cumulative costs per patient than both MCGR and TGR, over a 6-year episode of care. The analysis did not account for family disruption, pain, psychological distress, or compromised health-related quality of life associated with invasive TGR lengthenings, nor for potential patient anxiety surrounding the frequent MCGR lengthenings. Further analyses focusing strictly on current-generation technologies should be considered for future research.

Introduction

Early-onset scoliosis (EOS) is defined as a coronal curvature of the spine exceeding 10° occurring before the age of 10 years, and can be subcategorized as congenital, idiopathic, syndromic, or neuromuscular. [1][2][3][4] Left untreated, EOS may progress to produce disfigurement and deformity of the chest wall, leading to thoracic insufficiency syndrome, characterized by labored breathing, extreme breathlessness/fatigue, and reduced quality of life. 3 Treatment options for EOS include observation, casting, bracing, and surgical techniques. 3 Ideally, EOS treatment would permit correction (partial or complete) of the deformity, maintain that correction, and permit vertical growth of the spine and radial expansion of the rib cage. Fusion surgeries result in iatrogenic limitation of spinal growth with long-term impairment of pulmonary volumes, making these surgeries suboptimal in EOS. 5 Hence, "growth-friendly" surgeries such as the growth guidance system (GGS), magnetically controlled growing rods (MCGR), and traditional growing rods (TGR) have been developed to attempt to satisfy the goals of treatment. TGR are effective, yet require periodic invasive surgical lengthenings with risk of complications. 6
The surgeries inherent in TGR treatment are also associated with considerable socioeconomic, psychological, and health-related quality of life (HRQoL) disadvantages for both patients and their caregivers. 7 MCGR have also been shown to be clinically effective and can be lengthened noninvasively with a hand-held external remote controller, allowing for magnetically controlled continuous elongation (to a set tension) or incremental elongation (to a set distance). [7][8][9][10][11][12] Although device costs for MCGR are higher, several studies have shown that these may be offset by the reduced complications and costs afforded by noninvasive lengthenings. [13][14][15][16] Both TGR and MCGR are effective for preventing disease progression and facilitating correction of curves. [8][9][10]12,17,18

An alternative construct for EOS, GGS (SHILLA™ Growth Guidance System, Medtronic Spinal & Biologics, Memphis, TN, USA), was cleared for marketing in the United States in July 2014. 19 GGS is a growth-sparing technology that helps provide deformity correction while allowing continued skeletal growth at the proximal and distal construct ends, obviating the need for periodic lengthening procedures. GGS utilizes a unique non-locking set screw that allows the pedicle screws to slide along the rod axis during vertical growth. Once implanted, during a surgical procedure similar to TGR and MCGR, GGS has demonstrated clinical effectiveness (in both curve correction and increasing thoracic height) with 6-year follow-up. 20,21 By obviating the need for invasive lengthening procedures, GGS would be expected to reduce overall costs per patient in a manner similar to MCGR; however, no economic study of GGS has been published to date. The objective of this research was to estimate, over a 6-year episode of care, the cumulative cost of treating EOS with GGS compared with MCGR and TGR from the US integrated health care delivery system (IDS) perspective.

Similar to the cost analysis by Polly et al 14 comparing MCGR with TGR, the present economic model was developed from the IDS perspective. For each treatment, using dual-rod constructs, the model assessed the 6-year cumulative costs associated with initial implantation, rod lengthenings (TGR every 6 months; MCGR every 3 months), revisions due to device failure, surgical site infections (SSIs), device exchange (at 3.8 years), and final spinal fusion. Costs are presented in 2016 US$ and, in line with the recommendation of the Congressional Budget Office, were discounted at an annual rate of 3.0%. 22 An institutional review board (IRB) exemption was granted given that model parameters were sourced from peer-reviewed, published literature and the research did not involve human subjects.

Model overview

For the present study, the cost analysis by Polly et al 14 was first reconstructed (from the publicly available paper and technical report) and then updated to reflect the most recent published literature and to include GGS. As such, the model assumptions and parameter values for TGR and MCGR are largely based on Polly et al, with the exceptions of updating the construct type (to 100% dual-rod, to reflect current practice), device failure rates, deep SSI rates, time under anesthesia, and reimbursement codes and costs. For completeness, we have summarized the assumptions and data sources in the following section. Table 1 details the model framework and clinical parameters used, while medical resources are detailed in Table 2.
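To make the episode-of-care accounting concrete, here is a minimal sketch of how such a model can accumulate discounted costs from the event schedule just described (implantation, scheduled lengthenings, an exchange at 3.8 years, final fusion). All dollar figures are illustrative placeholders, not the Medicare-derived inputs of Tables 1-3, and revision and SSI events are omitted for brevity.

```python
# Minimal sketch of the 6-year episode-of-care accounting described above.
# All dollar figures are illustrative placeholders, NOT the Medicare-derived
# inputs of Tables 1-3; revisions and SSI events are omitted for brevity.

DISCOUNT = 0.03  # 3% annual discount rate, as in the model

def pv(cost, years):
    """Present value of a cost incurred `years` after initial implantation."""
    return cost / (1.0 + DISCOUNT) ** years

def episode_cost(implant, lengthening, months_between,
                 exchange, fusion, horizon_years=6.0, exchange_at=3.8):
    """Cumulative discounted cost over the episode of care."""
    total = pv(implant, 0.0) + pv(exchange, exchange_at) + pv(fusion, horizon_years)
    if months_between:                       # GGS: no active lengthenings
        t = months_between / 12.0
        while t < horizon_years:
            total += pv(lengthening, t)
            t += months_between / 12.0
    return total

# Hypothetical inputs: TGR lengthened surgically every 6 months, MCGR
# lengthened noninvasively every 3 months, GGS with no lengthenings.
tgr = episode_cost(implant=50_000, lengthening=12_000, months_between=6,
                   exchange=50_000, fusion=60_000)
mcgr = episode_cost(implant=90_000, lengthening=500, months_between=3,
                    exchange=90_000, fusion=60_000)
ggs = episode_cost(implant=70_000, lengthening=0, months_between=None,
                   exchange=70_000, fusion=60_000)
print(f"TGR {tgr:,.0f}   MCGR {mcgr:,.0f}   GGS {ggs:,.0f}")
# With these placeholders GGS is cheapest, mirroring the paper's ordering.
```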
Importantly, the TGR device failure rates (rod breakage rates) were derived from an economic evaluation commissioned by the National Institute for Health and Care Excellence, 18,23 while the MCGR and GGS rod breakage rates were obtained from the most recent comparable literature available from multicenter studies. 11,20 The device failure rates for TGR and MCGR were corrected using the relative risk of rod breakage for single- vs dual-rod constructs, to estimate what the rates would be if every construct were dual-rod (constructs were 64% and 85% dual-rod [P Hosseini and J Pawelek, San Diego Spine Foundation, personal communication, April 2017] in the sources used for TGR and MCGR, respectively). 11,18,23,24 The source used for the GGS rod breakage rate already reflected a 100% dual-rod construct. 20

The model assumes that GGS, MCGR, and TGR are of equal clinical effectiveness and that medical resource use for initial implantations, revisions, and exchanges with GGS, MCGR, and TGR is similar (with the exception of anesthesia time and device cost, where appropriate). The model also assumes that one radiograph is required per insertion, health care professional (HCP) visit (GGS), lengthening procedure (MCGR and TGR), exchange, revision, deep SSI, and final fusion, and that treatment of deep SSIs requires intravenous antibiotics and a complete replacement of implants, while treatment of superficial infections requires oral antibiotics. As the cost of oral antibiotics would be incurred by the patient (rather than the provider), this has not been included in the analysis; there is also no consideration of pediatric mortality. Using the average observed spinal growth in a child with EOS aged 6 years, the model estimates that all patients will require one surgery to exchange the device at 3.8 years. 31,32

The components requiring replacement during a partial revision procedure (Table 2) were based on the TGR study by Bess et al and expert clinical advice. 14,24 In the absence of such data for GGS and MCGR, these percentages were assumed to be the same for GGS, MCGR, and TGR. Hence, during a partial removal for GGS, MCGR, or TGR, pedicle screws/hooks were assumed to require replacement in 95% of surgeries, rod set screws in 61% of surgeries, and all other components (including rods and connectors) in 100% of surgeries. MCGR rod costs were not included for revisions due to MCGR failure within 1 year following an MCGR implantation or exchange (in the unlikely event of a manufacturing defect); all other costs for these MCGR revisions were included (for example, cross link, hospital facility costs, and professional fees).

Medicare payments were used as a proxy for provider costs (a widely accepted methodology for cost analyses). 33 As such, hospital inpatient facility costs were based on Medicare diagnosis-related group (DRG) data, physician professional fees were based on current procedural terminology (CPT) data, and hospital outpatient facility costs were based on ambulatory payment classification (APC) data. As hospital inpatient DRG payments are bundled to include the TGR device cost, inpatient procedures for GGS and MCGR had the TGR device costs subtracted and the GGS or MCGR device costs added, in order to account for the differences in device costs.
Table 3 details the total costs used for these procedures in the model, while the Supplementary materials detail the component costs, including all CPT, APC, and DRG codes and costs, as well as anesthesia, intraoperative neurophysiological monitoring, and radiograph codes and costs. Sensitivity and scenario analyses were conducted to assess whether the cost analysis results were robust to modifications in the values of important parameters such as device failure rates, time between lengthenings, and construct costs.

Results

Figure 1 illustrates the cumulative costs for treatment of EOS with GGS compared with MCGR and TGR, showing the higher cost of initial insertion and exchange (at 3.8 years) for GGS being offset by the cost of frequent TGR surgical lengthenings and associated deep SSIs. From the IDS perspective, the 6-year cumulative cost for GGS was lower than for TGR, saving US$25,226.

Sensitivity analysis

One-way sensitivity analyses indicated that results were sensitive to changes in construct costs, rod breakage rates, months between lengthenings (TGR and MCGR), and TGR lengthening setting of care (Figures 2 and 3). Only one parameter in the sensitivity analysis (months between lengthenings for TGR) produced a positive budget impact for GGS, suggesting that GGS is likely to be cost saving over a 6-year episode of care from the IDS perspective. Note that GGS becomes cost neutral with TGR if TGR lengthenings occur approximately every 9 months. Using clinically realistic scenarios, two-way sensitivity analyses of particularly impactful and less precisely known model parameters, specifically 1) GGS with TGR or MCGR device failure rates, and 2) months between GGS HCP visits with months between TGR or MCGR lengthenings, demonstrated that the cumulative costs varied relatively little, suggesting that the economic model is robust to plausible parameter values (Tables 1 and 2 show ranges). Only when TGR lengthenings are performed at 9-month or greater intervals is there a positive budget impact, again suggesting that GGS is likely to be cost saving over the 6-year episode of care.

Scenario analyses (that is, multi-way sensitivity analyses) were also run to further assess the device failure rate (rod breakage rate). First, the rates for all three technologies were set to 0.5493% per month, to reflect the adjusted dual-rod rate for TGR from the National Institute for Health and Care Excellence (NICE) external assessment report and the longest follow-up for the greatest number of patients. 18,23 This had a minimal impact on costs, reducing the 6-year cumulative costs for GGS and MCGR by less than 1%. The second scenario analysis set the values for TGR and MCGR to the lowest found in the published literature and the GGS value to the highest (GGS: 0.6053% [represents dual-rod construct]; MCGR: …).

Discussion

Modeling is a simplified representation of the real world in an analytical framework to help decision-makers (patients, providers, and payers) compare alternative options in terms of their clinical benefit and cost. The present study addresses the growing need to demonstrate how medical technologies fit into the emerging value-based paradigm. To this end, a model was developed to evaluate the clinical-economic value of GGS compared to TGR and MCGR.
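Reusing the episode_cost helper from the sketch above, the one-way sweep over the TGR lengthening interval can be illustrated as follows. With the paper's real inputs, cost neutrality with GGS is reported at roughly 9-month intervals; with the placeholder numbers here the crossover lands elsewhere, but the mechanics are the same.

```python
# One-way sweep over the TGR lengthening interval, reusing episode_cost()
# from the sketch above. Inputs remain illustrative placeholders, so the
# exact crossover differs from the paper's reported ~9-month neutrality.

ggs = episode_cost(implant=70_000, lengthening=0, months_between=None,
                   exchange=70_000, fusion=60_000)
for months in range(3, 25, 3):
    tgr = episode_cost(implant=50_000, lengthening=12_000,
                       months_between=months, exchange=50_000, fusion=60_000)
    cheaper = "GGS" if ggs < tgr else "TGR"
    print(f"TGR lengthening every {months:2d} months: "
          f"TGR {tgr:>9,.0f} vs GGS {ggs:>9,.0f} -> {cheaper} cheaper")
```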
The economic model presented in this study demonstrates that the cost impact of GGS due to its increased construct cost (vs TGR) and slightly higher revision rate due to device failure (vs TGR and MCGR) is offset by obviating the need for repeated surgeries to lengthen TGR (with their associated deep SSIs). The reduction in costs was mainly driven by the absence of the inpatient stay, anesthesia, and intraoperative neurophysiological monitoring associated with invasive TGR lengthenings. As seen in Figure 1, GGS becomes cost saving in the second year following implantation and remains so throughout the remainder of the 6-year episode of care. Compared to MCGR, GGS had a similar number of device failures (rod fractures) and deep SSIs; however, the reduced construct cost for GGS drove cost savings at implantation and exchange, as well as after a device failure or deep SSI.

Previous economic analyses showed cost savings or cost neutrality for MCGR vs TGR, which could reflect a shorter time horizon without exchange, 13,16 or the less expensive single-rod construct used in 15% of patients. 14 We believe that our approach is most reflective of current practice with the dual-rod construct and represents a realistic 6-year time horizon, considering the average length of treatment. As a cost analysis, rather than a cost-effectiveness analysis, this model did not account for family disruption, pain, psychological distress, the implications of multiple anesthetics, or compromised HRQoL associated with invasive TGR lengthenings, nor for patient anxiety surrounding the frequent MCGR lengthenings. Additionally, recent literature has reported that an increased number (eight or more) of invasive surgeries in patients with TGR is significantly correlated with an even higher rate of complications. 27 There could therefore be substantial additional direct and indirect cost savings associated with the use of GGS compared to TGR. Further, the model does not include instances where the MCGR rod fails to lengthen (as reported by Choi et al in two of 54 patients), possibly underestimating the costs of revision surgery; current recommendations are to reattempt lengthening at a later date and, if that fails, to replace the device. 11,12 Lastly, due to conflicting views on the necessity of revision for hook dislodgement and screw pull-out complications, these have not been included in the model. While revision costs may therefore be slightly underestimated, they currently account for only 9.1%, 7.5%, and 2.8% of total costs for GGS, MCGR, and TGR, respectively, and slight variations are unlikely to affect the budget impact trend of the model.

Also noteworthy, the Centers for Medicare and Medicaid Services (CMS) approved MCGR for a new technology add-on payment (NTAP) for fiscal year (FY) 2017 in the amount of US$15,750, whereby CMS provides incremental payment (in addition to the DRG payment) for technologies that qualify for NTAP. 35 The NTAP payment mechanism is based on the cost to hospitals of the new technology and lasts for 2-3 years, until data are available to reflect the cost of the technology in the DRG weights through recalibration. However, NTAP applies only to Medicare patients, of whom fewer than 2,000 are under 18 years of age, meaning that it is unlikely that a Medicare patient would be diagnosed with EOS, a disease that affects fewer than one in 10,000 people. 36,37 For this reason, and because CMS is proposing to discontinue NTAP for MCGR for FY 2018, we did not account for the NTAP in this cost analysis. 35
Limitations

While the model parameter values were based on the most recent published literature, these reports nevertheless reflect various rod materials and diameters. This is particularly relevant for TGR, for which 3.5, 4.5, and 5.5 mm rods of steel and titanium, in both single- and dual-rod constructs, were reported in the NICE external assessment report. 18,23 This limitation was addressed by adjusting the TGR rod fracture rate using the relative risk of rod fracture for single- vs dual-rod constructs reported by Bess et al. 24 Further, the data used herein for MCGR represented a mixture of first- and second-generation devices, whereby the second generation incorporates structural and mechanism improvements intended to reduce device failures. These MCGR data also had a limited length of follow-up (mean of 19.4 months) and a slightly smaller population (54 patients) than that reported for MCGR in the NICE external assessment report (80 patients across eight studies), but they were taken from a multicenter study of five centers, rather than a collection of smaller studies, and had a higher proportion of dual-rod constructs, better reflecting current practice. 11,18,23 The relatively short follow-up compared to GGS (6 years) and TGR (4 years) may have slightly inflated the MCGR device failure rate. Similarly, compared to the original GGS technique that used 3.5 mm rods through 2008, the current GGS technique uses larger rods, deeper screw placement, c-clamps to prevent migration in the event of rod breakage, and O-arm or other image guidance. The rod breakage rate for GGS came from a relatively small sample (18 patients); however, these data were chosen because they are the most reflective of current practice, and patients were followed for six years through definitive treatment. 20 While the GGS device failure rate represents a key model parameter, to which the cumulative costs are sensitive, it is important to note that the overall trend of the results (a negative budget impact for GGS) does not change in the scenario and sensitivity analyses when these rates are varied across a clinically relevant set of values.

Conclusion

From the perspective of the US IDS, GGS can be cost saving over the 6-year episode of care by obviating the need for repeated and costly invasive TGR surgical lengthenings and their associated complications, particularly deep SSIs. Compared with MCGR, GGS can be cost saving due to a comparable rod fracture and deep SSI rate and a substantially reduced construct cost. Further analyses focusing strictly on current-generation technologies and accounting for the HRQoL of children and their caregivers should be considered for future research.
Nonclassicality by Local Gaussian Unitary Operations for Gaussian States

A measure of nonclassicality $\mathcal{N}$ in terms of local Gaussian unitary operations for bipartite Gaussian states is introduced. $\mathcal{N}$ is a faithful quantum correlation measure for Gaussian states, as product states have no such correlation and every non-product Gaussian state contains it. For any bipartite Gaussian state $\rho_{AB}$, we always have $0 \le \mathcal{N}(\rho_{AB}) < 1$, where the upper bound 1 is sharp. An explicit formula of $\mathcal{N}$ for (1+1)-mode Gaussian states and an estimate of $\mathcal{N}$ for (n+m)-mode Gaussian states are presented. A criterion of entanglement is established in terms of this correlation. The quantum correlation $\mathcal{N}$ is also compared with entanglement, Gaussian discord, and Gaussian geometric discord.

Introduction

The presence of correlations in bipartite quantum systems is one of the main features of quantum mechanics. The most important such correlation is entanglement [1]. Recently, however, much attention has been devoted to the study and characterization of quantum correlations that go beyond the paradigm of entanglement, being necessary but not sufficient for its presence. Non-entangled quantum correlations also play important roles in various quantum communication and quantum computing tasks [2][3][4][5]. Over the last two decades, various methods have been proposed to quantify quantum correlations, such as quantum discord (QD) [6,7], geometric quantum discord [8,9], measurement-induced nonlocality (MIN) [10] and measurement-induced disturbance (MID) [11] for discrete-variable systems. It is also important to develop new, simple criteria for witnessing correlations beyond entanglement for continuous-variable systems. In this direction, Giorda, Paris [12] and Adesso, Datta [13] independently introduced the definition of Gaussian QD for Gaussian states and discussed its properties. Adesso and Girolami [14] proposed the concept of Gaussian geometric discord (GD) for Gaussian states. Measurement-induced disturbance of Gaussian states was studied in [15], while MIN for Gaussian states was discussed in [16]. For other related results, see [17,18] and the references therein. Note that not every quantum correlation defined for discrete-variable systems has a Gaussian analogue for continuous-variable systems [16]. On the other hand, the values of Gaussian QD and Gaussian GD are very difficult to compute, and the known formulas cover only some (1+1)-mode Gaussian states, so little information is revealed by Gaussian QD and GD. The purpose of this paper is to introduce a new quantum correlation measure by local Gaussian unitary operations. Recall that, for a bipartite state $\rho$ of a finite-dimensional system, the quantity $d_{U_A}(\rho) = \|\rho - (U_A \otimes I)\rho(U_A \otimes I)^{\dagger}\|_F$ was introduced in [19], where $\|A\|_F = \sqrt{\mathrm{Tr}(A^{\dagger}A)}$ denotes the Frobenius norm and $U_A$ is any unitary operator satisfying $[\rho_A, U_A] = 0$. This quantity demands that the reduced density matrix of the subsystem A be invariant under the unitary transformation. However, the global density matrix may change under such a local unitary operation, and therefore $d_{U_A}(\rho)$ may be non-zero for some $U_A$. Datta, Gharibian, et al. then discussed in [20,21], respectively, the properties of $d_{U_A}(\rho)$ and revealed that $\max_{U_A} d_{U_A}(\rho)$ can be used to investigate nonclassical effects. Motivated by the works in [19][20][21], we consider an analogue for continuous-variable systems. In the present paper, we introduce a quantity $\mathcal{N}$ in terms of local Gaussian unitary operations for (n+m)-mode quantum states in Gaussian systems.
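Before the Gaussian analogue is developed, here is a small numerical illustration of the finite-dimensional quantity $d_{U_A}$ recalled above (our own illustration, not from the paper): a two-qubit Werner state has reduced state $\rho_A = I/2$, which commutes with every unitary on A, yet a local phase unitary visibly changes the global state.

```python
# A small numerical illustration (an assumption of this write-up, not from
# the paper) of d_{U_A}(rho) on a two-qubit Werner state, whose reduced
# state rho_A = I/2 commutes with every single-qubit unitary.
import numpy as np

def d_UA(rho, U_A):
    """Frobenius distance between rho and its image under U_A on subsystem A."""
    U = np.kron(U_A, np.eye(2))
    return np.linalg.norm(rho - U @ rho @ U.conj().T, ord="fro")

phi = np.zeros(4, dtype=complex)
phi[0] = phi[3] = 1 / np.sqrt(2)             # |phi+> = (|00> + |11>)/sqrt(2)
p = 0.7
rho = p * np.outer(phi, phi.conj()) + (1 - p) * np.eye(4) / 4

theta = np.pi / 2
U_A = np.diag([1.0, np.exp(1j * theta)])     # phase unitary; [rho_A, U_A] = 0
print(d_UA(rho, U_A))                        # > 0: the global state changes
```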
Different from the finite-dimensional case, besides the local Gaussian unitary invariance property for quantum states, we also show that $\mathcal{N}(\rho_{AB}) = 0$ if and only if $\rho_{AB}$ is a Gaussian product state. This reveals that the quantity $\mathcal{N}$ is a faithful measure of nonclassicality for Gaussian states: a state has this nonclassicality if and only if it is not a product state. In addition, we show that $0 \le \mathcal{N}(\rho_{AB}) < 1$ for each (n+m)-mode Gaussian state $\rho_{AB}$ and that the upper bound 1 is sharp. An estimate of $\mathcal{N}$ for any (n+m)-mode Gaussian state is provided and an explicit formula of $\mathcal{N}$ for any (1+1)-mode Gaussian state is obtained. As an application, a criterion of entanglement for (1+1)-mode Gaussian states is established in terms of $\mathcal{N}$ by numerical approaches. Finally, we compare $\mathcal{N}$ with Gaussian QD and Gaussian GD to illustrate that it is a better measure of this nonclassicality.

Gaussian States and Gaussian Unitary Operations

Recall that, for an arbitrary state $\rho$ in an n-mode continuous-variable system, its characteristic function $\chi_\rho$ is defined as
$$\chi_\rho(z) = \mathrm{Tr}(\rho W(z)),$$
where $z = (x_1, y_1, \cdots, x_n, y_n)^T \in \mathbb{R}^{2n}$, with $\mathbb{R}$ the field of real numbers and $(\cdot)^T$ the transposition, and $W(z) = \exp(iR^T z)$ is the Weyl operator. Let $R = (R_1, R_2, \cdots, R_{2n})^T = (\hat Q_1, \hat P_1, \cdots, \hat Q_n, \hat P_n)^T$. As usual, $\hat Q_i$ and $\hat P_i$ stand, respectively, for the position and momentum operators of each mode $i \in \{1, 2, \cdots, n\}$. They satisfy the Canonical Commutation Relation (CCR) in natural units ($\hbar = 1$), $[R_j, R_k] = i\Delta_{jk}$, with $\Delta$ the matrix defined below.

Gaussian states: $\rho$ is called a Gaussian state if $\chi_\rho(z)$ is of the form
$$\chi_\rho(z) = \exp\!\left(-\tfrac{1}{4} z^T \Gamma z + i\, d^T z\right),$$
where $d = (\langle R_1\rangle, \langle R_2\rangle, \ldots, \langle R_{2n}\rangle)^T = (\mathrm{Tr}(\rho R_1), \mathrm{Tr}(\rho R_2), \ldots, \mathrm{Tr}(\rho R_{2n}))^T \in \mathbb{R}^{2n}$ is called the mean or the displacement vector of $\rho$, and $\Gamma = (\gamma_{jk}) \in M_{2n}(\mathbb{R})$, with $\gamma_{jk} = \mathrm{Tr}[\rho(\Delta R_j \Delta R_k + \Delta R_k \Delta R_j)]$ and $\Delta R_j = R_j - \langle R_j\rangle$, is its covariance matrix (CM) ([22][23][24]). Here, $M_{l\times k}(\mathbb{R})$ stands for the set of all l-by-k real matrices and, when $l = k$, we write $M_{l\times k}(\mathbb{R})$ as $M_l(\mathbb{R})$. Note that the CM $\Gamma$ of a state is symmetric and must satisfy the uncertainty principle $\Gamma + i\Delta \ge 0$, where
$$\Delta = \bigoplus_{j=1}^{n} \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}.$$
From the diagonal terms of the above inequality, one can easily derive the usual Heisenberg uncertainty relation for position and momentum, $V(\hat Q_i)V(\hat P_i) \ge 1$, with $V(R_i) = \langle(\Delta R_i)^2\rangle$ [25].

Now assume that $\rho_{AB}$ is any (n+m)-mode Gaussian state. Then the CM $\Gamma$ of $\rho_{AB}$ can be written as
$$\Gamma = \begin{pmatrix} A & C \\ C^T & B \end{pmatrix}, \qquad (1)$$
where $A \in M_{2n}(\mathbb{R})$, $B \in M_{2m}(\mathbb{R})$ and $C \in M_{2n\times 2m}(\mathbb{R})$. In particular, if $n = m = 1$, by means of local Gaussian unitary (symplectic at the CM level) operations, $\Gamma$ has the standard form
$$\Gamma_0 = \begin{pmatrix} A_0 & C_0 \\ C_0^T & B_0 \end{pmatrix}, \quad A_0 = \begin{pmatrix} a & 0 \\ 0 & a \end{pmatrix}, \; B_0 = \begin{pmatrix} b & 0 \\ 0 & b \end{pmatrix}, \; C_0 = \begin{pmatrix} c & 0 \\ 0 & d \end{pmatrix} \qquad (2)$$
([26][27][28][29]).

Gaussian unitary operations. Consider an n-mode continuous-variable system with $R = (\hat Q_1, \hat P_1, \cdots, \hat Q_n, \hat P_n)^T$. For a unitary operator U, the unitary operation $\rho \mapsto U\rho U^{\dagger}$ is said to be Gaussian if its output is a Gaussian state whenever its input is a Gaussian state; such a U is called a Gaussian unitary operator. It is known that a unitary operator U is Gaussian if and only if
$$U^{\dagger} R\, U = S R + m$$
for some vector $m \in \mathbb{R}^{2n}$ and some $S \in \mathrm{Sp}(2n, \mathbb{R})$, the symplectic group of all $2n \times 2n$ real matrices S that satisfy $S\Delta S^T = \Delta$. Thus, every Gaussian unitary operator U is determined by some affine symplectic map $(S, m)$ acting on the phase space, and can be denoted by $U = U_{S,m}$ ([23,24]).

The following well-known facts about Gaussian states and Gaussian unitary operations are useful for our purpose.

Lemma 1 ([23]). For any (n+m)-mode Gaussian state $\rho_{AB}$, write its CM $\Gamma$ as in Equation (1). Then the CMs of the reduced states $\rho_A = \mathrm{Tr}_B\,\rho_{AB}$ and $\rho_B = \mathrm{Tr}_A\,\rho_{AB}$ are the matrices A and B, respectively.

Lemma 2. An (n+m)-mode Gaussian state $\rho_{AB}$ is a product state, $\rho_{AB} = \sigma_A \otimes \sigma_B$, if and only if $\Gamma = \Gamma_A \oplus \Gamma_B$, where $\Gamma$, $\Gamma_A$ and $\Gamma_B$ are the CMs of $\rho_{AB}$, $\sigma_A$ and $\sigma_B$, respectively.

Lemma 3 ([23,24]).
Assume that $\rho$ is any n-mode Gaussian state with CM $\Gamma$ and displacement vector $d$, and that $U_{S,m}$ is a Gaussian unitary operator. Then the characteristic function of the Gaussian state $\sigma = U_{S,m}\rho U_{S,m}^{\dagger}$ is of the form
$$\chi_\sigma(z) = \exp\!\left(-\tfrac{1}{4} z^T (S\Gamma S^T) z + i\,(Sd + m)^T z\right).$$

Quantum Correlation Introduced by Gaussian Unitary Operations

Now we introduce a quantum correlation $\mathcal{N}$ by local Gaussian unitary operations in the continuous-variable system.

Definition 1. For any (n+m)-mode quantum state $\rho_{AB} \in S(H_A \otimes H_B)$,
$$\mathcal{N}(\rho_{AB}) = \sup_U \frac{\|\rho_{AB} - (I \otimes U)\rho_{AB}(I \otimes U)^{\dagger}\|_F^2}{\|\rho_{AB} + (I \otimes U)\rho_{AB}(I \otimes U)^{\dagger}\|_F^2},$$
where the supremum is taken over all Gaussian unitary operators $U \in B(H_B)$ satisfying $U\rho_B U^{\dagger} = \rho_B$, and $\rho_B = \mathrm{Tr}_A(\rho_{AB})$ is the reduced state. Here, $B(H_B)$ is the set of all bounded linear operators acting on $H_B$. Observe that $\mathcal{N}(\rho_{AB}) = 0$ holds for every product state; thus product states contain no such correlation.

Remark 1. For any Gaussian state $\rho_{AB}$, there exist many Gaussian unitaries U such that $U\rho_B U^{\dagger} = \rho_B$; this ensures that the quantity $\mathcal{N}(\rho_{AB})$ makes sense for every Gaussian state $\rho_{AB}$. To see this, we need the Williamson Theorem ([31]), which states that, for any n-mode Gaussian state $\rho$ with CM $\Gamma_\rho$, there exists a symplectic matrix S such that
$$S\Gamma_\rho S^T = \bigoplus_{i=1}^{n} \nu_i I_2,$$
where $\bigoplus_{i=1}^{n}\nu_i I_2$ and the $\nu_i$'s are called, respectively, the Williamson form and the symplectic eigenvalues of $\Gamma_\rho$. By the Williamson Theorem, there exists a Gaussian unitary operator bringing $\rho_B$ to its Williamson form. For any $\theta_i \in [0, \frac{\pi}{2}]$, $i = 1, 2, \cdots, m$, let
$$S_\theta = \bigoplus_{i=1}^{m} \begin{pmatrix} \cos\theta_i & \sin\theta_i \\ -\sin\theta_i & \cos\theta_i \end{pmatrix}.$$
Then $S_\theta$ is a symplectic matrix that leaves the Williamson form invariant, and the corresponding Gaussian unitary preserves the reduced state.

We first prove that $\mathcal{N}$ is locally Gaussian unitary invariant for all quantum states.

Proposition 1 (Local Gaussian unitary invariance). If $\rho_{AB} \in S(H_A \otimes H_B)$ and $W = U_A \otimes U_B$ for any Gaussian unitary operators $U_A \in B(H_A)$ and $U_B \in B(H_B)$, then $\mathcal{N}(W\rho_{AB}W^{\dagger}) = \mathcal{N}(\rho_{AB})$. The proof rests on the observation that, for any Gaussian unitary U commuting with $\rho_B$, the operator $U_B U U_B^{\dagger}$ is again a Gaussian unitary commuting with $U_B\rho_B U_B^{\dagger}$, so the two suprema in Definition 1 run over the same set of states.

The next theorem shows that $\mathcal{N}(\rho_{AB})$ is a faithful nonclassicality measure for Gaussian states.

Theorem 1. For any (n+m)-mode Gaussian state $\rho_{AB}$, $\mathcal{N}(\rho_{AB}) = 0$ if and only if $\rho_{AB}$ is a product state.

Proof of Theorem 1. By Definition 1, the "if" part is apparent. Let us check the "only if" part. Since the mean of any Gaussian state can be transformed to zero by some local Gaussian unitary operation, by Proposition 1 it suffices to consider Gaussian states whose means are zero. In the sequel, assume that $\rho_{AB}$ is an (n+m)-mode Gaussian state with zero mean vector and CM $\Gamma = \begin{pmatrix} A & C \\ C^T & B \end{pmatrix}$ as in Equation (1), such that $\mathcal{N}(\rho_{AB}) = 0$. By Lemma 1, the CM of $\rho_B$ is B. According to the Williamson Theorem, there exists a symplectic matrix $S_0$ such that $S_0 B S_0^T = \bigoplus_{i=1}^{m}\nu_i I_2$. Let $\sigma_{AB} = (I \otimes U_{S_0})\rho_{AB}(I \otimes U_{S_0})^{\dagger}$. It follows from Proposition 1 that $\mathcal{N}(\sigma_{AB}) = \mathcal{N}(\rho_{AB}) = 0$. Obviously, $\sigma_{AB}$ has a CM of the form
$$\begin{pmatrix} A & \tilde C \\ \tilde C^T & \bigoplus_{i=1}^{m}\nu_i I_2 \end{pmatrix}, \quad \tilde C = C S_0^T.$$
For any $\theta_i \in [0, \frac{\pi}{2}]$, $i = 1, 2, \cdots, m$, let $S_\theta$ be the symplectic matrix as in Remark 1. Then $(I \otimes U_{S_\theta})\sigma_{AB}(I \otimes U_{S_\theta})^{\dagger} = \sigma_{AB}$, and hence the two states must have the same CMs; that is, $\tilde C = \tilde C S_\theta^T$. Note that $I - S_\theta^T$ is an invertible matrix if we take $\theta_i \in (0, \frac{\pi}{2})$ for each i. Then it follows from $\tilde C = \tilde C S_\theta^T$ that $\tilde C = 0$. Thus, $\sigma_{AB}$ is a product state by Lemma 2 and, consequently, $\rho_{AB}$ is also a product state.

We can give an analytic formula of $\mathcal{N}(\rho_{AB})$ for any (1+1)-mode Gaussian state $\rho_{AB}$; since $\mathcal{N}$ is locally Gaussian unitary invariant, it is enough to assume that the mean vector of $\rho_{AB}$ is zero and the CM is of the standard form (2). The resulting expression is stated as Theorem 2.

Proof of Theorem 2. By Proposition 1, we may assume that the mean vector of $\rho_{AB}$ is zero. Let $U_{S,m}$ be a Gaussian unitary operator such that $U_{S,m}\rho_B U_{S,m}^{\dagger} = \rho_B$. Then S and m meet the conditions $SB_0S^T = B_0$ and $Sd_B + m = d_B = 0$. It follows that $m = 0$, so we may denote $U_{S,m}$ by $U_S$.

For the general (n+m)-mode case, it is difficult to give an analytic formula of $\mathcal{N}(\rho_{AB})$ for all (n+m)-mode Gaussian states $\rho_{AB}$. However, we are able to give an estimate of $\mathcal{N}(\rho_{AB})$.

Theorem 3. For any (n+m)-mode Gaussian state $\rho_{AB}$ with CM $\Gamma = \begin{pmatrix} A & C \\ C^T & B \end{pmatrix}$ as in Equation (1), $\mathcal{N}(\rho_{AB})$ admits an upper bound, stated as inequality (5), in terms of $\det A$, $\det B$ and $\det\Gamma$; in particular, the bound simplifies when $\rho_{AB}$ is pure. Moreover, the upper bound 1 in inequality (5) is sharp.

Proof of Theorem 3.
By Proposition 1, without loss of generality, we may assume that the mean of $\rho_{AB}$ is zero. Note that, for any n-mode Gaussian states $\rho, \sigma$ with CMs $V_\rho, V_\sigma$ and means $d_\rho, d_\sigma$, respectively, it is shown in [32] that
$$\mathrm{Tr}(\rho\sigma) = \frac{2^n}{\sqrt{\det(V_\rho + V_\sigma)}}\exp\!\left(-(d_\rho - d_\sigma)^T (V_\rho + V_\sigma)^{-1}(d_\rho - d_\sigma)\right).$$
Hence, applying this to the states in Definition 1 and using Fischer's inequality $\det\Gamma \le \det A \det B$ (p. 506, [33]), we obtain the bound of inequality (5). If $\rho_{AB}$ is a pure state, the bound simplifies accordingly. Notice that, by Equation (6), the bound is strictly smaller than 1 since $\det A > 0$ and $\det B > 0$; that is, inequality (5) is true.

To see that the upper bound 1 is sharp, consider the two-mode squeezed vacuum state $\rho(r) = S(r)|00\rangle\langle 00|S(r)^{\dagger}$, where $S(r) = \exp\{r(\hat a_1^{\dagger}\hat a_2^{\dagger} - \hat a_1\hat a_2)\}$ is the two-mode squeezing operator with squeezing number $r \ge 0$ and $|00\rangle$ is the vacuum state ([24]). The CM of $\rho(r)$ is
$$\Gamma(r) = \begin{pmatrix} \cosh(2r)\, I_2 & \sinh(2r)\, Z \\ \sinh(2r)\, Z & \cosh(2r)\, I_2\end{pmatrix}, \quad Z = \begin{pmatrix}1 & 0\\ 0 & -1\end{pmatrix}.$$
By Theorem 2, it is easily calculated that
$$\mathcal{N}(\rho(r)) = 1 - \frac{8}{6 + \exp(-4r) + \exp(4r)}.$$

Comparison with Other Quantum Correlations

Entanglement is one of the most important quantum correlations, being central to most quantum information protocols [1]. However, it is an extremely difficult task to verify whether a given quantum state is entangled or not. Recall that a quantum state $\rho_{AB} \in S(H_A \otimes H_B)$ is said to be separable if it belongs to the closed convex hull of the set of all product states $\rho_A \otimes \rho_B \in S(H_A \otimes H_B)$. Note that a state $\rho_{AB}$ is separable if and only if it admits a representation $\rho_{AB} = \int_X \rho_A(x)\otimes\rho_B(x)\,\pi(dx)$, where $\pi(dx)$ is a Borel probability measure and $\rho_{A(B)}(x)$ is a Borel $S(H_{A(B)})$-valued function on some complete, separable metric space X [34]. One of the most useful separability criteria is the positive partial transpose (PPT) criterion [35,36], which states that if a state is separable, then its partial transposition is positive. For discrete systems, positivity of the partial transposition of a state is necessary and sufficient for separability in the 2⊗2 and 2⊗3 cases; however, this is not true for higher-dimensional systems [36]. For continuous systems, the authors of [27,37] extended the PPT criterion to (n+m)-mode continuous systems. It is remarkable that a (1+n)-mode Gaussian state has PPT if and only if it is separable. Furthermore, for the (1+1)-mode case, it is shown that a (1+1)-mode Gaussian state $\rho_{AB}$ is separable if and only if $\tilde\nu_- \ge 1$, where $\tilde\nu_-$ is the smallest symplectic eigenvalue of the CM of the partial transpose $\rho_{AB}^{T_B}$ [24,29].

Comparing $\mathcal{N}$ with entanglement, we conjecture that there exists some positive number $d < 1$ such that $\mathcal{N}(\rho_{AB}) \le d$ for any (n+m)-mode separable Gaussian state $\rho_{AB}$. If this is true, then $\rho_{AB}$ is entangled whenever $\mathcal{N}(\rho_{AB}) > d$, which would give a criterion of entanglement for (n+m)-mode Gaussian states in terms of the correlation $\mathcal{N}$. Though we cannot give a mathematical proof, we show by a numerical approach that this is true for (1+1)-mode separable Gaussian states with $d \le \frac{1}{10}$. (We first randomly generated one million, five million, ten million, fifty million, one hundred million, and five hundred million separable Gaussian states with a, b, |c|, |d| ranging from 1 to 2, and found that the maximum of $\mathcal{N}$ was smaller than 0.09. We then extended the range to 5; the maximum of $\mathcal{N}$ was smaller than 0.1. Extending the range further to 10, 100, 1000, and 10000, the maximum of $\mathcal{N}$ remained smaller than 0.1. We repeated the above computations ten times, with the same result.)

Proposition 2. $\mathcal{N}(\rho_{AB}) \le 0.1$ for any (1+1)-mode separable Gaussian state $\rho_{AB}$.
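As a quick numerical companion to the closed form just given for the two-mode squeezed vacuum, the snippet below (an illustration, not code from the paper) evaluates $\mathcal{N}(\rho(r))$, showing that it vanishes at r = 0 (a product state), approaches the sharp bound 1 as r grows, and exceeds the 0.1 threshold of Proposition 2 already for moderate squeezing.

```python
# Evaluate N(rho(r)) = 1 - 8 / (6 + exp(-4r) + exp(4r)) for the two-mode
# squeezed vacuum state, as given above. Illustration only.
import math

def N_tmsv(r: float) -> float:
    return 1.0 - 8.0 / (6.0 + math.exp(-4.0 * r) + math.exp(4.0 * r))

for r in [0.0, 0.1, 0.25, 0.5, 1.0, 2.0]:
    flag = "entangled by Proposition 2" if N_tmsv(r) > 0.1 else ""
    print(f"r = {r:4.2f}  N = {N_tmsv(r):.4f}  {flag}")
# N(0) = 0 (product state) and N -> 1 as r -> infinity (the bound is sharp).
```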
It follows from Theorem 1 that the quantum correlation $\mathcal{N}$ exists in all entangled Gaussian states and in almost all separable Gaussian states, the exceptions being product states. In addition, Proposition 2 can be viewed as a sufficient condition for the entanglement of two-mode Gaussian states: if $\mathcal{N}(\rho_{AB}) > 0.1$, then $\rho_{AB}$ is entangled. To gain insight into the behavior of the quantum correlation $\mathcal{N}$ and to compare it with entanglement and the discords, we consider a class of physically relevant states, the squeezed thermal states (STS). This kind of Gaussian state has been used by many authors to illustrate the behavior of several interesting quantum correlations [12,13]. Recall that a two-mode Gaussian state $\rho_{AB}$ is an STS if $\rho_{AB} = S(r)\,\nu_1(\bar n_1)\otimes\nu_2(\bar n_2)\,S(r)^{\dagger}$, where $\nu_i(\bar n_i) = \sum_k \frac{\bar n_i^k}{(1+\bar n_i)^{k+1}}|k\rangle\langle k|$ is the thermal state with mean thermal photon number $\bar n_i$ (i = 1, 2) and $S(r) = \exp\{r(\hat a_1^{\dagger}\hat a_2^{\dagger} - \hat a_1\hat a_2)\}$ is the two-mode squeezing operator. In particular, when $\bar n_1 = \bar n_2 = 0$, $\rho_{AB}$ is a pure two-mode squeezed vacuum state, also known as an Einstein-Podolsky-Rosen (EPR) state [24]. When $\bar n_1 > 0$ or $\bar n_2 > 0$, $\rho_{AB}$ is a mixed Gaussian state.

We first discuss the relation between $\mathcal{N}$ and entanglement by considering symmetric squeezed thermal states (SSTS, i.e., STS with $\bar n_1 = \bar n_2$). Regard $\mathcal{N}(\rho_{AB})$ as a function of the purity $\mu$ and of $\tilde\nu_-$. From Figure 1a, for separable states, we see that the value of $\mathcal{N}$ at separable SSTS is always smaller than 0.06, which supports Proposition 2. From Figure 1b, for fixed purity $\mu$, $\mathcal{N}$ turns out to be a decreasing function of $\tilde\nu_-$; however, for fixed $\tilde\nu_-$, $\mathcal{N}$ tends to 0 as $\mu$ increases. For entangled SSTS, one sees from Figure 2a,b that the value of $\mathcal{N}$ ranges from 0 to 1. This reveals that, for some entangled SSTS, $\mathcal{N}$ can be smaller than $\frac{1}{10}$; thus, Proposition 2 is only a necessary condition for a Gaussian state to be separable. For fixed purity $\mu$, Figures 1b and 2b show that $\mathcal{N}(\rho_{AB})$ increases as entanglement increases (that is, as $\tilde\nu_- \to 0$), and $\lim_{\mu\to 1,\,\tilde\nu_-\to 0}\mathcal{N} = 1$. However, for fixed $\tilde\nu_-$, the behavior of $\mathcal{N}$ as a function of $\mu$ is more complex. Regarding $\mathcal{N}$ as a function of r and $\bar n$, Figure 3 shows that $\mathcal{N}(\rho_{AB})$ is an increasing function of r and a decreasing function of $\bar n$. The value of $\mathcal{N}(\rho_{AB})$ always attains its maximum at $\bar n = 0$, that is, at pure states. Figure 3b also shows that $\mathcal{N}(\rho_{AB})$ depends almost only on $\bar n$ when r is large enough, because the curves for r = 5, 10, 20 are almost the same.

Recall that an n-mode Gaussian positive operator-valued measure (GPOVM) is a collection of positive operators $\Pi = \{\Pi(z)\}$ satisfying $\int_z \Pi(z)\,dz = I$, where $\Pi(z) = W(z)\,\omega\,W^{\dagger}(z)$, $z \in \mathbb{R}^{2n}$, with $W(z)$ the Weyl operators and $\omega$ an n-mode Gaussian state, called the seed of the GPOVM $\Pi$ [38,39]. Let $\rho_{AB}$ be an (n+m)-mode Gaussian state and $\Pi = \{\Pi(z)\}$ a GPOVM on the subsystem B. Denote by $\rho_A(z) = \frac{1}{p(z)}\mathrm{Tr}_B(\rho_{AB}\, I\otimes\Pi(z))$ the reduced state of the system A after the GPOVM $\Pi$ is performed on the system B, where $p(z) = \mathrm{Tr}(\rho_{AB}\, I\otimes\Pi(z))$. Write the von Neumann entropy of a state $\rho$ as $S(\rho) = -\mathrm{Tr}(\rho\log\rho)$. Then the Gaussian QD of $\rho_{AB}$ is defined as
$$D(\rho_{AB}) = S(\rho_B) - S(\rho_{AB}) + \inf_{\Pi}\int dz\, p(z)\, S(\rho_A(z))$$
[12,13], where the infimum is taken over all GPOVMs $\Pi$ performed on the system B.
It is known that a (1+1)-mode Gaussian state has zero Gaussian QD if and only if it is a product state; in addition, $D(\rho_{AB}) \le 1$ for all separable (1+1)-mode Gaussian states. If the standard form of the CM of a (1+1)-mode Gaussian state $\rho_{AB}$ is as in Equation (2), then
$$D(\rho_{AB}) = f(\sqrt{\beta}) - f(v_-) - f(v_+) + \inf_{\omega} f\big(\sqrt{\det E_\omega}\big), \qquad (8)$$
where the infimum is taken over all one-mode Gaussian states $\omega$, $f(x) = \frac{x+1}{2}\log\frac{x+1}{2} - \frac{x-1}{2}\log\frac{x-1}{2}$, $v_-$ and $v_+$ are the symplectic eigenvalues of the CM of $\rho_{AB}$, and $E_\omega = A_0 - C_0(B_0 + \Gamma_\omega)^{-1}C_0^T$ with $\Gamma_\omega$ the CM of $\omega$. Letting $\alpha = \det A_0$, $\beta = \det B_0$, $\gamma = \det C_0$ and $\delta = \det\Gamma_0$, the value of $\inf_\omega \det E_\omega$ is given in [13] by an explicit two-case formula in $\alpha, \beta, \gamma, \delta$ (Equation (9)).

In [14], the Gaussian GD $D_G$ was proposed. Consider an (n+m)-mode Gaussian state $\rho_{AB}$; its Gaussian GD is defined by
$$D_G(\rho_{AB}) = \inf_{\Pi}\|\rho_{AB} - \Pi(\rho_{AB})\|_2^2,$$
where the infimum is taken over all GPOVMs $\Pi$ performed on system B, $\|\cdot\|_2$ stands for the Hilbert-Schmidt norm, and $\Pi(\rho_{AB}) = \int dz\,(I\otimes\Pi(z))\rho_{AB}(I\otimes\Pi(z))$. If $\rho_{AB}$ is a (1+1)-mode Gaussian state with CM $\Gamma$ as in Equation (1) and $\Pi$ is a one-mode Gaussian POVM performed on mode B with seed $\omega_B$, then $\|\rho_{AB} - \Pi(\rho_{AB})\|_2^2$ admits an explicit expression in terms of $\Gamma$ and the CM of the seed $\omega_B$ (Equation (10)). From [14] and the results mentioned above, it is clear that, for a (1+1)-mode Gaussian state $\rho_{AB}$, $D_G(\rho_{AB}) = 0$ if and only if $\rho_{AB}$ is a product state.

By Theorem 1 and the results mentioned above, D, $D_G$ and $\mathcal{N}$ describe the same quantum correlation for (1+1)-mode Gaussian states. However, by their definitions, D and $D_G$ require an optimization over all GPOVMs, while $\mathcal{N}$ only employs Gaussian unitary operations, which is simpler and may consume fewer physical resources. Moreover, although an analytical formula of D is available for two-mode Gaussian states, the expression is complex and difficult to calculate (Equations (8) and (9)); $D_G$ is not tractable in general, and there is no analytical formula for all (1+1)-mode Gaussian states (Equation (10)). As far as we know, no results have been obtained on D or $D_G$ for the general (n+m)-mode case. To gain better insight into the behavior of $\mathcal{N}$ and $D_G$, we compare them in scale with the help of two-mode STS; the $D_G$ of any two-mode STS $\rho_{AB}$ is given in [14] (Equation (11)). Clearly, our formula (7) for $\mathcal{N}$ is simpler than formula (11) for $D_G$. Figures 4 and 5 are plotted in terms of the photon number $\bar n$ and the squeezing parameter r. Figure 4 shows that, for SSTS with $0 < r \le 2.5$, we have $D_G(\rho_{AB}) < \mathcal{N}(\rho_{AB})$; this means that $\mathcal{N}$ is better than $D_G$ at detecting the correlation they describe in SSTS with r < 2.5. Figure 5a reveals that, for nonsymmetric STS with r = 0.5, we also have $D_G(\rho_{AB}) < \mathcal{N}(\rho_{AB})$; that is, $\mathcal{N}$ is better in this situation too. However, for r = 5, $\mathcal{N}$ and $D_G$ cannot be compared with each other globally, which suggests that one may use $\max\{\mathcal{N}(\rho_{AB}), D_G(\rho_{AB})\}$ to detect the correlation.

Conclusions

In conclusion, we have introduced a measure of quantum correlation $\mathcal{N}$ for bipartite quantum states in continuous-variable systems. This measure is obtained by performing Gaussian unitary operations on a subsystem, and its value is invariant under local Gaussian unitary operations for all quantum states. $\mathcal{N}$ exists in all (n+m)-mode Gaussian states except product ones. In addition, $\mathcal{N}$ takes values in [0, 1) and the upper bound 1 is sharp. An analytical formula of $\mathcal{N}$ for any (1+1)-mode Gaussian state is obtained. Moreover, for any (n+m)-mode Gaussian state, an estimate of $\mathcal{N}$ is established in terms of its covariance matrix.
Numerical evidence shows that the inequality $\mathcal{N}(\rho_{AB}) \le 0.1$ holds for any (1+1)-mode separable Gaussian state $\rho_{AB}$, which can be viewed as a criterion of entanglement. It is worth noting that Gaussian QD, Gaussian GD and $\mathcal{N}$ measure the same quantum correlation for (1+1)-mode Gaussian states; however, $\mathcal{N}$ is easier to calculate and can be applied to any (n+m)-mode Gaussian state.
TableGPT: Few-shot Table-to-Text Generation with Table Structure Reconstruction and Content Matching

Although neural table-to-text models have achieved remarkable progress with the help of large-scale datasets, they suffer from insufficient learning when training data are limited. Recently, pre-trained language models have shown potential for few-shot learning thanks to the linguistic knowledge learned by pretraining on large-scale corpora. However, leveraging a powerful pretrained language model for table-to-text generation in the few-shot setting faces three challenges: (1) the gap between the task's structured input and the natural language input used to pretrain the language model; (2) the lack of modeling of table structure; and (3) the need to improve text fidelity by reducing incorrect expressions that contradict the table. To address the aforementioned problems, we propose TableGPT for table-to-text generation. First, we utilize a table transformation module with templates to rewrite the structured table in natural language as input for GPT-2. In addition, we exploit multi-task learning with two auxiliary tasks: one preserves the table's structural information by reconstructing the structure from GPT-2's representation, and the other improves the text's fidelity with a content matching task that aligns the table with the information in the generated text. In experiments on Humans, Songs, and Books, three few-shot table-to-text datasets in different domains, our model outperforms existing systems in most few-shot settings.

Introduction

Table-to-text generation, which aims at generating descriptive text about the important information in structured data, has broad application prospects for communicating with humans in a comprehensible and natural way, for example in financial report (Murakami et al., 2017) and medical report (Hasan and Farri, 2019) generation. In recent years, data-driven models have shown an impressive capability to produce informative and fluent text with the help of large-scale datasets, such as WIKIBIO (Lebret et al., 2016) and E2E (Dušek et al., 2020). However, it is not always feasible to collect large-scale labeled datasets for the various domains encountered in the real world, which results in unsatisfying performance due to insufficient training. This few-shot learning setting for table-to-text generation is not well explored, and in this paper we focus on how to model few-shot table-to-text generation efficiently with limited training pairs.

Recently, pre-trained language models have shown promising progress in various natural language processing tasks (Devlin et al., 2019; Radford et al., 2019). They capture linguistic knowledge by pretraining on large-scale unlabeled datasets and generalize to downstream tasks with little labeled data in the target domain, effectively modeling the few-shot setting (Peng et al., 2020). However, benefiting table-to-text generation from a powerful pre-trained language model, especially in the few-shot setting, is non-trivial due to three challenges: (1) there is a gap between the structured data input for table-to-text generation and the natural language input used for pretraining; (2) the pretrained model lacks modeling of the table's structure, which contains rich information for understanding the input before generating text; and (3) it remains unaddressed how to maintain the text's fidelity while exploiting linguistic knowledge from the pretraining corpus; that is, the (highlighted) information in the text (Table 1) should correctly derive from the structured data.
In order to alleviate the aforementioned problems, we propose TableGPT, which focuses on generating high-fidelity text for table-to-text generation with limited training pairs. To address the gap between the structured table input and the natural language input that GPT-2 processes during pretraining, we utilize a table transformation module that employs templates to naturally transform the structured table into natural language. In addition, we utilize two auxiliary tasks under the framework of multi-task learning, table structure reconstruction and content matching, which target the pretrained GPT-2's lack of table structure modeling and the text's fidelity, respectively. In detail, the table structure reconstruction task forces GPT-2 to embed the table structure into its representation when modeling the structured table. Besides, the content matching task helps the model correctly describe the important information in the table via an optimal transport technique, which measures the distance between the information in the generated text and the information in the table and uses this distance as a penalty for text with incorrect information. We conducted experiments on three data-to-text datasets in different domains (Chen et al., 2020b), Humans, Books, and Songs, in various few-shot settings. Both automatic and human evaluation results show that our model achieves new state-of-the-art performance for table-to-text generation in terms of generating fluent and high-fidelity text in most few-shot settings.

Task Definition

For the table-to-text task discussed in this paper, each training instance can be formulated as a pair of table and summary, $E = (S, T)$. Given a table, formulated as a set of records $S = \{r_i\}_{i=1}^{N}$, the model is expected to generate a descriptive text $T = w_1, w_2, \ldots, w_L$, where N is the number of records and L is the number of words in the text. Each record $r_i$ consists of two types of information: $r_i.a$ denotes the attribute of the record (e.g., name) and $r_i.v$ denotes the corresponding value (e.g., james beattie). Please note that both $r_i.a$ and $r_i.v$ can be viewed as sequences of words.

Pre-trained Language Model

Recently, pre-trained language models, such as BERT (Devlin et al., 2019), GPT-2 (Radford et al., 2019), XLNet (Yang et al., 2019) and more, have achieved remarkable progress on various NLP tasks. The main idea is to first pretrain a neural language model with a large number of parameters on a large-scale dataset in order to capture linguistic knowledge, and then to transfer that knowledge to a downstream task via finetuning on the task's dataset.

Figure 1: (a) Table Structure Reconstruction and (c) Content Matching are auxiliary tasks. The former reconstructs attributes from GPT-2's representation of value tokens, forming $L_{SR}$. The latter uses optimal transport to measure the distance between the information mentioned in the text and in the table, using it as the content matching loss $L_{CM}$. The model is jointly finetuned with the above three losses.

Impressively, such models can outperform the previous state of the art on various NLP tasks by a large margin. Since we investigate the table-to-text generation task in this paper rather than natural language understanding tasks, we choose GPT-2 as the basis of our model. The model structure of GPT-2 is a 12-to-48-layer transformer decoder (Vaswani et al., 2017) with 117 million to 1542 million parameters. Each layer consists of a stack of masked multi-head self-attention and a feed-forward neural network with residual connections and layer normalization.
The pre-training objective of GPT-2 is the same as that of a standard language model: maximizing the probability of the gold text. The large-scale model is trained on the vast and diverse WebText dataset, with 8 million documents collected from the Internet. Its success in text generation is attributable both to its high-capacity model and to the knowledge learned from pretraining on a vast dataset.

Approach

In this section, we address the three incompatibilities between the pretrained language model and table-to-text generation illustrated in Section 1. Figure 1 presents the overall multi-task training framework of our model. We first utilize the table transformation module to transform the structured table into a text sequence and concatenate it with the reference text, resulting in a training sequence suitable for the GPT-2 model. Then, two auxiliary tasks, table structure reconstruction and content matching with optimal transport, are performed on top of GPT-2's representation of the training sequence. The training objectives of these two auxiliary tasks, along with GPT-2's language model objective, are jointly finetuned from the pre-trained GPT-2 under the multi-task training framework. The overall goal is to produce high-fidelity text while maintaining fluency.

Table Transformation

As noted in Section 2.1, the given structured table consists of multiple records as attribute-value pairs. In order to adapt to the sequential nature of the language model, we employ a template-based table serialization method (Chen et al., 2019b) to encode the structured table as a sequence. For example, we serialize the attribute-value pair "name: jack reynolds" as the sentence "name is jack reynolds." and concatenate all such sentences into a document according to the order of the records in the table. After obtaining the serialized table, we connect it with the corresponding natural language description T using the special token "<table2text>", which serves as a functional token that both encodes the overall information of the table and acts as a starting signal for generating text. The whole sequence ends with the special token "<endoftext>". In this way, our model encodes the structured table and learns to predict the target sequence one word at a time, as in GPT-2. We denote the final input sequence as $ST = st_1, \cdots, st_{m+n+2}$, where m is the length of the serialized table, n is the length of the text, and 2 accounts for the special tokens mentioned above. The language model's training objective is to maximize the likelihood of the reference text, which is equivalent to minimizing the negative log likelihood (language model loss $L_{LM}$), as characterized by Equation 1:
$$L_{LM} = -\sum_{t} \log p(st_t \mid st_{<t}). \qquad (1)$$

Table Structure Reconstruction

As shown in Table 1, unlike many natural language generation tasks that take sentences as input, table-to-text generation models need to process a table with structural information. Each data record in the input can be seen as a pair of attribute and value. Traditionally, table-to-text models utilized attribute-value concatenation to represent tables; in this way, they are able to capture the structural information by learning the correspondence between value and attribute. However, when we transform the table into natural language and use the pre-trained language model GPT-2 for representation, such structural information is no longer explicitly modeled. Inspired by prior work, our model treats the attribute names as labels from which to reconstruct this structural information out of GPT-2's learned representations of the table values.
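The following sketch illustrates the template-based serialization described above, together with the bookkeeping of which sequence positions are value tokens and which attribute each belongs to, as consumed by the reconstruction task detailed next. The template wording and whitespace tokenization are simplifying assumptions; the actual model uses GPT-2's BPE tokenizer.

```python
# Sketch of template-based serialization plus the (position -> attribute)
# labels consumed by the structure reconstruction task described below.
# Whitespace tokenization and the template wording are simplifying
# assumptions; the real model uses GPT-2's BPE tokenizer.

def serialize_with_labels(records):
    """records: list of (attribute, value) pairs.
    Returns the token list and {token_index: attribute} for value tokens."""
    tokens, value_labels = [], {}
    for attr, value in records:
        tokens += attr.split() + ["is"]
        for tok in value.split():
            value_labels[len(tokens)] = attr   # label value tokens only
            tokens.append(tok)
        tokens.append(".")
    return tokens, value_labels

records = [("name", "jack reynolds"), ("occupation", "footballer")]
tokens, labels = serialize_with_labels(records)
sequence = " ".join(tokens) + " <table2text> jack reynolds is a footballer . <endoftext>"
print(sequence)
print(labels)   # -> {2: 'name', 3: 'name', 7: 'occupation'}
```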
In detail, as shown in Figure 1(b), given a serialized table $S_k$ consisting of different attributes and values in natural language form, the table structure reconstruction task takes the last layer of GPT-2's hidden states for each value token, $[H^t_{i,j}]_{i=1:n,\, j=1:m_i}$, and classifies which attribute each value token's representation refers to. Here, i indexes the i-th record of the table $S_k$, j indexes the j-th value token of the i-th record, n is the number of records, and $m_i$ is the number of tokens of the i-th record's value. The reconstruction classifier is
$$p(a_{i,j}) = \mathrm{softmax}(W_t H^t_{i,j} + b_t). \qquad (2)$$
Please note that the serialized table consists of attribute, value, and template tokens; in this auxiliary task, we only take GPT-2's hidden states for the value tokens and reconstruct the structural information by classifying their corresponding attributes. $W_t$ and $b_t$ are the trainable parameters of the introduced classifier, and $p(a_{i,j})$ is the probability of classifying the value token $H^t_{i,j}$ as referring to attribute $a_{i,j}$. We use cross entropy as this task's objective function, as illustrated by Equations 2 and 3:
$$L_{SR} = -\frac{1}{Z}\sum_{i,j} \log p(a_{i,j}), \qquad (3)$$
where $a_{i,j}$ refers to the gold attribute label for the value token and Z is the number of value tokens whose corresponding attribute labels need to be reconstructed. By incorporating this auxiliary task, TableGPT is guided to embed structural information when representing the table during training.

Content Matching

Take Table 1 as an example: generating high-fidelity text that correctly describes the information in the table is the core of table-to-text generation. Producing fluent but incorrect text still means unsatisfying performance, as such text is unreliable for the purpose of disseminating comprehensible information. Ideally, when generating words intended to describe information in the table, directly copying them from the table would result in high-fidelity text. However, it is non-trivial to integrate a copy mechanism inside the transformer architecture of the GPT-2 model, since changing the model structure may break the syntactic and semantic features contained in the pretrained language model, which are essential for text generation, especially in the few-shot setting. Also, rephrasing is sometimes needed to produce more natural text. In order to encourage our model to generate high-fidelity text while keeping GPT-2's advantage of producing fluent text, we utilize another auxiliary task, called content matching, during finetuning on the table-to-text corpus.

The content matching task explicitly matches the important information in a table with the information in the corresponding generated text. An intuitive way is to apply a mismatching loss by hard-matching key information in the table with information in the generated text; but hard matching is discrete and non-differentiable, so the corresponding objective cannot be learned directly by gradient descent. Inspired by optimal transport (OT), which can measure the distance between information in a source sequence and a target sequence without breaking the end-to-end training process, we adopt it as a content matching loss that guides the model to generate text containing information that aligns with the table. As in Section 3.1, the whole GPT-2 training sequence consists of the serialized table and the reference text. The serialized table sequence, $x = x_1, \cdots, x_m$, can be represented as a discrete distribution $\mu = \sum_{i=1}^{m} u_i \delta_{x_i}$, where $u_i \ge 0$ and $\sum_i u_i = 1$, m is the length, and $\delta_x$ is the Dirac function centered on x.
Content Matching

Taking Table 1 as an example, generating high-fidelity text that correctly describes the information in the table is the core of table-to-text generation. Producing fluent but incorrect text is still unsatisfactory, because such text cannot be relied on to disseminate comprehensible information. Ideally, when generating words that are intended to describe information in the table, directly copying them from the table would yield high-fidelity text. However, it is non-trivial to integrate a copy mechanism inside the transformer architecture of the GPT-2 model, since changing the model structure may break the syntactic and semantic features contained in the pretrained language model, which are essential for text generation, especially in the few-shot setting. Also, rephrasing is sometimes needed to produce more natural text. In order to encourage our model to generate high-fidelity text while keeping GPT-2's advantage of producing fluent text, we utilize another auxiliary task, called the content matching task, during fine-tuning on the table-to-text corpus. The content matching task explicitly matches the important information in a table with the information in the corresponding generated text. An intuitive approach is a mismatching loss that hard-matches key information in the table against information in the generated text, but such a loss is discrete and non-differentiable, so it cannot be learned directly by gradient descent. Inspired by optimal transport (OT), which can measure the distance between information in a source sequence and a target sequence without breaking the end-to-end training process, we adopt it as a content matching loss that guides the model to generate text whose information aligns with the table.

As in Section 3.1, the whole GPT-2 training sequence consists of the serialized table and the reference text. The serialized table sequence $x = x_1, \cdots, x_m$ can be represented as a discrete distribution $\mu = \sum_{i=1}^{m} u_i \delta_{x_i}$, where $u_i \ge 0$, $\sum_i u_i = 1$, $m$ is the length, and $\delta_x$ is the Dirac function centered on $x$. Similarly, the discrete distribution of the reference text sequence $y = y_1, \cdots, y_n$ can be represented as $\nu = \sum_{j=1}^{n} v_j \delta_{y_j}$. Under this setting, the OT (optimal transport) distance between the probability distributions $u = \{u_i\}_{i=1}^{m}$ and $v = \{v_j\}_{j=1}^{n}$ is defined as the solution of the following network-flow problem (Luise et al., 2018):

$$D_{OT}(\mu, \nu) = \min_{T \in \Pi(\mu, \nu)} \sum_{i=1}^{m} \sum_{j=1}^{n} T_{ij}\, d(x_i, y_j),$$

where $\Pi(\mu, \nu) = \{T \in \mathbb{R}^{m \times n}_{+} \mid T \mathbf{1}_n = \mu,\; T^{\top} \mathbf{1}_m = \nu\}$ is the set of joint distributions with marginals $u$ and $v$, $\mathbf{1}_n$ and $\mathbf{1}_m$ are the $n$-dimensional and $m$-dimensional all-one vectors respectively, and $d(x_i, y_j)$ denotes the cost of moving $x_i$ to $y_j$. In particular, we adopt the cosine distance between the two token embedding vectors of $x_i$ and $y_j$, defined as $d(x_i, y_j) = 1 - \frac{x_i^{\top} y_j}{\|x_i\|_2 \|y_j\|_2}$. Exact minimization over $T$ is computationally intractable, so we use the recently proposed Inexact Proximal point method for Optimal Transport (IPOT) as an approximation.

For natural language generation tasks such as neural machine translation, the OT distance is often applied by matching the source sequence with the whole target sequence, since almost every word in both sequences is supposed to be matched. However, in realistic table-to-text settings there is redundant information in both the table and the text. In order to apply the OT distance, unlike the previous adoption (Wang et al., 2020), which assumes that all information in the table should be described in the text, we propose to match only the record words that appear in both the table and the reference text. In this way, the OT distance avoids wrongly penalizing text that does not mention redundant information in the table.

Learning Objective

The table structure reconstruction and content matching auxiliary tasks are trained together with the main GPT-2 language model loss, which can be regarded as multi-task learning. The multi-task loss consists of the language model loss $\mathcal{L}_{LM}$, the table structure reconstruction loss $\mathcal{L}_{SR}$, and the content matching loss $\mathcal{L}_{CM}$. The loss function $\mathcal{L}_{MT}$ of the full model is computed as follows:

$$\mathcal{L}_{MT} = \mathcal{L}_{LM} + \lambda_1 \mathcal{L}_{SR} + \lambda_2 \mathcal{L}_{CM},$$

where $\lambda_1$ and $\lambda_2$ are hyper-parameters acting as scale factors. Please note that when optimizing $\mathcal{L}_{CM}$ with the IPOT algorithm, the gradients of the OT loss are hard to back-propagate to the model's parameters, since sampling words from the multinomial distribution produced by the language model is non-differentiable.

We implement TableGPT based on the transformers library (Wolf et al., 2019). The configuration of the base GPT-2 model is 12 layers with 8 attention heads per layer. For the optimizer, we adopt the OpenAI AdamW optimizer with 100 warm-up steps. We train the model with the learning rate set to 2e-4. The batch size is set to 10 for all datasets. The weights $\lambda_1$ and $\lambda_2$ of the table structure reconstruction loss and the content matching loss are both 0.2, chosen according to performance on the validation set. Following Chen et al. (2020b)'s way of dealing with the vocabulary limitation, for all datasets we use Byte Pair Encoding (BPE) and the subword vocabulary of Radford et al. (2019).
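Returning to the content-matching loss defined above, here is a hedged sketch of the IPOT approximation over a cosine cost matrix. The tensor shapes, uniform marginals, and hyper-parameters (beta and the iteration counts) are assumptions for illustration, not the paper's exact settings; in practice the rows would be embeddings of the record words shared between table and reference, per the matching rule described above.

```python
import torch

def cosine_cost(X, Y):
    """Pairwise cosine distance d(x_i, y_j) = 1 - cos(x_i, y_j) between token embeddings."""
    Xn = X / X.norm(dim=1, keepdim=True).clamp_min(1e-8)
    Yn = Y / Y.norm(dim=1, keepdim=True).clamp_min(1e-8)
    return 1.0 - Xn @ Yn.t()                      # (m, n) cost matrix

def ipot_distance(C, beta=0.5, n_iter=50, inner=1):
    """Inexact Proximal point method for OT (IPOT); returns <T, C>."""
    m, n = C.shape
    mu = torch.full((m,), 1.0 / m)                # uniform marginal over table tokens
    nu = torch.full((n,), 1.0 / n)                # uniform marginal over text tokens
    T = torch.ones(m, n) / (m * n)                # initial transport plan
    A = torch.exp(-C / beta)                      # proximal kernel
    b = torch.ones(n) / n
    for _ in range(n_iter):
        Q = A * T                                 # proximal step around the current plan
        for _ in range(inner):                    # Sinkhorn-style projections onto marginals
            a = mu / (Q @ b)
            b = nu / (Q.t() @ a)
        T = a.unsqueeze(1) * Q * b.unsqueeze(0)
    return (T * C).sum()

# Toy usage: 4 shared record words vs. 6 text tokens, 16-d embeddings.
loss_cm = ipot_distance(cosine_cost(torch.randn(4, 16), torch.randn(6, 16)))
```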
Comparing Methods

We compare our proposed TableGPT with a baseline model and the previous state-of-the-art model: Base and Base+switch+LM. More details can be found in Chen et al. (2020b).

[Table 4: Human evaluation results. Marked models perform significantly differently from TableGPT (p < 0.05), one-way ANOVA with post-hoc Tukey HSD tests.]

• Base: A Seq2Seq model with a field-gating encoder that incorporates the table structure's information during encoding. Additionally, it utilizes pre-trained word embeddings that are fixed during the training stage. Since it achieves competitive performance on large-scale datasets, it shows how a data-driven Seq2Seq model typically performs with limited training data.

• Base + switch + LM: This model tries to exploit GPT-2's knowledge learnt from pretraining on a vast corpus by proposing a switch policy that chooses between copying from the infobox and generating from the GPT-2 language model when producing each word of the text. We also use the code and data released by Chen et al. (2020b) to reproduce its result for human evaluation and report the corresponding automatic evaluation results as Base + switch + LM(R).

• TableGPT: Our proposed model, which exploits GPT-2's knowledge learnt from pretraining on a vast corpus for few-shot learning while enhancing it with two auxiliary tasks for generating high-fidelity text. We also perform ablation studies to evaluate each auxiliary task's contribution: -sr denotes the variant without table structure reconstruction, -cm the variant without content matching, and -sr&cm the performance of GPT-2 without any auxiliary tasks.

Automatic Evaluation

Following previous work (Chen et al., 2020b), we adopt BLEU-4 (Papineni et al., 2002) and ROUGE-4 (Lin, 2004) for automatic evaluation. Tables 2 and 3 show the corresponding results of the compared methods on the different datasets. Although Base achieves competitive results when trained on a large-scale dataset, its performance drops drastically in the few-shot setting. While utilizing a switch policy to combine copying words from the table with generating words from GPT-2 (Base + switch + LM) achieves impressive performance in all few-shot settings, a standard GPT-2 model (TableGPT -sr&cm) that takes a serialized table as input and generates text without copying actually performs better in most few-shot settings. TableGPT, with table structure reconstruction and content matching to preserve structural information during encoding and to guide the model toward high-fidelity text, further improves the performance. The ablation studies also show that each auxiliary task contributes to the performance gains, and applying both achieves the best performance in most few-shot settings.

Human Evaluation

Following the settings in Chen et al. (2020b), we conduct human evaluation on TableGPT, the previous state-of-the-art model Base+switch+LM(R), and the Reference, from two aspects: factual correctness and language naturalness. We sampled 100 tables along with the corresponding generated text from each of the Humans, Books, and Songs test sets (under the few-shot setting with 100 training examples), resulting in 300 tables in total. In order to reduce variance among human raters, each example is evaluated by three different graduates who have passed an intermediate English test, and the scores are averaged. The first criterion, factual correctness, counts how many facts in the text are supported by the table, noted as #Sup, and how many are contradicting with or missing from the table, noted as #Cont.

[Table 5 example outputs, referenced in the Case Study below. Base+switch+LM(R): "james beattie ( , born 10 july 1971 ) is an english former professional association footballer who played for , among others , England ." TableGPT: "james beattie ( born 27 february 1978 in lancaster ) is a former english footballer who played as a striker ."]
We report the average number of supporting facts (#Sup) and contradicting facts (#Cont) on the different datasets in Table 4. The second evaluation criterion, language naturalness, evaluates the models in terms of grammaticality (is the sentence grammatically correct?) and fluency (is the sentence fluent and natural?). We arrange the texts from the different models on the same table into 3 pairs. For each pair of texts, shown without the table, raters are asked to decide which one is better or whether both texts are of the same quality, solely in terms of language naturalness. When one generated summary is chosen as the better one, we assign a score of 1.0 to it and 0.0 to the worse one; if the two summaries are deemed of equal quality, we assign 0.5 to both. We then calculate the average scores and report the results on the different datasets in Table 4. The results show that TableGPT produces fewer contradicting facts than the previous state-of-the-art model Base+switch+LM(R) on Humans and Books and achieves comparable performance on Songs. Meanwhile, TableGPT includes more supporting facts on Humans and Songs and generates more natural text than Base+switch+LM(R) on Humans and Songs. Overall, our TableGPT model improves text fidelity while preserving the naturalness of the text.

Case Study

Compared with the previous state-of-the-art model Base+switch+LM(R) and the Reference, TableGPT performs better: it accurately describes most of the key information in fluent text. For Base+switch+LM(R), the design of a separate copy mechanism alongside GPT-2 may account for the inconsistent expression "played for , among others , England ." and for the wrong birth date, which shows that an imperfect switch policy for deciding when to copy from the table can sometimes hurt the model's ability to generate high-fidelity text. On the contrary, TableGPT, enhanced with two auxiliary tasks to generate high-fidelity text without breaking the single unified GPT-2 model, performs better in terms of fidelity and fluency in this example.

Related Work

In recent years, neural models that generate text directly from preprocessed data (Wiseman et al., 2017; Puduppully et al., 2019a; Puduppully et al., 2019b; Gong et al., 2019; Feng et al., 2020) have become mainstream for table-to-text generation and achieve impressive performance with the help of large-scale datasets. Mei et al. (2016) propose a pre-selector on an encoder-aligner-decoder model for generation, which strengthens the model's content selection ability and obtains considerable improvement over a standard Seq2Seq model. Other work proposes a hybrid attention mechanism for modeling the order of content when generating text, and a field-gating encoder focused on modeling table structure with a dual attention mechanism that utilizes the structure information when decoding. In addition, Bao et al. (2018) develop a table-aware sequence-to-sequence model for this task. However, Chen et al. (2020b) show that well-performing Seq2Seq models trained on large-scale datasets suffer from limited training data in the few-shot setting. Recently, GPT-2 has been successfully adapted to dialogue generation in the few-shot setting (Zhang et al., 2020; Peng et al., 2020), showing the potential to address the insufficient-training-data problem with the help of knowledge learnt from pretraining on a vast corpus. As for table-to-text generation, Chen et al.
(2020b) propose a switch model that uses GPT-2 to generate template-like functional words while generating factual expressions by copying records' values from the table in the few-shot scenario. Different from this work, we model the table and generate the text within a single GPT-2 model in a unified way, and we show that our proposed TableGPT performs well in the few-shot scenario. In addition, different from both works mentioned above, we enhance GPT-2's ability to model table structure and improve text fidelity. Another closely related paper is Chen et al. (2019b), which predicts whether a statement aligns with the records in a table. Since the nature of the classification task makes it possible to model table records bidirectionally, it uses BERT with templates to transform and model the table. Meanwhile, Chen et al. (2020a) explore coarse-to-fine table-to-text generation with a standard GPT-2 model. Different from these two works, we adapt GPT-2 to a text generation scenario with structured data input and, more importantly, address table structure modeling and improve text fidelity. In addition, one of the auxiliary tasks, content matching, is inspired by ideas in machine translation (Yang et al., 2019a) and Seq2Seq models. The closest paper on data-to-text generation is Wang et al. (2020). They assume that the generated text should cover all the information in the table. But in a more realistic scenario, such as the task we explore in this paper, the table contains redundant information and only the important parts should be used to constrain the model toward high-fidelity text. Therefore, we propose to match only the important information in the table with the information in the text as an auxiliary task during training.

Conclusion

In this work, we present TableGPT, which enhances GPT-2 for table-to-text generation with two auxiliary tasks, table structure reconstruction and content matching, improving text fidelity while exploiting GPT-2's linguistic knowledge learnt from pretraining on a large-scale corpus. In detail, we use table transformation to bridge the gap between the structured table and GPT-2's natural language input, and we further enhance GPT-2 with the two auxiliary tasks. The table structure reconstruction task helps the model preserve the structural information of the input while representing the table with the powerful pretrained GPT-2. In addition, the content matching task guides the model to generate high-fidelity text with fewer incorrect expressions that contradict the table, by measuring the distance between the information in the table and the information in the generated text. Experiments are conducted on three datasets, Humans, Books, and Songs, in different domains. Both automatic and human evaluation results show that our model achieves new state-of-the-art performance in most few-shot settings.
2020-12-01T14:07:56.801Z
2020-12-01T00:00:00.000
{ "year": 2020, "sha1": "7e57f8aeed2074ea0a943c619cac4a78f28628f4", "oa_license": "CCBY", "oa_url": "https://www.aclweb.org/anthology/2020.coling-main.179.pdf", "oa_status": "HYBRID", "pdf_src": "ACL", "pdf_hash": "7e57f8aeed2074ea0a943c619cac4a78f28628f4", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
17112589
pes2o/s2orc
v3-fos-license
Radiation-induced magnetoresistance oscillation in a two-dimensional electron gas in Faraday geometry

Microwave-radiation induced giant magnetoresistance oscillations, recently discovered in high-mobility two-dimensional electron systems in a magnetic field, are analyzed theoretically. Multiphoton-assisted impurity scatterings are shown to be the primary origin of the oscillation. Based on a model which considers the interaction of electrons with the electromagnetic fields in Faraday geometry, we are able not only to reproduce the correct period, phase, and the negative resistivity of the main oscillation, but also to obtain secondary peaks and additional maxima and minima in the resistivity curve, some of which were already observed in the experiments.

The discovery of a new type of giant magnetoresistance oscillations in a high-mobility two-dimensional (2D) electron gas (EG) subject to crossed microwave (MW) radiation and magnetic fields, 1,2,3,4,5 especially the observation of "zero-resistance" states developed from the oscillation minima, 4,5,6,7 has revived tremendous interest in magneto-transport in 2D electron systems. 8,9,10,11,12,13 These radiation-induced oscillations of the longitudinal resistivity $R_{xx}$ are accurately periodic in the inverse magnetic field $1/B$, with a period determined by the MW frequency $\omega$ rather than the electron density $N_e$. The observed $R_{xx}$ oscillations exhibit a smooth magnetic-field variation with resistivity maxima at $\omega/\omega_c = j - \delta_-$ and minima at $\omega/\omega_c = j + \delta_+$ ($\omega_c$ is the cyclotron frequency, $j = 1, 2, 3, \ldots$), with positive $\delta_\pm$ around $1/4$. 4 The resistivity minimum goes downward with increasing sample mobility and/or increasing radiation intensity until a "zero-resistance" state shows up, while the Hall resistivity keeps the classical form $R_{xy} = B/(N_e e)$ with no sign of a quantum Hall plateau over the whole magnetic field range exhibiting the $R_{xx}$ oscillation. To explore the origin of the peculiar "zero-resistance" states, different mechanisms have been suggested. 8,9,10,11,12,13 It is understood that the appearance of a negative longitudinal resistivity or conductivity in a uniform model suffices to explain the observed vanishing resistance. 9 The possibility of absolute negative photoconductance in a 2DEG subject to a perpendicular magnetic field was first explored 30 years ago by Ryzhii. 14,15 Recent works 8,10,11 indicated that the periodic structure of the density of states (DOS) of the 2DEG in a magnetic field and the photon-excited electron scatterings are the origin of the magnetoresistance oscillations. Durst et al. 8 presented a microscopic analysis of the conductivity assuming δ-correlated disorder and a simple form of the 2D electron self-energy oscillating with the magnetic field, obtaining the correct period, phase, and the possible negative resistivity. Shi and Xie 11 reported a similar result using the Tien and Gordon current formula 16 for photon-assisted coherent tunneling. In these studies, however, the magnetic field serves only to provide an oscillatory DOS, and the high-frequency (HF) field enters as if there were no magnetic field or as if the magnetic field were in the Voigt configuration. The experimental setup requires dealing with a magnetic field B perpendicular to the HF electric field. In this Faraday configuration, the electron moving under the HF field experiences a Lorentz force, which gives rise to additional electron motion in the perpendicular direction.
In the range $\omega \sim \omega_c$, the electron velocities in both directions are of the same order of magnitude and are resonantly enhanced. This cyclotron resonance (CR) of the HF current response will certainly change the way photons assist the electron scattering.

In this Letter, we construct a microscopic model for the interaction of electrons with electromagnetic fields in Faraday geometry. The basic idea is that, under the influence of a spatially uniform HF electric field, the center-of-mass (CM) of the whole 2DEG in a magnetic field performs a cyclotron motion modulated by the HF field of frequency $\omega$. In an electron gas having impurity and/or phonon scatterings, there exist couplings between this CM motion and the relative motion of the 2D electrons. It is through these couplings that a spatially uniform HF electric field affects the relative motion of the electrons, by opening additional channels for electron transitions between different states. Based on the theory for photon-assisted magnetotransport developed from this physical idea, we show that the main experimental results on the radiation-induced magnetoresistance oscillations can be well reproduced. We also obtain the secondary peaks and additional maxima and minima observed in the experiments. 5,7

For a general treatment, we consider $N_e$ electrons in a unit area of a quasi-2D system in the x-y plane with a confining potential $V(z)$ in the z direction. These electrons, besides interacting with each other, are scattered by random impurities/disorder and by phonons in the lattice. To include possible elliptically polarized MW illumination, we assume that a uniform dc electric field $E_0$ and an ac field $E_t \equiv E_s \sin(\omega t) + E_c \cos(\omega t)$ of frequency $\omega$ are applied in the x-y plane, together with a magnetic field $B = (0, 0, B)$ along the z direction. In terms of the 2D CM momentum and coordinate of the electron system, 17,18,19 defined as $P \equiv \sum_j p_j$ and $R \equiv N_e^{-1} \sum_j r_j$ with $p_j \equiv (p_{jx}, p_{jy})$ and $r_j \equiv (x_j, y_j)$ being the momentum and coordinate of the jth electron in the 2D plane, and the relative electron momentum and coordinate $p'_j \equiv p_j - P/N_e$ and $r'_j \equiv r_j - R$, the Hamiltonian of the system can be written as the sum of a CM part $H_{cm}$ and a relative-motion part $H_{er}$, where $A(r)$ is the vector potential of the $B$ field.

We are concerned with steady transport under irradiation at a single frequency and focus on the photon-induced dc resistivity and the energy absorption of the HF field. These quantities are directly related to the time-averaged and/or base-frequency oscillating components of the CM velocity. At the same time, in an ordinary semiconductor the effect of higher-harmonic currents is safely negligible for the HF field intensities in the experiments. Hence, it suffices to assume that the CM velocity, i.e. the electron drift velocity, consists of a dc part $v_0$ and a stationary time-dependent part, $v(t) = v_0 + v_1 \cos(\omega t) + v_2 \sin(\omega t)$. This time-dependent CM velocity enters all the operator equations having couplings to impurities and/or phonons in the form of an exponential factor, which can be expanded in terms of Bessel functions $J_n(x)$:

$$e^{-i \xi \sin(\omega t - \varphi)} = \sum_{n=-\infty}^{\infty} J_n(\xi)\, e^{-i n (\omega t - \varphi)}.$$

Here $\xi \equiv \sqrt{(q \cdot v_1)^2 + (q \cdot v_2)^2}/\omega$ and $\tan \varphi = (q \cdot v_2)/(q \cdot v_1)$. On the other hand, for 2D systems having an electron sheet density of order $10^{15}\,\mathrm{m}^{-2}$, the intra-band and inter-band Coulomb interactions are sufficiently strong that it is adequate to describe the relative-electron transport state using a single electron temperature $T_e$.
Apart from this, the electron-electron interaction is treated only at a mean-field level under the random phase approximation (RPA). 18,19 For the determination of the unknown parameters $v_0$, $v_1$, $v_2$, and $T_e$, it suffices to know the damping force up to the base-frequency oscillating terms, $F(t) = F_0 + F_s \sin(\omega t) + F_c \cos(\omega t)$, and the energy-related quantities up to the time-averaged terms. We finally obtain the force and energy balance equations. Here $F_0$ is the time-averaged damping force, $S_p$ is the time-averaged rate of electron energy gain from the HF field, $\tfrac{1}{2} N_e e (E_s \cdot v_2 + E_c \cdot v_1)$, which can be written in a form obtained from the right-hand side of Eq. (8) by replacing the $q$ factor with $n\omega$, and $W$ is the time-averaged rate of electron energy loss due to coupling with phonons, whose expression can be obtained from the second term on the right-hand side of Eq. (8) by replacing the $q$ factor with $\Omega_q$, the energy of a wavevector-$q$ phonon. The oscillating frictional force amplitudes $F_s \equiv F_{22} - F_{11}$ and $F_c \equiv F_{21} + F_{12}$ are given by the expressions $F_{\mu\nu}$ with $\mu = 1, 2$. In these expressions, $\eta_\mu \equiv q \cdot v_\mu/(\omega \xi)$ and $\omega_0 \equiv q \cdot v_0$; $U(q)$ and $M(q)$ stand for the effective impurity and phonon scattering potentials; $\Pi_2(q, \Omega)$ and $\Lambda_2(q, \Omega) = 2\Pi_2(q, \Omega)[n(\Omega_q/T) - n(\Omega/T_e)]$, with $n(x) \equiv 1/(e^x - 1)$, are the imaginary parts of the electron density correlation function and the electron-phonon correlation function in the presence of the magnetic field, and $\Pi_1(q, \Omega)$ and $\Lambda_1(q, \Omega)$ are the real parts of these two correlation functions. The effect of interparticle Coulomb interactions is included in them to the degree of level broadening and RPA screening. The HF field enters through the argument $\xi$ of the Bessel functions in $F_0$, $F_{\mu\nu}$, $W$, and $S_p$. Compared with the case without the HF field ($n = 0$ term only), 20 we see that in an electron gas having impurity and/or phonon scattering (but otherwise homogeneous), a HF field of frequency $\omega$ opens additional channels for electron transitions: an electron can absorb or emit one or several photons and be scattered to a different state with the help of impurities and/or phonons. The sum over $|n| \geq 1$ represents the contributions of single- and multi-photon processes of frequency-$\omega$ photons. These photon-assisted scatterings help to transfer energy from the HF field to the electron system ($S_p$) and give rise to an additional damping force on the moving electrons. Note that $v_1$ and $v_2$ always exhibit CR in the range $\omega \sim \omega_c$, as can be seen from Eqs. (5) and (6) rewritten in the forms of Eqs. (9) and (10). Therefore, $\xi$ may be significantly different from the argument of the corresponding Bessel functions in the case without a magnetic field or with a magnetic field in the Voigt configuration. 20

Eqs. (4)-(7) can be used to describe the transport and optical properties of magnetically biased quasi-2D semiconductors subject to a dc field and a HF field. Taking $v_0 = (v_{0x}, 0, 0)$ in the x direction, Eq. (4) yields the transverse resistivity $R_{xy} \equiv E_{0y}/(N_e e v_{0x}) = B/(N_e e)$ and the longitudinal resistivity $R_{xx} \equiv E_{0x}/(N_e e v_{0x}) = -F_0/(N_e^2 e^2 v_{0x})$. The linear magnetoresistivity is then given by Eq. (11). The parameters $v_1$, $v_2$, and $T_e$ in Eq. (11) should be determined by solving Eqs. (5), (6), and (7) with vanishing $v_0$. We see that although the transverse resistivity $R_{xy}$ retains the classical form, the longitudinal resistivity $R_{xx}$ can be strongly affected by the irradiation.
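As a quick numeric illustration of the resonance condition discussed above, the sketch below evaluates the magnetic fields at which the predicted resistivity extrema $\gamma_c = j \mp \delta$ occur for a 100 GHz drive. The GaAs effective mass $m^* = 0.067\, m_e$ and the single value $\delta = 0.25$ are assumptions for illustration, not values fixed by the text.

```python
import numpy as np

# Physical constants (SI)
e = 1.602176634e-19      # electron charge [C]
m_e = 9.1093837015e-31   # electron mass [kg]

m_star = 0.067 * m_e     # assumed GaAs effective mass (not stated in the text)
f = 100e9                # microwave frequency, omega/2pi = 100 GHz as in Fig. 1
omega = 2 * np.pi * f

def B_at_gamma(gamma_c):
    """B such that gamma_c = omega / omega_c with omega_c = e B / m*."""
    return m_star * omega / (e * gamma_c)

delta = 0.25             # approximate phase offset reported for j = 3, 4, 5
for j in (1, 2, 3, 4, 5):
    B_max = B_at_gamma(j - delta)   # resistivity maximum near gamma_c = j - delta
    B_min = B_at_gamma(j + delta)   # resistivity minimum near gamma_c = j + delta
    print(f"j={j}: R_xx max near B={B_max*1e3:.1f} mT, min near B={B_min*1e3:.1f} mT")
```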
We calculate the unscreened $\Pi_2(q, \Omega)$ function of the 2D system in a magnetic field by means of the Landau representation: 17

$$\Pi_2(q, \Omega) = \frac{1}{2\pi l_B^2} \sum_{n, n'} C_{n,n'}(l_B^2 q^2/2)\, \Pi_2(n, n', \Omega), \qquad (12)$$

where $l_B = \sqrt{1/|eB|}$ is the magnetic length, $\Pi_2(n, n', \Omega)$ involves the Fermi distribution function, and $\mathrm{Im}\,G_n(\varepsilon)$ is the imaginary part of the Green's function, or the DOS, of Landau level $n$. The real-part functions $\Pi_1(q, \Omega)$ and $\Lambda_1(q, \Omega)$ can be obtained from their imaginary parts via the Kramers-Kronig relation. In principle, to obtain the Green's function $G_n(\varepsilon)$, a self-consistent calculation has to be carried out with all the scattering mechanisms included. 21 In this Letter we do not attempt a self-consistent calculation of $G_n(\varepsilon)$ but choose a Gaussian-type function centered at the energy $\varepsilon_n$ of the nth Landau level for the purpose of demonstrating the $R_{xx}$ oscillations, 22 with a broadening parameter $\Gamma = (2e\omega_c \alpha/\pi m \mu_0)^{1/2}$, where $\mu_0$ is the linear mobility at temperature $T$ in the absence of the magnetic field and $\alpha > 1$ is a semi-empirical parameter accounting for the difference between the transport scattering time, which determines the mobility $\mu_0$, and the single-particle lifetime. 4,8,10 The moderate microwave intensities required for the $R_{xx}$ oscillation in these high-mobility samples yield only slight electron heating, which is unimportant as far as the main phenomenon is concerned and is neglected for simplicity. We consider scatterings from remote impurities as well as from acoustic phonons. After solving for $v_1$ and $v_2$ from Eqs. (9) and (10), the magnetoresistivity $R_{xx}$ can be obtained directly from Eq. (11). At a lattice temperature of $T = 1$ K, the contribution from photon-assisted phonon scattering is minor; the role of acoustic phonons, however, becomes essential at elevated lattice temperatures.

Calculations were carried out for linearly polarized MW fields with multiphoton processes included. Fig. 1 shows the calculated longitudinal resistivity $R_{xx}$ versus $\omega/\omega_c \equiv \gamma_c$ under linearly polarized MW radiation of frequency $\omega/2\pi = 100$ GHz at four values of the amplitude: $E_s = 20$, 45, 65, and 80 V/cm. Shubnikov-de Haas (SdH) oscillations show up strongly on the high-$\omega_c$ side and gradually decay away as $1/\omega_c$ increases. All resistivity curves exhibit a pronounced oscillation with main period $\gamma_c = 1$ (they cross at the integer points $\gamma_c = 2, 3, 4, 5$). The resistivity maxima are located around $\gamma_c = j - \delta_-$ and the minima around $\gamma_c = j + \delta_+$, with $\delta_\pm \sim 0.23$-$0.25$ for $j = 3, 4, 5$, $\delta_\pm \sim 0.17$-$0.21$ for $j = 2$, and even smaller $\delta_\pm$ for $j = 1$. The magnitude of the oscillation increases with increasing HF field intensity for $\gamma_c > 1.5$. The resistivity becomes negative for $E_s = 80$ V/cm around the minima at $j = 1$, 2, and 3, for $E_s = 65$ V/cm at $j = 1$ and 2, and for $E_s = 20$ and 45 V/cm at $j = 1$. All these features, which are in fairly good agreement with the experimental findings, 1,3,4,5 are relevant mainly to single-photon ($|n| = 1$) processes. An anomaly appears in the vicinity of $\gamma_c = 1$, where the CR greatly enhances the effective amplitude of the HF field in photon-assisted scatterings and multiphoton processes show up. The amplitudes of the $j = 1$ maximum and minimum no longer change monotonically with field intensity. Furthermore, a shoulder appears around $\gamma_c = 1.5$ on the curves for $E_s = 45$ and 65 V/cm, and it develops into a secondary peak in the $E_s = 80$ V/cm case. This has already been seen in experiment (Fig. 2 in Ref. 5).
The valley between the $\gamma_c = 1.4$ and 1.8 peaks can descend to negative values as $E_s$ increases further (Fig. 1b). The appearance of the secondary peak is due to two-photon ($|n| = 2$) processes. The radiation-induced resistivity behavior at $\gamma_c < 1$ is shown more clearly in the $\omega/2\pi = 60$ GHz case. As seen in Fig. 1c, a shoulder around $\gamma_c = 0.4$-$0.6$ with a minimum at $\gamma_c = 0.6$ can be identified against the SdH oscillation background for all three curves; it is related to the two-photon process. With increasing MW strength there appears a clear peak around $\gamma_c = 0.68$ and a valley around $\gamma_c = 0.76$. This peak-valley pair is mainly due to the three-photon ($|n| = 3$) process. In the 40 GHz case, a similar peak and valley also show up (Fig. 1d). Qualitatively, the main oscillation features come from the symmetry property of the DOS function in a magnetic field. Since $G_n(\varepsilon - j\omega_c) = G_{n-j}(\varepsilon)$ for any integer $j$, the impurity contribution to $R_{xx}$ from the $n$-photon process, which is related to the weighted summation of the derivative of the $\Pi_2$ function over all the Landau levels at frequency $n\omega$ [Eq. (11)], has an intrinsic periodicity characterized by $n\omega = j\omega_c$. The main oscillation of $R_{xx}$ shown in Fig. 1a relates to the single-photon process and is characterized by $\omega = j\omega_c$. We have also performed the calculation using a Lorentz-type DOS function and find that, although the oscillation amplitude and the exact peak and valley positions are somewhat different, the basic periodic feature of the radiation-induced magnetoresistivity oscillation remains.
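A small numeric sketch of the shift symmetry invoked above: with equally spaced Landau levels $\varepsilon_n = (n + 1/2)\omega_c$ and any level-independent broadening, shifting the energy argument by an integer multiple of $\omega_c$ relabels one Landau level as another, which is the origin of the $n\omega = j\omega_c$ periodicity. The unit-normalized Gaussian used here is an assumption for illustration; the paper's exact prefactor is not recoverable from the text, and the sign of the shift depends on the convention chosen for $\varepsilon_n$.

```python
import numpy as np

omega_c = 1.0    # cyclotron energy (arbitrary units)
Gamma = 0.3      # broadening parameter

def G_im(n, eps):
    """Gaussian-broadened DOS of Landau level n, centered at eps_n = (n + 1/2) * omega_c.
    Unit-normalized Gaussian assumed for illustration."""
    eps_n = (n + 0.5) * omega_c
    return np.exp(-2 * (eps - eps_n) ** 2 / Gamma ** 2) * np.sqrt(2 / np.pi) / Gamma

# Shifting the energy argument by j*omega_c maps level n onto level n - j
# (for any integer j; the sign convention follows from eps_n increasing with n).
eps = np.linspace(0, 10 * omega_c, 1001)
n, j = 7, 3
assert np.allclose(G_im(n, eps + j * omega_c), G_im(n - j, eps))
print("Shift symmetry holds: the n-photon contribution repeats whenever n*omega = j*omega_c.")
```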
2018-04-03T05:05:21.171Z
2003-04-30T00:00:00.000
{ "year": 2003, "sha1": "7981ca1d38976f4964b7a1bbe936756592751f45", "oa_license": null, "oa_url": "http://arxiv.org/pdf/cond-mat/0304687", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "7981ca1d38976f4964b7a1bbe936756592751f45", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Medicine", "Physics" ] }
62801006
pes2o/s2orc
v3-fos-license
Potential Application of Pheromones in Monitoring, Mating Disruption, and Control of Click Beetles (Coleoptera: Elateridae)

Wireworms, the larval stage of click beetles (family Elateridae), are serious soil-dwelling pests of small grain, corn, sugar beet, and potato crops globally. Since the 1950s, conventional insecticides such as lindane provided effective and inexpensive protection from wireworms, and little integrated pest management (IPM) research was conducted. The removal of these products from the agricultural market, particularly lindane, has resulted in increasing levels of wireworm damage to small grain, corn, and potato crops. Wireworm damage has become an increasing problem for growers, so the demand for a meaningful risk assessment and useful methods to restrict damage is increasing. However, due to the cryptic habitat of the wireworms, pest control is very difficult and leads to unsatisfactory results. The prospective appropriateness of sex pheromone traps for management strategies against wireworm populations was first suggested by experiments in Hungary and Italy. Simultaneously, considerable work has been done on the identification and use of pheromone traps to monitor populations of click beetles, mostly in European and former Soviet Union countries. In this paper, we review the work done on monitoring click beetles considered pests and how pheromones can be used in IPM to monitor and control wireworms/click beetles. We also summarize field tests of pheromone-baited traps for mating disruption and control.

Introduction

Wireworms are the larval forms of click beetles (Coleoptera: Elateridae), inflicting damage to many important crops around the world, primarily through subterranean feeding on plant roots and tubers [1]. Wireworms are serious soil-dwelling pests of small grain, corn, sugar beet, and potato crops globally [2]. About 9,300 species of click beetles have been described worldwide [3]. In North America, 885 species in 60 genera have been identified [4]. These species are well known as widely distributed agricultural and horticultural pests [5]. Many of these species have affected the crop industry and rank among the most important soil-dwelling agricultural pests worldwide [6]. Economic damage to field crops caused by wireworms is usually rare; however, populations of click beetle larvae can reach numbers high enough to cause economic damage [7]. During the larval stage, which may extend over 5 years, the larvae feed on decaying matter or on the roots of crops such as wheat and rye [7].

Until 2009, wireworms on agricultural crops were controlled through the use of lindane (gamma-HCH). After that, lindane was no longer permitted due to health concerns: the insecticide was found to cause harmful effects such as convulsions, vertigo, abnormal EEG patterns, cancer, endocrine disruption, and liver toxicity [8].

In recent years, wireworm damage has become an increasing problem for growers, and the demand for meaningful risk assessment and useful methods to restrict damage is increasing. However, due to the cryptic habitat of the wireworms, pest control is very difficult and leads to unsatisfactory results [9].
Soil treatment with insecticides or fumigants may be used to control wireworms effectively [10]. However, these fumigants or residual chemicals are costly and may build up large amounts of chemicals in the soil [11]. Insecticide seed treatment is reported to be cost-effective in protecting seeds and young plants because of the small amounts of insecticide used and the low cost of application [12].

Managing wireworms (Figure 2) requires regular and consistent monitoring and attentive field management. There have not been many options using biopesticides, as little research has been done. For example, Cherry and Nuessly [13] reported that azadirachtin did not cause mortality or antifeeding responses or change the growth rate of wireworms; however, azadirachtin-treated soil was repellent to wireworms for up to 17 days after application.

Chemical communication plays a very important role in the lives of many insects [14]. Pheromones, chemicals with an intraspecific function, are naturally occurring substances used for communication between organisms. Traps baited with sex or aggregation pheromones have been used to monitor and control the populations of many insect pests [15]. The composition of the sex pheromone produced by female click beetles has been identified in several species, and the use of synthetic pheromone blends to control click beetles is promising [5,16].

In the USA, the identification of pheromone compounds for click beetles (wireworms) (Figure 3) was initiated in 1968. For example, valeric acid (pentanoic acid) from Limonius californicus [17] and caproic acid (hexanoic acid) from L. anus [18] were identified as pheromone compounds. However, little work was carried out after that, because lindane, also known as gamma-hexachlorocyclohexane, was available and had been used as a seed treatment. This chemical was effective against wireworms and gave good control. However, the USEPA and WHO both classify lindane as moderately acutely toxic, and its use as an insecticide was banned in 2009 in the USA and other parts of the world. Since no substitute for these chemicals is available, wireworms have become serious pests causing damage to potatoes, wheat, barley, vegetables, and other major crops throughout the world. Wireworms can attack both spring- and fall-seeded crops. In spring, when the soil warms to 50 °F, wireworm larvae begin to migrate upward, almost to the soil surface, where they feed on newly planted seed, seedlings, and roots. When soil temperatures rise to 80 °F, the larvae seek cooler soil a foot or two below the surface. Although seed treated with Gaucho (imidacloprid) gave some level of control, monitoring of these pests has been very challenging. Pheromone-baited traps are useful in the monitoring and control of insect pests [19]. Pheromone compounds have not yet been identified for North American species of wireworms; nevertheless, the compounds have been identified for European wireworm species [20,21]. Evidence for the existence of long-range sex pheromones within the click beetles has been demonstrated for many species [7,22].

In this review, we summarize the click beetle pheromone sources and the practice of using pheromones in monitoring and controlling click beetles. The aim is to present the identified and synthesized pheromones and the methodology employed.
Pheromone Source and Gland Extract

In most insects, pheromones are produced by glandular epidermal cells concentrated in discrete areas beneath the cuticle, but in some species, gland cells are scattered through the epidermis of different parts of the body. For click beetles, the male attractant pheromone is produced by female pheromone glands located at the last abdominal segment [23]. The click beetle pheromone gland resembles a paired ball-like structure in the abdomen [16]. Merivee and Erm [23] studied sex pheromone gland morphology in female elaterid beetles and gave a brief morphological description of the female reproductive system of Agriotes obscurus. They found that, for this species, the paired pheromone gland is located in the 8th segment of the female abdomen and is attached to the sternite by muscular fibers. The reservoirs of the gland are connected with the intersegmental membrane by means of thin winding excretory ducts, which are dilated before opening on the body surface, forming pseudo-ovipositor pockets. The excretory ducts are spirally surrounded by thin muscular fibers that are functionally related to the excretion of the secretion. The gland reservoirs in sexually mature females were 0.9-1.3 mm long and 0.25-0.35 mm wide. The amount of secretion in the two reservoirs of one female pheromone gland was 30-40 nL [23].

For gland extraction, sexually mature adult beetles are usually collected and sorted by sex. The female sex pheromone gland is extracted by carefully piercing the gland with a fine glass capillary and collecting the liquid inside into the capillary [24,25]. The extracts (the liquid samples inside the capillary), which are usually colorless with a mildly unpleasant smell, are then analyzed by gas chromatography linked with mass spectrometry (GC-MS) in order to identify the volatiles released from the pheromone glands.

Agriotes rufipalpis Brullé. Not much was known about the pheromone composition of this species. The study by Tóth et al. [6] revealed that no reliable analysis of pheromone gland extracts could be conducted, as they failed to collect female A. rufipalpis in large enough numbers. However, they found that males of this species were attracted to geranyl hexanoate in the field.

3.6. Agriotes sordidus Illiger. Tóth et al. [6] tried traps baited with geranyl hexanoate and found that A. rufipalpis males were attracted to these compounds. They then extracted the female pheromone glands and found from the analysis that geranyl hexanoate and (E,E)-farnesyl hexanoate were the major peaks at the respective retention times.

Synthesis of Click Beetle Pheromones Used in the Field

The main synthetic click beetle sexual attractants, based on the studies of European scientists [6,15,27,34,35], were synthesized and applied in the field. In this review, we summarize the attractiveness of the synthetic sex pheromone compounds used for each click beetle species considered a pest.

Agriotes brevis (Candeze). The study by Tóth et al. [6] showed that geranyl butanoate and (E,E)-farnesyl butanoate were the main compounds found in the female pheromone gland extract and proved active in the field, attracting males to traps baited with these compounds. Their study showed that the presence of both semiochemical compounds could efficiently attract male A.
brevis in Austria, Bulgaria, Italy, and Slovenia. However, in Hungary, Romania, and Croatia, traps baited with these compounds caught another species (A. sputator) more than A. brevis. They hypothesized that the geranyl butanoate content played a role in catching A. sputator, because geranyl butanoate also attracts this species.

Agriotes lineatus L. Geranyl octanoate was used as a lure to attract A. lineatus because this compound has been proven to be the main pheromone component for this species [6,7,31,32]. Geranyl octanoate alone attracted fewer individuals than combinations of more than one compound. In Switzerland, trap baits containing 10% geranyl butanoate added to geranyl octanoate attracted a total of 273 individuals, whereas no individuals were found in traps baited with geranyl octanoate only [15]. The 1:10 mixture of geranyl butanoate and geranyl octanoate has been applied in field trials in Europe and reported to be efficient in capturing A. lineatus in the United Kingdom, Germany, Austria, Switzerland, Italy, Slovenia, Croatia, Romania, Bulgaria, Greece, Spain, France, and Hungary [6]. In North America, this 1:10 mixture also showed good results in capturing A. lineatus in Canada [36].

Tóth et al. [20] tried these two compounds as lures in the field to capture adults, but none of this species was caught in the traps. Later, Vuts et al. [37] analyzed volatiles collected from air entrainment samples and found that these two compounds were detected neither in gland extracts nor in headspace samples of A. proximus females. Strangely, when Tóth et al. [20] applied geranyl butanoate and geranyl octanoate at a 1:1 ratio in order to capture A. lineatus, the blend also captured a large number of A. proximus. These results were mystifying because chemical studies showed that geranyl butanoate and geranyl octanoate were detected only in trace amounts in A. proximus female pheromone glands [6,26,31]. However, when the blend of geranyl butanoate and geranyl octanoate was applied in the field, the 1:10 ratio was revealed to capture more adults than the 1:1 ratio [37].

Agriotes litigiosus Rossi. Geranyl isovalerate was reported to be the main pheromone of this species [22,27,32]. In 1983, Yatsynin and Rubanova [38] combined (E,E)-farnesyl isovalerate or (E)-8-hydroxygeranyl 1,8-diisovalerate with geranyl isovalerate and found a synergistic effect that enhanced the capture of A. litigiosus var. tauricus. However, Tóth and Furlan [27] found that this combination did not influence catches in any of the morphological forms of A. litigiosus. Therefore, only traps baited with the single compound (geranyl isovalerate) were used in a Europe-wide trapping test [27]. The results of this trial did not prove promising: in some parts of Europe, for example Italy, Austria, and Greece, many individuals were captured, but in other parts, such as Croatia, other species were captured in the traps baited with this compound. There was no consistency in capturing A. litigiosus using this main pheromone compound, and repeated studies should be conducted to prove the efficacy of traps baited with geranyl isovalerate.

Agriotes obscurus L. Geranyl hexanoate was reported to be the dominant pheromone of Agriotes obscurus, and this single compound, without any additions, could attract A.
obscurus efficiently [31,32]. However, an earlier study by Borg-Karlson et al. [7] revealed another dominant pheromone compound, identified as geranyl octanoate. Later, Yatsynin et al. [26] and Tóth et al. [6] found the same result: geranyl hexanoate and geranyl octanoate were the dominant pheromone compounds, and the presence of both was needed to attract adults. Studies were conducted on how to optimize the ratio between these two compounds. Tóth et al. [6] found no significant difference among 2:1, 1:1, and 1:2 mixture ratios. Therefore, when deploying lured traps in the field, traps baited with a 1:1 mixture of the two compounds were adequate to capture large numbers of A. obscurus. This practice was effective in capturing adults in the United Kingdom, Germany, Switzerland, Italy, Slovenia, Croatia, Romania, and also Canada [27].

4.6. Agriotes rufipalpis Brullé. Geranyl hexanoate was found to be the dominant pheromone for this species. This compound was used as a lure by Tóth et al. [39]. The results showed that traps baited with geranyl hexanoate performed well in capturing A. rufipalpis in Austria, Serbia, Greece, Romania, and Hungary.

Agriotes sordidus Illiger. Analysis of female pheromone gland extracts showed dominant peaks at the retention times of geranyl hexanoate and (E,E)-farnesyl hexanoate [6]. However, traps baited with geranyl hexanoate alone and traps baited with the combination of geranyl hexanoate and (E,E)-farnesyl hexanoate did not differ significantly in catching adults [27]; no synergistic effect between the two compounds occurred. Traps baited with geranyl hexanoate only have been used in Italy, France, and Spain and showed good results in capturing large numbers of adult males [27].

Yatsynin et al. [26] also reported a synergistic effect when adding (E,E)-farnesyl hexanoate alone, or together with geranyl propionate, to geranyl butanoate as a blend. However, when these blends from both studies were tried again by Tóth [16], geranyl butanoate alone, without the addition of any other compounds, worked best in catching the adults. Traps with geranyl butanoate lures were very effective in capturing adults in northern and central Europe and in Canada [27].

4.9. Agriotes ustulatus Schaller. The dominant compound in the pheromone extracts of this species was (E,E)-farnesyl acetate [6,26,31,32]. Traps with (E,E)-farnesyl acetate lures performed well in attracting the adult beetles in Europe. Some species are accidentally captured by certain compounds: geranyl butanoate, for example, is used to capture A. sputator but also attracts A. proximus. However, these accidentally captured species were not reported to be pests.
Pheromone-Based Monitoring and Control

The application of traps baited with sex pheromones to lure male insects has been an excellent tool for monitoring pest populations in surveys and integrated pest management (IPM). Many insect sex pheromones can be synthesized and are conventionally used in pest monitoring and control. The advantages of using pheromone traps are (1) the ability to detect early pest infestation, for example, the first detection of migratory pests, (2) the ability to easily define areas of pest infestation, (3) the ability to track the establishment of pest populations, and (4) support for management decision making [40]. However, an effective pheromone trapping system that yields large numbers of pest catches requires careful preparation, handling, and selection of pheromone traps and lures, as well as proper trap placement [40]. Females of some Agriotes spp. are known to produce sex pheromones [41]. Oleschenko et al. [24] reported that male Agriotes litigiosus were attracted to geranyl isovalerate and male A. gurgistanus to geranyl butyrate [24]. Female sex pheromones were identified as n-dodecyl acetate [42] and as (E)-9,11-dodecadienyl butyrate and (E)-9,11-dodecadienyl hexanoate in Melanotus sakishimensis Ohira [43]. Nagamine and Kinjo [44] reported on the population parameters of M. okinawensis using water pan traps baited with synthetic sex pheromone in the field. Furthermore, the density and dispersal distance of this species were successfully estimated by Kishita et al. [45] using mark-recapture experiments over an agricultural field on Ikei Island (Japan).

Role of Pheromone Traps in Monitoring Click Beetles

Pheromone-baited traps to attract click beetles have been used since 1997 [46]. Different types of traps baited with pheromones were tried for capturing beetles, and a trap model suitable for catching the different species was developed [46]. Initially, bottle traps, which were funnel traps made at home from 2-liter transparent plastic bottles, served as the prototype traps. Then VARb traps were developed using the plastic CSALOMON VAR funnel traps. After that, TAL traps were introduced for capturing click beetles. Development continued until the YATLOR traps were made (Figure 1). This trap design was made of plastic at the Italian laboratory and was similar in shape and size to the "Estron" trap used earlier [32,47]; it was modified to prevent the adults from escaping. The traps were further developed and modified until YATLOR funnel traps were made, with a bottom part like the YATLOR prototype and an upper part resembling the bottle trap [36]. Each trap design performs differently in capturing adult beetles. With the TAL and YATLOR designs, the beetles can get into the traps by crawling in; they do not need to fly in. Conversely, the BOTTLE and VARb traps require the beetles to fly in (flying traps). The crawl-in traps (TAL and YATLOR) proved much less effective in catching the flying species, while the BOTTLE and VARb traps were shown to be unsuitable for catching Agriotes brevis and A. obscurus [36]. Ivezić et al.
[48] conducted a study on the use of pheromone traps for detecting click beetle population levels in East Croatia. They used traps baited with pheromone compositions for seven species from the genus Agriotes. Their results were similar to those of Vernon and Tóth [36] in that the VARb trap was suitable only for flying species, while the YATLOR trap was suitable for crawling species. However, YATLOR funnel traps proved effective for monitoring all the species [36,48].

The effectiveness of the pheromone traps in different areas with different populations was also evaluated, mostly in several European countries. The efficacy of the new Agriotes sex pheromone traps in detecting wireworm population levels was tested in different European countries [49]. In this study, individual traps were baited with lures for one of the following species: A. lineatus, A. obscurus, A. sputator, A. sordidus Illiger, A. rufipalpis, A. brevis, A. litigiosus, and A. ustulatus. The researchers used bait traps and soil sampling to estimate the larval populations. Their results revealed that the pheromone traps were able to detect the dominant species and, moreover, were selective enough to distinguish A. sputator and A. brevis despite the fact that these two species are systematically very close. They stated that sex pheromone traps proved to be a much more sensitive tool than soil sampling and bait traps for larvae. Moreover, for all species, the traps were able to detect wireworm populations below those detectable by soil sampling and bait trapping [49]. However, some species of Agriotes click beetles, for example, A. lineatus, A. obscurus, and A. sputator, were found to respond to pheromone traps differently. Hicks and Blackshaw [50] revealed significant differences in recapture rate between species and release distances. Thus, species-specific pheromone traps used to monitor click beetles may not show equal catches of adult males for each species. If pheromone traps are placed at a minimum spacing of 40 m, the sampling areas for A. lineatus and A. obscurus will overlap, which suggests that trap interference could occur when spacing between pheromone traps is small [50]. Blackshaw and Vernon [9] also demonstrated that spatio-temporal interference between traps may affect the detection of spatial structure. They suggested that the optimal trap spacing for A. obscurus should be in the range of 29-59 m, and greater for A. lineatus. Therefore, we can no longer assume that all pheromone traps operate with similar physical capture properties [50]. Iwanaga and Kawamura [51] reported that funnel-vane traps captured significantly more males of both M. sakishimensis and M. okinawensis than did water pan traps. Wind also plays a major role in trap catches. For example, Kawamura et al. [52] reported that on Miyako Island, on calm days, funnel-vane traps and water pan traps with a vane captured significantly more M. sakishimensis males than funnel traps and water pan traps. On windy days, funnel-vane traps and water pan traps with a vane captured more males than funnel and water pan traps, but the differences were not significant. The authors also reported on traps placed at ground level (2 cm) and above the ground.

Mass Trapping of Click Beetles

Using mass pheromone trapping to control male click beetles does not seem to work well in the field. Sufyan et al.
[53] studied the effect of male mass trapping of Agriotes species on wireworm abundance and potato tuber damage. They found that male mass trapping is not a suitable approach to reduce wireworm populations in the soil, because the relationship between pheromone trap catches of male beetles and wireworm populations in the soil is still not clear. Therefore, a prediction of potential wireworm damage based on male trapping is not yet possible [54]. However, current pheromone traps are sensitive enough to detect low-density populations, and trapping systems can indicate to growers the presence or absence of wireworm infestation [35]. Mass trapping experiments by Arakaki et al. [55] to control Melanotus sakishimensis Ohira, with a trap density (0.57 trap per ha) close to the manufacturer standard (0.67 trap per ha), showed no successful reduction in the yearly trap catches.

On the other hand, there are some encouraging results for mass trapping. Nagamine and Kinjo [44] reported success in mass trapping to control Melanotus okinawensis in sugarcane fields in Okinawa (Japan) from 1985 to 1989. Since then, mass trapping has been conducted to control this species in various regions, with trap densities of 0.67-1 trap per ha. Despite these mass trapping controls on several islands over a span of 10 yr, sufficient control effects were not achieved. In a mass trapping experiment to control M. okinawensis with a high trap density (10.6 traps per ha) on a small island over 6 yr, a great reduction in the yearly trap catches and in the wild population was observed [56].

Mating Disruption for Control of Click Beetles

There seems to be only one example of a mating disruption study conducted for wireworms. Arakaki et al. [57] conducted mating disruption experiments to control M. okinawensis on Minami-Daito Island (Japan), showing that the mean total catches obtained by monitoring traps in the sugarcane field decreased by 96.1% in 2001 from the previous year. The mean total trap catches in the treated area further decreased by 74.0% from 2001 until 2007 as a cumulative effect. Simultaneously, the number of adults captured by hand decreased from 4.7 per sugarcane field in 2001 to 0.5 in 2007 (an 89.3% reduction), whereas captures in the untreated area did not show such a decrease. The mating rates were significantly lower in the females captured in the treated area (14.3-71.4%) than in the untreated area (96.9-100%). These results indicated that mating disruption effectively reduced an isolated population of M. okinawensis. The authors concluded that, for M. okinawensis, mating disruption using synthetic sex pheromone may be preferable to mass trapping from a practical point of view.

Click Beetles and Trapping Protocols Applied in the Field

Pheromone trapping protocols have been established by several researchers. For example, Furlan et al. [49] evaluated the effectiveness of pheromone traps in different areas with different populations. They installed the traps in the field with each trap separated 30 meters from the next. The pheromone lure should be replaced every 4-6 weeks [49,53], and the traps should be inspected once or twice a week [49,53]. All specimens should be removed from the traps at each observation and retained.
Conclusion

Pheromones of click beetles have been identified, especially for the dominant species in Europe [6,26]. The synthetic pheromones have been used in traps and are able to detect click beetle populations, which is useful for monitoring at the field scale. However, controlling click beetles by mass trapping with pheromone baits is still questionable, owing to the difficulty of interpreting the correlation between pheromone trap catches of male beetles and wireworm populations in the soil. Nevertheless, growers can still benefit from current pheromone traps for detecting the presence or absence of wireworm infestation in soil, even at low population densities.

Future Studies

The pheromone bait compositions to attract click beetles have been optimized. However, most click beetle pheromone research has been conducted in European countries [6,21,28] and some parts of Canada [9]. As a result, highly effective pheromone baits are now available for all the important pest click beetle species, mainly in Europe. Experiments on the use of pheromones with click beetles have so far been scarce in the United States and Asian countries. More studies should be conducted in these parts of the world, because different factors may produce results unlike those in European countries; for example, European click beetle populations might not be the same as those in the United States or Asian countries. Moreover, more studies are needed to determine the actual range of attractiveness, the correlation between males captured and the number of females, and the correlation between adult trap catches and wireworm populations under different geographic and climatic conditions. The communication disruption system appears to be effective for sugarcane wireworm management; therefore, further studies on this approach for the management of wireworms on other crops would be helpful and possibly an effective tool for managing wireworms.

Figure 1: YATLOR funnel trap currently used for monitoring click beetles.

Figure 2: Wireworm larva, which causes damage to crops.
Effect of Dissipation on the Moonpool-Javelin Wave Energy Converter
: In this work, the hydrodynamic performance of a novel wave energy converter (WEC) configuration which combines a moonpool platform and a javelin floating buoy, called the moonpool-javelin wave energy converter (MJWEC), was studied by semianalytical, computational fluid dynamics (CFD), and experimental methods. A viscous term is added to the potential-flow solver to obtain the hydrodynamic coefficients. The wave force, the added mass, the radiation damping, the wave capture, and the energy efficiency of the configuration were assessed, in the frequency and time domains, by a semianalytical method. The CFD results and the semianalytical results were compared in the time domain by introducing nonlinear power take-off (PTO) damping; additionally, the viscous dissipation coefficients under potential flow could be confirmed. Finally, a 1:10 scale model was physically tested to validate the numerical model and further prove the feasibility of the proposed system.

Introduction
The need to replace traditional fossil fuels with clean energy (including wind, tidal, and wave energy) for power generation has been highlighted over the last ten years. Among the different kinds of marine energy, wave energy has become one of the most promising options. Wave energy converters (WECs) extract wave energy through the periodic resonance excited by slow periodic waves; they convert the wave energy in sea water into electrical energy. As energy conversion devices, they can adapt to the wave conditions of the selected sea area, absorb wave energy stably and reliably, and realize the energy conversion to the maximum extent. Several classic devices based upon the theory of wave conversion are the Archimedes Wave Swing [1], CETO [2], the IPS Buoy [3], and the Wavebob [4].

The problem of hydrodynamic interaction between waves and cylindrical structures has been the subject of academic research because of its wide and important applications in various engineering projects. Several conceptual studies have been carried out to improve efficiency. Ramadan et al. [5] designed a new float consisting of two parts, a hollow cylinder and an inverted cup attached to its bottom. Mavrakos and Katsaounis [6] investigated the effects of different floater geometries by analyzing the performance of tight-moored vertical axisymmetric WECs; the results showed the effect of the hydrodynamic characteristics of each specific float geometry on the hydroelectric performance characteristics studied. Zang and Zhang [7] studied the power performance of a heaving-buoy WEC with power take-off (PTO) damping under regular and irregular waves.

To account for viscous effects, a dissipative term can be introduced into the free-surface boundary conditions of potential-flow theory, where it acts as a fictitious dissipative force that suppresses the kinetic energy associated with the wave elevation; when resonance occurs in a confined region, the amplitude of the fluid motion is thereby reduced. Duan et al. [8] improved the method and calculated the wave loads associated with the wave elevation by introducing uncertainties to the free-surface conditions in the case of a set of cylinders. Chen et al.
[9] applied a semianalytical method based on eigenfunction matching to study the wave diffraction of cylindrical structures with a moonpool by introducing dissipation into the free-surface conditions. Liu et al. [10] introduced a nonlinear viscous dissipation term in the modeling of the free surface inside the moonpool structure and derived the relationship between the nonlinear dissipation coefficient and the resonant frequency of the moonpool. The CFD method is, however, more accurate than the above approaches; Lo et al. [11] used a CFD approach to analyze the performance of an air-blower wave power generation device and to calculate the power output of two buoys. Jin et al. [12] took the nonlinear viscosity into account to model the WEC hydrodynamics near resonance conditions. The experimental approach is a useful method for conducting feasibility studies on newly developed WECs. Ren et al. [13] analyzed the combination of a monopile wind turbine and a swing-type WEC and derived the optimal tuning of the PTO damping by using a coordinated numerical and experimental analysis. Gao et al. [14] studied three floating wind-wave hybrid concepts, comparing their energy efficiency and economic feasibility. Wan et al. [15][16][17] studied the green water phenomena of the STC concept, and a model test was used to simulate these nonlinear phenomena as well as the survivability of the device in extreme sea conditions.

It is difficult to use conventional methods to explore the coupling resonance of two floating bodies from a mechanistic standpoint and to optimize the PTO damping to improve the power output. Few studies have taken viscous effects into account in the potential-flow approach as applied to the fundamental physics of the coupling resonance between two bodies. In this paper, a semianalytical method that introduces viscous dissipation was used to obtain a more accurate and efficient calculation. Applying the Impulse Response Function (IRF) method, the motion responses of the moonpool-javelin wave energy converter (MJWEC) were determined by time-history analysis. Theoretical analysis, numerical calculation, and model testing were combined and compared, highlighting the underpinning physics of the coupling resonance between the two bodies.

Mathematical and Numerical Model
This paper is focused on the MJWEC, as seen in Figure 1: a moonpool platform and a javelin floating buoy on the water surface, connected by the PTO system. The motion responses of the moonpool and the javelin float to waves have multiple degrees of freedom (DOFs). In this paper, only the heave DOF was considered, as it is the parameter most relevant to the power take-off of the device. A sketch of the structure layout for the WEC is shown in Figure 1. The javelin float model is a vertical axisymmetric floating body consisting of a cylinder and a "Berkeley Wedge" bottom. The surfaces could be obtained by a fourth-order polynomial shape function. According to Madhi et al. [18], we chose the appropriate buoyancy and sharpness, taking A = 0.05926.
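As a rough illustration of how such a shape function fixes the buoyancy, the sketch below evaluates a quartic profile and integrates the displaced volume by the disc method. The polynomial coefficients are placeholders only (the actual "Berkeley Wedge" coefficients of Madhi et al. are not reproduced here), so this is a sketch of the procedure under assumed coefficients, not the paper's geometry.

```python
import numpy as np

def wedge_profile(z, draught, radius, a4=0.05926):
    """Hypothetical quartic shape function r(z) for the float bottom.

    a4 is a placeholder sharpness coefficient; the true polynomial of
    Madhi et al. [18] is not reproduced here.
    """
    zeta = z / draught                      # normalised depth: 0 at keel, 1 at waterline
    return radius * (a4 * zeta**4 + (1.0 - a4) * zeta)

def displaced_volume(draught, radius, n=2000):
    """Displaced volume of the axisymmetric body, V = integral of pi*r(z)^2 dz."""
    z = np.linspace(0.0, draught, n)
    r = wedge_profile(z, draught, radius)
    return np.trapz(np.pi * r**2, z)

# Buoyancy rho*g*V for an illustrative 2 m draught, 2 m radius float
V = displaced_volume(2.0, 2.0)
print(f"displaced volume = {V:.2f} m^3, buoyancy = {1025.0 * 9.81 * V / 1e3:.1f} kN")
```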
Diffraction Problems
As shown in Figure 1, we divided the flow field around the javelin float into four parts. Within the four parts, we divided the subdomain P into N parts, and the diffraction wave velocity potential in each subdomain could be written as a series. By substituting the boundary conditions, we obtained the series expressions of the diffraction velocity potential of E and M, where H(·) = J(·) + iY(·), and I(·) and K(·) are the modified Bessel functions of the first and second kind, respectively. The eigenvalues k_α^β and the vertical eigenfunctions Z_α^β were obtained from the corresponding dispersion relations. By substituting the boundary conditions, we likewise obtained the series expression of the diffraction velocity potential of B.

When solving the diffraction velocity potential of P, the potentials of I_1 and I_P (P = 2, 3, ..., N) satisfy different boundary conditions, so they are discussed separately. Using the boundary conditions, for P = 1, 2, 3, ..., N we obtained the radial combinations

K_{m,1}, I_{m,1}, K_{m,2}, I_{m,2} = K(λ_m R_{P-1}), I(λ_m R_{P-1}), K(λ_m R_P), I(λ_m R_P),

which enter through the Wronskian-type denominator K(λ_m R_{P-1}) I(λ_m R_P) - K(λ_m R_P) I(λ_m R_{P-1}).

Within these formulae, the Fourier coefficients α_{(E,M,B,P)0} and α_{(E,M,B,P)m} (m ≥ 1) remain to be solved. They were determined by the continuity conditions of the velocity potential and its derivative across adjacent subdomains, together with the lateral surface conditions: the continuity conditions at each interface between adjacent subdomains and the impenetrability conditions on the side surface of the body. The continuity conditions for the velocity potential and its partial derivative at the coupling interfaces of the javelin float bottom domain, r = R_P, -h ≤ z ≤ h_P - h (P = 1, 2, ..., N-1), were imposed, and the number of velocity potential terms retained at each coupling interface had to satisfy both the continuity condition and the condition on the float surface. Using Green's function, the orthogonality of the vertical eigenfunctions was applied to obtain the analytical expression of the diffraction velocity potential. The wave force on the moonpool platform and the javelin float was then obtained from the Bernoulli equation.
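The radial eigenfunctions appearing in these expansions can be evaluated directly with standard special-function libraries. The sketch below, using SciPy, assembles the Hankel-type exterior function H(·) = J(·) + iY(·) and a K/I combination of the kind used in the annular subdomains; the normalisation chosen here is illustrative and not necessarily the one used in the paper.

```python
import numpy as np
from scipy.special import jv, yv, iv, kv

def hankel1(n, x):
    """H_n(x) = J_n(x) + i Y_n(x): outgoing radial eigenfunction in the exterior domain."""
    return jv(n, x) + 1j * yv(n, x)

def annular_basis(m, lam, r, r_inner, r_outer):
    """K/I combination for an annular subdomain between r_inner and r_outer.

    Normalised so the function is 0 at r_inner and 1 at r_outer
    (an assumed, illustrative normalisation).
    """
    num = kv(m, lam * r_inner) * iv(m, lam * r) - kv(m, lam * r) * iv(m, lam * r_inner)
    den = kv(m, lam * r_inner) * iv(m, lam * r_outer) - kv(m, lam * r_outer) * iv(m, lam * r_inner)
    return num / den

# Example: evaluate the m = 0 exterior and annular eigenfunctions at r = 3 m
print(hankel1(0, 0.5 * 3.0))
print(annular_basis(0, 0.8, 3.0, 2.0, 4.0))
```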
Radiation Problems
Similarly to the diffraction problem, and based on the uniform propagation of energy away from the oscillating bodies, the radiation velocity potential could be expressed as a series. Substituting the boundary conditions, we obtained expressions in which Q_0(r) and K_{m,1}, I_{m,1}, K_{m,2}, I_{m,2} are as defined for the diffraction problem, together with a particular solution for each radiation velocity potential. In the coupling plane of each subdomain, the boundary conditions were consistent with those of the diffraction problem; the impermeability conditions were also met owing to the continuity of the velocity potential derivative at the coupling interface. Using the Green's function method and the orthogonality of the eigenfunctions, we obtained the unknown Fourier coefficients for the radiation velocity potential and hence its series expressions, where μ and λ denote the added mass and radiation damping. The subscript MJ stands for the radiation of the moonpool platform acting on the javelin float; the remaining subscripts are read analogously.

Potential-flow theory does not take viscosity and energy dissipation into account. The hydrodynamic response of the floating body computed under ideal-fluid conditions is therefore too large, with significant distortion of the motion near resonance. In order to improve the accuracy of the potential-flow results, a damped-cover method based on a pseudo-ideal-fluid assumption was used to achieve a viscous correction, by adding damping between the moonpool platform and the javelin float.

Motion Equation and Capture Width Ratio
As shown in Figure 2, the MJWEC studied in this paper is connected by a PTO damping system to form a two-degree-of-freedom damped vibration system. The MJWEC is in equilibrium in still water. When subjected to waves it undergoes a heave motion, and the displacement caused by the heave motion is the distance between the instantaneous position of each float and its equilibrium position. On this basis, the small-amplitude (linear) wave assumption was satisfied. According to Newton's second law, the equations of motion in the frequency domain could be written down, where M_M and M_J represent the masses of the two bodies of the MJWEC; Z_M and Z_J represent the displacements of the moonpool platform and the javelin float relative to their hydrostatic equilibrium positions; F^E_M and F^E_J represent the wave excitation forces on the moonpool platform and the javelin float; and F^R_MM, F^R_MJ, F^R_JM, and F^R_JJ represent, respectively, the radiation force of the moonpool platform on itself, of the moonpool platform on the javelin float, of the javelin float on the moonpool platform, and of the javelin float on itself. Here k_M and k_J represent the hydrostatic stiffnesses of the moonpool platform and the javelin float; when the waterplane area of each float is constant, they can be expressed in terms of ρ and g, the seawater density and the gravitational acceleration, respectively. F_p is the speed-dependent damping force. Assuming a linear PTO damping coefficient c and substituting Formulas (40)-(42) into Formula (39), the coupled equations were obtained; after separating the time variable, the MJWEC equations of motion follow.

The MJWEC is connected by the PTO system. Under the action of waves, the relative motion of the two floats, i.e., the displacement difference between them, drives the damper in the PTO system to generate electricity. The average power can be expressed as

P̄ = (1/2) c ω² |Z_M - Z_J|²,

where ω is the incident wave frequency, Z_M and Z_J are the magnitudes of the motion responses of the moonpool platform and the javelin float, and c is the PTO damping coefficient.

When studying the influence of the size of the moonpool platform on the conversion efficiency of the wave energy device, the output power alone is not a sufficient measure. The concept of the capture width ratio is therefore introduced to better quantify the wave energy conversion of the device. The capture width ratio η_p is the ratio of the average output power of the float to the wave power input within the corresponding float width. In Cartesian coordinates, the incident wave with amplitude A, frequency ω, and phase angle δ can be written as η(x, t) = A cos(kx - ωt + δ). The energy input per unit period within the float width 2R is then P_in = 2R · (1/2) ρ g A² c_g, with c_g the group velocity, and the capture width ratio at wave frequency ω is

η_p = P̄ / (2R · (1/2) ρ g A² c_g).
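To make the frequency-domain procedure concrete, the following Python sketch solves the coupled two-body heave system and evaluates the capture width ratio. This is a minimal illustration, not the paper's code: the dictionary of hydrodynamic coefficients (added masses mu_*, radiation dampings lam_*, excitation force amplitudes, masses, stiffnesses) stands in for the semianalytical results, and all of these names are invented for the example.

```python
import numpy as np

def solve_heave(omega, A_wave, hydro, c_pto):
    """Solve the coupled 2-DOF heave equations in the frequency domain.

    `hydro` bundles the frequency-dependent coefficients at this omega;
    the keys are illustrative, not the paper's notation.
    """
    m = hydro
    A11 = -omega**2 * (m['MM'] + m['mu_MM']) + 1j*omega*(m['lam_MM'] + c_pto) + m['kM']
    A22 = -omega**2 * (m['MJ'] + m['mu_JJ']) + 1j*omega*(m['lam_JJ'] + c_pto) + m['kJ']
    A12 = -omega**2 * m['mu_JM'] + 1j*omega*(m['lam_JM'] - c_pto)   # PTO couples the bodies
    A21 = -omega**2 * m['mu_MJ'] + 1j*omega*(m['lam_MJ'] - c_pto)
    Z = np.linalg.solve(np.array([[A11, A12], [A21, A22]]),
                        np.array([m['FM'], m['FJ']]) * A_wave)
    return Z  # complex heave amplitudes (Z_M, Z_J)

def capture_width_ratio(omega, Z, c_pto, R, h, A_wave, rho=1025.0, g=9.81):
    """Mean absorbed power over the incident wave power in a 2R front."""
    P_abs = 0.5 * c_pto * omega**2 * abs(Z[0] - Z[1])**2
    # real dispersion relation omega^2 = g k tanh(kh), by fixed-point iteration
    k = omega**2 / g
    for _ in range(50):
        k = omega**2 / (g * np.tanh(k * h))
    cg = 0.5 * (omega / k) * (1.0 + 2*k*h / np.sinh(2*k*h))   # group velocity
    P_wave = 0.5 * rho * g * A_wave**2 * cg                   # per unit crest width
    return P_abs / (2*R * P_wave)
```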
Time-Domain Solutions
Based on the device sketch in Figure 1, the motion equations were established with the rotation axis o taken as the reference point. For the wave energy device studied here, the time-domain equations of motion were established, where M_1 and M_2 represent the masses of the moonpool platform and the javelin float, respectively; μ∞_11, μ∞_12, μ∞_21, and μ∞_22 indicate the infinite-frequency added masses; K(·) represents the retardation (memory) function; and the subscripts indicate the radiation effect of the first body on the second. The symbols x, ẋ, and ẍ represent the displacement, velocity, and acceleration, respectively. The symbols b_1 and b_2 represent the hydrostatic restoring coefficients of the moonpool platform and the javelin float. F^E_1(t) and F^E_2(t) represent the wave forces acting on the moonpool platform and the javelin float. F_P denotes the instantaneous PTO damping force acting between the moonpool platform and the javelin float, which in the linear case is proportional to the relative velocity ẋ_1(t) - ẋ_2(t). The actual PTO system, however, involves a quadratic damping force and an even more complicated variation law. To simulate the mechanical motion of the whole system more realistically, the nonlinear PTO was modeled with a quadratic damping force, and the equations were transformed accordingly. The resulting system could be solved using the fourth-order Runge-Kutta method.
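A minimal sketch of the fourth-order Runge-Kutta integration is given below. It assumes a quadratic PTO force of the form c2·|vr|·vr (an assumed form consistent with the quadratic damping described above) and, for brevity, drops the radiation memory (convolution) terms and the cross added-mass coupling, so it illustrates the integration scheme rather than reproducing the full time-domain equations.

```python
import numpy as np

def rhs(t, y, p):
    """State y = [x1, v1, x2, v2]. Quadratic PTO force F_p = c2*|vr|*vr (assumed form).
    Radiation memory terms and cross added masses are dropped for brevity."""
    x1, v1, x2, v2 = y
    vr = v1 - v2
    Fp = p['c2'] * abs(vr) * vr
    Fe1 = p['F1'] * np.cos(p['omega'] * t)              # regular-wave excitation (illustrative)
    Fe2 = p['F2'] * np.cos(p['omega'] * t + p['phase'])
    a1 = (Fe1 - p['b1'] * x1 - Fp) / (p['M1'] + p['mu11'])
    a2 = (Fe2 - p['b2'] * x2 + Fp) / (p['M2'] + p['mu22'])
    return np.array([v1, a1, v2, a2])

def rk4_step(t, y, dt, p):
    """One classical fourth-order Runge-Kutta step."""
    k1 = rhs(t, y, p)
    k2 = rhs(t + dt/2, y + dt/2 * k1, p)
    k3 = rhs(t + dt/2, y + dt/2 * k2, p)
    k4 = rhs(t + dt, y + dt * k3, p)
    return y + dt/6 * (k1 + 2*k2 + 2*k3 + k4)
```

Marching rk4_step over the simulation time with a step small relative to the wave period yields the relative displacement x1(t) - x2(t) used in the power estimates below.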
Introduce Dissipation in Potential Flow
To eliminate the sharp resonance and to account for the viscous effect in the moonpool-javelin wave energy converter (MJWEC), dissipation was introduced into the potential flow by assuming an additional term in the boundary condition at the free surface of the moonpool. Following Chen and Dias [19], the boundary condition on the mean free surface for the velocity potential Φ is modified by introducing a dissipative term, where μ is the dissipation coefficient and ν = ω²/g. When a sharp resonance appears, the dissipative term reduces the motion of the fluid in the narrow space. The dispersion equation associated with the new free-surface boundary condition (53) becomes a complex relation, Equation (54). An attenuation factor affecting the imaginary part of the wavenumber appears: the real parts of the wavenumbers k_0 and k_n satisfy the usual dispersion relations ν = k_0 tanh(k_0 h) and ν = -k_n tan(k_n h), while the imaginary parts follow from the dissipative boundary condition. It is important to note that, once the wavenumbers acquire imaginary parts k^I_0 and k^I_n, all the Bessel functions in the semianalytical solution become Bessel functions of a complex variable. If the dissipation coefficient μ = 0, the imaginary parts of the wavenumbers are zero and the semianalytical solution with dissipation reduces to the semianalytical solution under pure potential flow.
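The real parts of the wavenumbers used here come from the standard dispersion relations quoted above, which can be solved numerically by bracketing each root. The sketch below does this with SciPy's Brent solver; the water depth and frequency are illustrative inputs.

```python
import numpy as np
from scipy.optimize import brentq

def wavenumbers(omega, h, n_evanescent=5, g=9.81):
    """Real wavenumbers: nu = k0 tanh(k0 h) for the propagating mode and
    nu = -kn tan(kn h) for the evanescent modes (standard relations)."""
    nu = omega**2 / g
    k0 = brentq(lambda k: k * np.tanh(k * h) - nu, 1e-10, nu + 50.0 / h)
    kn = []
    for n in range(1, n_evanescent + 1):
        # each evanescent root lies in ((n - 1/2) pi / h, n pi / h)
        lo = (n - 0.5) * np.pi / h + 1e-8
        hi = n * np.pi / h - 1e-8
        kn.append(brentq(lambda k: k * np.tan(k * h) + nu, lo, hi))
    return k0, np.array(kn)

k0, kn = wavenumbers(omega=2.0, h=50.0)
print(k0, kn[:3])
```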
Reynolds Averaged Navier-Stokes (RANS) Equations
Based on Figure 1, with the rotation axis o taken as the reference point, the unsteady incompressible flow field is described by the continuity equation and the Navier-Stokes equations, where ρ is the water density, U is the flow velocity vector, F_b is the body force vector, and T is the stress tensor. Starting from the Navier-Stokes equations and imposing the continuity equation, the Cauchy momentum equation can be derived; then, assuming that the water density ρ is constant in space (incompressibility) and in time, expression (58) is obtained. An unsteady RANS-based CFD model (Star-CCM+) was used to model the fluid flow, with the governing equations discretized over a computational mesh using a finite-volume method. The results were found to be in good agreement with the available experimental results in the literature. A volume of fluid (VOF) method was applied to capture the free surface, and a mesh-morphing model was adopted to represent the moving boundary between the liquid and the moving central cylindrical buoy. A Lagrangian-Eulerian method was implemented to handle the cell movement. A SIMPLE-type algorithm was applied to solve the system of equations, and the dynamic response of the floating body was calculated by integrating (over time) the acceleration obtained from the solution of the equation of motion, using an implicit algorithm.

Figure 3 shows the computational domain and domain boundaries of the numerical wave tank used in the RANS simulations. To reduce the computational cost and exploit the symmetry of the problem, only half of the domain was modeled. The computational domain was 108 m long, 7 m wide, and 1.5 m high. As regards the boundary conditions (Figure 3), the seabed was 1.5 m below the mean water surface, where non-penetration and no-slip boundary conditions were imposed, and a 5th-order Stokes wave velocity profile was specified at the inflow. A pressure outlet was implemented at the outflow boundary.

Mesh Generation
The mesh generation was performed using the automated mesh facility in Star-CCM+, resulting in a computational mesh with a total of about 14 million cells. A trimmed-cell grid was used to generate a high-quality grid for this complex grid generation problem. The ensuing mesh was formed primarily of unstructured hexahedral cells with trimmed cells adjacent to the surface. Figure 4 shows the grid resolution around the MJWEC model. The grid resolution was finer near the free surface and around the model, to capture both the wave dynamics and the details of the flow around the WEC. In addition, prism-layer cells were placed along the WEC surface, and the height of the first layer was set so that the value of y+ (30~100) satisfied the turbulence model requirement.
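The first-layer height needed to hit a target y+ can be estimated before meshing from a flat-plate skin-friction correlation. The short sketch below uses the common engineering estimate Cf = 0.026 Re^(-1/7); the characteristic speed and length are illustrative inputs, not values taken from the paper.

```python
import numpy as np

def first_layer_height(U, L, y_plus=50.0, rho=1025.0, nu=1.05e-6):
    """Estimate the first prism-layer cell height for a target y+.

    Uses the flat-plate correlation Cf = 0.026 Re^(-1/7), an engineering
    approximation; U and L are a characteristic speed and length.
    """
    Re = U * L / nu
    cf = 0.026 * Re**(-1.0 / 7.0)
    tau_w = 0.5 * cf * rho * U**2          # wall shear stress
    u_tau = np.sqrt(tau_w / rho)           # friction velocity
    return y_plus * nu / u_tau

# e.g. 2 m/s characteristic flow speed past a 4 m body, target y+ = 50
print(f"first layer ~ {first_layer_height(2.0, 4.0) * 1000:.2f} mm")
```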
The Hydrodynamic Characteristic
In this section, we study the relative motion between the moonpool platform and the javelin float. We set the water depth at h = 50 m and the seawater density at ρ = 1025 kg/m³; the waves were linear small-amplitude waves with frequency ω = 0~5 rad/s and wave height H = 1 m. Throughout the study, the horizontal wall thickness of the moonpool platform was kept at R_E - R_M = 1 m. When the draught of the wave energy device was fixed at d_M = 2 m, the inner radius was varied over R_M = 2~6 m with the PTO damping coefficient kept at c = 10 kN·s/m. When the inner radius was fixed at R_M = 2 m, the draught was varied over d_M = 2~6 m with c = 10 kN·s/m. When the moonpool size was fixed at R_M = 2 m and d_M = 2 m, the PTO damping coefficient was varied over c = 0~100 kN·s/m. The calculation results are expressed in dimensionless form.

The Influence of R_M's Change on the Force on the Devices
In Figure 5, (a) shows the wave force on the moonpool and (b) the wave force on the sea javelin. Three peaks in the wave force on the moonpool platform appeared as the frequency increased, and the size of each peak decreased with frequency. For the first peak, the peak value decreased as the radius increased, and the frequency corresponding to the peak also decreased; the peak corresponded to a range of frequencies. For the wave force on the javelin float, there were also two to three peaks, but the value of the second peak was significantly larger than that of the others; when the radius increased, the peak value increased gradually, and the frequency corresponding to the peak also increased gradually. The wave force variation pattern was thus similar for the moonpool platform and the javelin float, with opposite trends at the different peaks. In the low-frequency range, the wave forces on the moonpool platform and the javelin float took very small values at the same frequencies. Furthermore, the wave force on the moonpool platform was larger than that on the javelin float at the same frequency.

The Influence of d_M's Change on the Force on the Devices
In Figure 6, (a) shows the wave force on the moonpool and (b) the wave force on the sea javelin.
From Figure 6a, we can see that for the wave force on the moonpool platform, the values of the first and second peaks decreased with increasing draught, while the frequencies of the peaks changed only slightly: the first peak was around ω = 1.6 rad/s and the second around ω = 3.8 rad/s. From Figure 6b, the wave force on the javelin float at the first peak decreased with increasing draught, while the value of the second peak was nearly independent of the draught.

The Influence of R_M's Change on Hydrodynamic Coefficients of the Devices
As shown in Figure 7, the added mass first decreased, then increased with frequency, and finally stabilized. In the frequency range ω = 0~1 rad/s, the added mass changed very little and increased with increasing radius. In the range ω = 1~3 rad/s, the added mass first decreased and then increased, and the extreme value decreased with increasing radius. In the range ω = 3~5 rad/s, the added mass tended to be stable and changed very little with the radius. The radiation damping first increased and then decreased with increasing frequency, with two peaks; the peak magnitude increased with the radius, while the frequency of the peak point decreased. It is worth noting that the added mass and radiation damping of the javelin float acting on itself were the same for all radii.

The Influence of d_M's Change on Hydrodynamic Coefficients of the Devices
Figure 8a,b show the added mass and radiation damping of the moonpool, (c) and (d) the coupling added mass and radiation damping between the moonpool and the sea javelin, and (e) and (f) the added mass and radiation damping of the sea javelin.
As shown in Figure 8, the added mass first decreased, then increased with frequency, and finally stabilized. In the frequency range ω = 0~1 rad/s, the added mass changed very little and decreased with increasing draught. In the range ω = 0~3 rad/s, the added mass first decreased and then increased, and the extreme value increased with increasing draught. In the range ω = 3~5 rad/s, the added mass tended to be stable and changed very little. The radiation damping first increased and then decreased with increasing frequency, with two peaks; the peak value decreased with increasing draught, while the frequency of the peak point also showed a decreasing trend. In conclusion, a comparison between Figure 7 and Figure 8e,f shows that the size of the moonpool had no effect on the added mass and radiation damping generated by the javelin float acting on itself, and that a short, thick moonpool platform had a large radiation damping.

The Motion Response
The response amplitude operator was given by Equation (44), which could be used to assess the influence of the geometrical parameters of the moonpool and the javelin damping coefficient on the motion of the wave energy buoy. As can be seen from Figure 9, the overall trend is the same whether the radius or the draught of the moonpool platform changes, with one or two peaks. For a given moonpool platform draught, as the radius increased, the value of the first peak increased while the frequency corresponding to the peak decreased; the second peak behaved in the same way.
For a given moonpool platform radius, the value of the first peak increased with the draught and the frequency corresponding to the peak decreased with the draught, while the second peak did not change. The motion response of the moonpool platform was greater than that of a single javelin float in the frequency range ω = 2~4 rad/s, so the moonpool platform could improve the dynamic characteristics of the device within a certain range of wave frequencies. Based upon the previous calculations, the optimum moonpool dimensions could be obtained, with draught d_M = 2 m; the dimensions of the javelin buoy and the wave conditions remained unchanged. Figure 10 shows the motion response of the MP and the WEB for various PTO damping coefficients (c = 20, 40, 60, 80, and 100 kN·s/m) and various frequencies (ω = 0.5, 1.0, 1.5, 2.0, 2.5 rad/s), respectively.

It can be seen from the left panel that, for all PTO damping coefficients, as the wave frequency tends to ω → 0 rad/s the relative motion amplitude of the moonpool float tends to 0 m. The curve then rises with frequency to its first peak, whose value decreases with increasing PTO damping coefficient. At a frequency of ω = 3.6 rad/s the second peak is reached, which is lower than the first and also decreases with increasing PTO damping coefficient. As can be seen from the right panel, the trend of the motion response with the PTO damping coefficient is the same for the different frequencies. At the fixed frequency ω = 2.5 rad/s, the amplitude of the motion response decreases monotonically toward 0 as the PTO damping coefficient increases. At this wave frequency the motion response is much larger than at the other frequencies, and a PTO damping coefficient of 100 kN·s/m reduced the motion response by 0.51 m.
The Capture Width Ratio
The capture width ratio was given by Equation (49), which could be used to assess the influence of the geometrical parameters of the MP and the SJ damping coefficient. As can be seen from Figure 11a, when the moonpool's draught was constant, the value of the first peak initially increased and then decreased with increasing radius. This was because, as the radius increased, the input power of the incident wave also increased; the frequency corresponding to the peak decreased with increasing radius. In the wave frequency range ω = 2~4 rad/s, the wave energy capture efficiency of the moonpool platform device was better than that of a single javelin float. It can be seen from the right panel that there were two obvious peaks in the capture width ratio of the moonpool floating device, caused by the resonances of the moonpool platform and of the javelin float, whose resonance frequencies do not coincide. When the radius is fixed, as the moonpool draught increases, the first peak value increases while the second peak value decreases, with a relatively small rate of change; the frequency of the first peak shifts, while that of the second does not. This indicates that the first peak is related to the resonance of the moonpool platform and the second peak to the resonance of the javelin floating buoy.

As can be seen from Figure 12 (left), under different PTO damping coefficients, the first peak in the capture width ratio of the moonpool float appeared at a wave frequency of ω = 2.5 rad/s; with increasing PTO damping coefficient, the peak gradually decreased. At a wave frequency of ω = 3.5 rad/s, the second peak appeared on the capture width ratio curve; the effect of the PTO damping coefficient on this peak was smaller than on the first. The third peak behaved like the second. As can be seen from Figure 12 (right), at a wave frequency of ω = 0.5 rad/s the response of the device was significantly higher than at the other wave frequencies as the PTO damping coefficient changed; for damping coefficients of c = 0~20 kN·s/m the response increased rapidly and then decreased slowly. At wave frequencies of ω = 1.0 rad/s and ω = 1.5 rad/s, and likewise at ω = 2.0 rad/s and ω = 2.5 rad/s, the response of the device increased with the PTO damping coefficient, but at a small rate.

Figure 12. Effect of damping and frequency on the capture width ratio of the MJWEC: (a) capture width ratio versus frequency; (b) capture width ratio versus PTO damping coefficient.
The Motion Response
In this section, the MP-WEB is considered and Equation (50) is solved. The dissipation coefficients were μ = 0.05, 0.10, 0.15, and 0.20. As can be seen from Figure 13, the four panels (a)-(d) show roughly the same trends, but the responses under forced motion still differ because of the added viscous damping. After stabilization, a comparison of Figure 13a,b shows that the damping coefficient suppresses the amplitude of the instantaneous motion response, and the larger the value of μ, the stronger the amplitude reduction. To find a more suitable dissipation coefficient, experiments should be carried out to determine the optimal value; for the later time-domain calculations, the viscous dissipation coefficient was fixed at the selected value.
The numerical results were derived using Star-CCM+ software for comparison with the time-domain results achieved via the Potential-Flow Viscous Dissipation (PFVD) method. As shown in Figure 14, the CFD results show that the instantaneous motion response of the moonpool float varied irregularly at the beginning but gradually became stable over time, and the period of the stabilized curve decreased with increasing frequency. At low frequency, the peaks of the curve deviated and adjacent peaks differed; at high frequency, the curve showed a periodic reciprocating motion with a constant phase difference. A comparison between the CFD method and the potential-flow method with viscous dissipation showed that the amplitudes of the two methods were very close, with the CFD result somewhat larger than that of the dissipative potential-flow algorithm. At the same time, there was a phase difference between the curves of the two methods, caused by the wave phase used in the calculations. Compared with the PFVD method, the CFD method reached a stable response in a shorter simulated time.
The Capture Width Ratio
According to Equation (49), different dissipation coefficients could be applied to obtain the capture width ratio over time; the device parameters were the same as above. It can be seen from Figure 15 that the four panels (a)-(d) show roughly the same trends, but the responses under forced motion still differ because of the added viscous damping. After stabilization, a comparison of Figure 15a,b shows that the damping coefficient suppresses the amplitude of the capture width ratio, and the higher its value, the stronger the amplitude reduction. To find a more suitable dissipation coefficient, experiments should be carried out to determine the optimal value; for the later time-domain calculations, the viscous dissipation coefficient was fixed at the selected value.

The numerical results were derived using Star-CCM+ software for comparison with the time-domain results achieved via the PFVD method. In Figure 16, the calculation results of the dissipative potential-flow method and the CFD method are compared. Since the potential-flow method had a long stable-convergence time, the capture width ratio was evaluated over the time window t = 150~250 s. As can be seen from the figure, the capture width ratio of the moonpool float varied irregularly at the beginning but gradually became stable over time, and the period of the stabilized curve decreased with increasing frequency. A comparison of the curves obtained by the two methods showed that the period after stabilization was the same but the amplitudes differed: the capture width ratio obtained by the CFD method was smaller than that obtained by the dissipative potential-flow method, because the latter does not account for the energy dissipation caused by viscous effects at the bottom of the moonpool platform.
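In a time-domain calculation of this kind, the mean capture width ratio over the stabilized window can be obtained by averaging the instantaneous PTO power. The sketch below assumes a linear PTO (instantaneous power c·vr², with vr the relative velocity) and takes the incident wave power per unit crest width as a precomputed input; the function and argument names are illustrative.

```python
import numpy as np

def mean_capture_width_ratio(t, x1, x2, c_pto, P_wave, R, t_window=(150.0, 250.0)):
    """Average the instantaneous PTO power c*(dx1/dt - dx2/dt)^2 over a window
    after the start-up transients (here t = 150-250 s) and normalise by the
    incident wave power in a 2R front; P_wave is the power per unit crest width."""
    mask = (t >= t_window[0]) & (t <= t_window[1])
    vr = np.gradient(x1, t)[mask] - np.gradient(x2, t)[mask]
    P_mean = np.mean(c_pto * vr**2)
    return P_mean / (2.0 * R * P_wave)
```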
Experimental Facility
The experiments considered here were carried out in the wave tank at Harbin Engineering University, which has a length of 108 m and a width of 7 m; the depth of the test section was 1.5 m, as shown in Figure 17. The push-type wave maker could generate waves with heights of up to 0.4 m and periods between 0.4 and 4.0 s. The irregular waves that could be generated could follow ITTC, JONSWAP, and P-M wave spectra with wave heights between 0 and 0.32 m.
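For reference, the JONSWAP spectrum named above has a standard closed form; a sketch using Goda's approximate normalisation is given below. It is included only to illustrate the spectral shape, not as the wave maker's actual control law.

```python
import numpy as np

def jonswap(omega, Hs, Tp, gamma=3.3):
    """JONSWAP spectrum S(omega) for significant wave height Hs and peak period Tp,
    using Goda's approximate normalisation constant. omega must be positive."""
    wp = 2.0 * np.pi / Tp                              # peak angular frequency
    sigma = np.where(omega <= wp, 0.07, 0.09)          # spectral width parameter
    r = np.exp(-((omega - wp)**2) / (2.0 * sigma**2 * wp**2))
    beta = 0.0624 / (0.230 + 0.0336 * gamma - 0.185 / (1.9 + gamma))
    return beta * Hs**2 * wp**4 / omega**5 * np.exp(-1.25 * (wp / omega)**4) * gamma**r

# Example: spectral density over 0.2-3 rad/s for Hs = 0.12 m, Tp = 2.0 s
w = np.linspace(0.2, 3.0, 200)
S = jonswap(w, Hs=0.12, Tp=2.0)
```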
The push-type wav could generate waves with a height of up to 0.4 m and period between 0.4 and 4 irregular waves that could be generated could model ITTC, JONSWAP, and Pspectra with a wave height between 0 and 0.32 m. Wave Parameter At the first approximation, and under the conditions tested, it could be assum a linear relationship existed between the wave amplitude and the motion respons Wave Parameter At the first approximation, and under the conditions tested, it could be assumed that a linear relationship existed between the wave amplitude and the motion response of the tested device, and therefore, the motion response amplitude operators (RAOs) could be defined. The wave height was 0.12 m and the wave periods ranged from 1.2 s to 3.0 s, as shown in Table 1. A total number of 14 working conditions were considered in the model test. Experimental Results The influence of the wave period on the motion displacement and output power of the moonpool float was studied. Figure 18 shows the displacement curve and the power curve of the moonpool float device. Experimental Results The influence of the wave period on the motion displacement and output power of the moonpool float was studied. Figure 18 shows the displacement curve and the power curve of the moonpool float device. As can be seen from Figure 18, when the period of the high-frequency part was 1.2 s-1.9 s, the period had a great influence on the displacement and power of the javelin float, and the greater the period, the greater the displacement and power of the moonpool float. When the period of the low-frequency part was 2.2 s-3.0 s, the larger the period was, the smaller the displacement and output power of the moonpool float. When the period of the low-frequency part is 2.2 s-3.0 s, the larger the period is, the smaller the displacement and output power of the moonpool floater will be. When the cycle is 1.9 (s) and 2.4 (s), the motion and power appeared two peaks, the device has the highest energy conversion efficiency, the period of actual waters 5.985 and 7.56 (s) accordingly, higher the wave height is, higher the output power of the moonpool floater, which is of great significance for the follow-up study of similar devices. A comparison between the displacement and output power of the wave energy device of the moonpool float and the javelin float showed that when the period was 1.8 s-2.4 s, the displacement and power of the moonpool float were significantly higher than those of the javelin float, even reaching about two times higher at the peak. Therefore, the application of the moonpool platform in actual wave energy devices in engineering could effectively improve their energy conversion efficiency. As can be seen from Figure 19, the amplitude of the temporal instantaneous motion response calculated by the viscous modified potential-flow method was very close to the As can be seen from Figure 18, when the period of the high-frequency part was 1.2 s-1.9 s, the period had a great influence on the displacement and power of the javelin float, and the greater the period, the greater the displacement and power of the moonpool float. When the period of the low-frequency part was 2.2 s-3.0 s, the larger the period was, the smaller the displacement and output power of the moonpool float. When the period of the low-frequency part is 2.2 s-3.0 s, the larger the period is, the smaller the displacement and output power of the moonpool floater will be. 
As can be seen from Figure 18, in the high-frequency range of periods (1.2 s-1.9 s), the period had a great influence on the displacement and power of the moonpool float: the longer the period, the greater the displacement and power. In the low-frequency range of periods (2.2 s-3.0 s), the longer the period, the smaller the displacement and output power of the moonpool float. The motion and power curves exhibited two peaks, at periods of 1.9 s and 2.4 s, where the device had its highest energy conversion efficiency; these correspond to full-scale wave periods of 5.985 s and 7.56 s, respectively. The higher the wave height, the higher the output power of the moonpool float, which is of great significance for follow-up studies of similar devices. A comparison between the displacement and output power of the moonpool-float and javelin-float wave energy devices showed that when the period was 1.8 s-2.4 s, the displacement and power of the moonpool float were significantly higher than those of the javelin float, reaching about twice as high at the peak. Therefore, applying the moonpool platform in practical engineering wave energy devices could effectively improve their energy conversion efficiency.

As can be seen from Figure 19, the amplitude of the instantaneous motion response calculated by the viscosity-modified potential-flow method was very close to the value obtained in the model test, although the oscillation amplitude from the theoretical calculation was larger than that from the model test. Because the phases of the waves differed, there was a phase difference between the two curves. By comparing the time-domain relative displacements obtained with different viscous dissipation coefficients against the model test results, the appropriate viscous dissipation correction coefficient was selected as μ = 0.15.

Conclusions
In this paper, to improve the efficiency of converting energy from ocean waves, a new moonpool was adopted in the MJWEC. The diffraction and radiation of the linear wave caused by the MJWEC in water were examined using a semianalytical method. Case studies with different geometry parameters of the moonpool paddocks were verified. The energy conversion characteristics of the PTO system were investigated, and CFD and experimental methods were adopted to verify the accuracy of the PFVD method. The results indicate the following:

(1) A comparison of axisymmetric buoys with and without a moonpool platform showed that the moonpool had an effect on the hydrodynamic coefficients of the central buoy. For the frequency-domain dynamic characteristics of the MJWEC under potential flow, when the wave frequency was ω = 2.0-4.0 rad/s, the motion and capture width ratio of the wave energy device with the moonpool platform were significantly better than those of the single javelin float.

(2) In the CFD analysis, the moonpool platform device did not change the wave period inside the moonpool, but it did change the wave height inside the platform, improving the motion amplitude of the float. The moonpool platform device therefore significantly improved the energy conversion quality of the whole wave energy device. The CFD method and the viscous dissipation method based on potential-flow theory were very close to each other in terms of the amplitude of the motion response, with the CFD result somewhat higher than that of the potential-flow algorithm with viscous dissipation.

(3) According to the model test results and linear wave theory, the influence of the wave height on the motion response and power of the MJWEC was positively linear. The wave period's effects on the two devices were not the same: the single javelin float peaked at 1.6 s, while the MJWEC had two peaks, at 2.0 s and 2.4 s. For wave periods from 1.8 s to 2.4 s, the MJWEC outperformed the javelin float in terms of displacement and power, with peak values about two times higher for the moonpool. The moonpool platform, when applied in the engineering practice of wave energy devices, could therefore effectively improve the efficiency of their energy conversion. By comparing the viscous dissipation results with the wave tank tests, the optimal viscous dissipation coefficient μ = 0.15 was found to be suitable.
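As a concrete illustration of the coefficient selection summarized in conclusion (3), the following is a minimal sketch of one way to pick the viscous dissipation coefficient by matching the PFVD oscillation amplitude to the measured one; the solver hook simulate_pfvd and the candidate grid are hypothetical, not part of the original study. Amplitudes rather than pointwise errors are compared because, as noted above, the simulated and measured traces differ in phase.

```python
import numpy as np

def select_mu(candidates, t, z_measured, simulate_pfvd, t_start, t_end):
    """Choose the viscous dissipation coefficient whose simulated heave
    amplitude best matches the model test over a settled time window.

    simulate_pfvd(mu, t) is a hypothetical hook returning the PFVD
    displacement on the same time grid as the measurement."""
    sel = (t >= t_start) & (t <= t_end)
    amp_meas = 0.5 * (z_measured[sel].max() - z_measured[sel].min())
    errors = []
    for mu in candidates:
        z_sim = simulate_pfvd(mu, t)
        amp_sim = 0.5 * (z_sim[sel].max() - z_sim[sel].min())
        errors.append(abs(amp_sim - amp_meas))
    return candidates[int(np.argmin(errors))]

# e.g. select_mu(np.arange(0.05, 0.35, 0.05), t, z_test, simulate_pfvd,
#                150.0, 250.0) would, per the results above, return 0.15.
```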
2021-12-19T17:16:16.711Z
2021-12-16T00:00:00.000
{ "year": 2021, "sha1": "7b901b7bc1f9e1d56801a454e1e1d7bd21b9723e", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2077-1312/9/12/1444/pdf", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "63685b8a79f78d4d4682859dba9bd4f99ca2c814", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [] }
251366285
pes2o/s2orc
v3-fos-license
Efficient Process for the Production of Alkyl Esters This article reports a scalable process development for the production of alkyl esters through the esterification route by utilizing fly ash as a catalyst. The catalyst, consisting of mixed oxides such as alumina, iron oxide, calcium oxide, magnesium oxide, and silica, was employed for the esterification reaction without modification. The catalyst was evaluated for the conversion of feedstock containing variable amounts of free fatty acids, mono/dibasic acids, and alcohols/polyols into the corresponding alkyl esters. Three types of fly ash catalysts, viz., FS-1, FP-1, and FC-1, were chosen from three different industrial sources. Synthesis of dimethyl adipate was studied as a model reaction. FS-1 fly ash gave the highest yield of dimethyl adipate, whereas FC-1 gave a low yield. The recyclability of FS-1 was evaluated for three cycles, and no loss of yield was observed. Furthermore, the catalyst FS-1 was found to be capable of producing good yields in various esterification reactions with different substrates.

INTRODUCTION
Generation of minimum waste is one of the important characteristics in designing and developing a green process. Another hallmark is developing and utilizing suitable catalysts that accelerate an otherwise slower or improbable reaction. Alkyl esters find application in various industries including oleochemicals, cosmetics, paints, fuels, emulsifiers, fragrances, and pharmaceuticals. 1−5 Fly ash is one of the biggest artificial wastes generated by industrialization, its source being the coal used as fuel in industry. An inorganic waste whose composition varies with the fuel source poses a disposal challenge for industry. Fly ash consists of mixed metal oxides (mainly SiO2, Al2O3, Fe2O3, CaO, and MgO) and metal silicates. 6−8 Metal oxides and mixed metal oxides are known to possess catalytic properties. 9−15 Modified catalysts in the form of fly ash-supported metal oxides have been employed as recyclable solid catalysts for organic reactions, viz., Knoevenagel condensation, esterification, and transesterification reactions. 16−22 Scientists have recently developed a patented process for the industrially applicable synthesis of biodiesel and biolubricant base oils using fly ash as a heterogeneous catalyst. 23,24 Apart from this, there are few reported industrial applications of fly ash as a catalyst. 25 As mentioned earlier, the reported methods use fly ash merely as a support, and active metal oxides are doped onto the fly ash by chemical treatment. These additional modification steps may add extra costs, leading to a less economical process. This article reports a scalable process for the production of alkyl esters through the esterification route by utilizing fly ash as a catalyst. This single-step heterogeneous catalytic method for the production of alkyl esters uses fly ash as a catalyst without any modification of the fly ash. This allows complete conversion of feedstock containing any free mono- or dibasic acid and long-chain fatty acids, alcohols, or diols into the corresponding alkyl esters with minimal downstream processing and generates minimum effluent waste at the end of the process. Conventionally, alkyl esters are produced through esterification (Scheme 1, equation 1) and transesterification reactions (Scheme 1, equation 2) using acid and alkali catalytic systems. 1,26−47
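Since Scheme 1 itself is not reproduced in this text, the general forms of the two reactions it depicts are given below for reference; R, R', and R'' denote generic alkyl groups, and this rendering is the standard textbook form rather than a copy of the original scheme.

RCOOH + R'OH ⇌ RCOOR' + H2O  (esterification, equation 1)
RCOOR'' + R'OH ⇌ RCOOR' + R''OH  (transesterification, equation 2)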
In this study, synthesis of dimethyl adipate was taken as a model reaction using adipic acid and methanol as reactants (Scheme 2). Dimethyl adipate is a dibasic ester. Dibasic esters are an important class of alkyl esters and are used in different industrial applications, viz., cosmetics, paints, and plasticizers. 1−3 The fly ash catalyst samples were collected from different industrial sources; those collected from the steel manufacturing industry were named FS-1, with FP-1 for those from the thermal power industry and FC-1 for those from the alkali chemical industry. After preliminary investigation, the FS-1 fly ash catalyst, which gave the highest yield of dimethyl adipate, was selected for further process optimization and scale-up studies. Catalyst recovery and recycling were studied in detail along with catalyst characterization before and after the reaction. Other industrially important alkyl esters such as methyl stearate, dioctyl phthalate, and ethylene glycol stearate were also synthesized by this process.

EXPERIMENTAL SECTION
2.1. Chemicals. All chemicals, viz., alcohols (methanol, isopropyl alcohol, ethylene glycol, octanol) and carboxylic acid feedstocks (adipic acid, stearic acid, and phthalic anhydride), were purchased from commercial vendors. GC analytical standards were purchased from Sigma. Individual oxides, viz., silica (SiO2, 100−200 mesh size), alumina (C504-type), and iron oxide, were purchased from Sigma-Merck. Reactants were used without purification. Various fly ash samples were collected from different industries; those collected from the steel manufacturing industry were named FS-1, with FP-1 for those from the thermal power industry and FC-1 for those from the alkali chemical industry. All fly ash catalysts were used without modification. During the reuse experiments, the fly ash was calcined at 500°C for 1 h. For the control esterification reaction, the mixture of oxides was prepared through physical mixing, and the percentage of individual oxides in the mixture was kept as per the composition of the FS-1 fly ash catalyst.

Analytical Techniques. XRD analysis of fly ash was performed using a Panalytical diffractometer equipped with a quartz monochromator and Cu Kα radiation (λ = 0.154059 nm). The X-ray diffraction (XRD) patterns were analyzed using standard ICDD (International Center for Diffraction Data) files. Morphology was examined using a Philips XL30 scanning electron microscope (SEM). An energy dispersive X-ray detector (EDX) mounted on the microscope was used for the elemental analysis of the fly ash samples. The elemental composition of the various fly ash samples was determined by atomic emission spectrometry with an inductively coupled plasma atomic emission spectrometer (Agilent ICP-MPAES 4010), whereas silica was analyzed gravimetrically. The pH of a 5% fly ash solution was measured using a pH meter under constant stirring (200 rpm). Fourier-transform infrared spectroscopy (FT-IR) analysis of the fly ash samples was done on an FT-IR spectrometer (Bruker Vertex) using the KBr pellet method with an IR scan range of 400−4000 cm−1. The BET measurements of the fly ash catalysts were carried out on a Micromeritics BET analyzer (TriStar II 3020, Version 3.02). The acidity of the fly ash catalysts was determined as mmol per gram of catalyst using n-butyl amine titration. 48,49 Catalyst samples were freshly dried at 120°C and cooled to room temperature in a desiccator before use.
Next, 0.2% Hammett indicator solutions and 0.1 M n-butyl amine were prepared in anhydrous benzene. A total of 0.2 g of the dried catalyst was taken in 10 mL of anhydrous benzene, and 2 mL of the Hammett indicator was added to the catalyst suspension. The suspension was titrated against the 0.1 M n-butyl amine solution. After each stepwise addition of n-butyl amine, the titration mixture was stirred for 4 h at room temperature, and the color change was observed. The end point of the titration gave the acid strength of the catalyst. Esterification reaction monitoring and product purity analysis were done on a Thermo Scientific GC-FID 800+ series instrument with a GC column having a 5% phenylpolydimethylsiloxane-bonded stationary phase. The operational temperature was up to 400°C. The column length was 15 m, and the internal diameter was 0.32 mm. GC standards and reaction product samples were prepared in THF for GC analysis. The GC operating conditions for all ester molecules were as follows (Table 1). The product purity was determined through the % area method.

Reaction Parameters. For the preliminary investigation, the reaction was carried out with 1 mol of acid feedstock. To obtain complete conversion of the feedstock, experiments were carried out by varying the percentage of the catalyst (2.5−12.5 wt %), the temperature (80−220°C), and the reaction time (1−5 h).

Optimized Process for Dimethyl Adipate for 1 Mol Acid Feedstock. The reaction was performed in a stainless-steel high-pressure reactor (inner volume = 1 L; maximum pressure = 100 bar; maximum temperature = 250°C). In a typical experiment, 192 g of methanol (6 equivalents) was mixed with 14.5 g of the fly ash catalyst (10% wt/wt of acid feedstock) and transferred into the reactor, followed by addition of 146 g of adipic acid (1 mol). The reactor was closed, and the temperature was set at 200°C with 200 rpm agitation under autogenous pressure. The reaction was carried out for 4 h. After completion of the reaction, the catalyst was recovered by simple filtration. The excess solvent and the water generated were evaporated to give the reaction product, which was analyzed by GC-FID. For the scale-up batch, as shown in the process flow diagram (Figure 1), 0.590 kg of the FS-1 fly ash catalyst, 5.840 kg of adipic acid (40 mol), and 7.680 kg of methanol (240 mol, 6 equiv) were charged into a 25 L batch high-pressure reactor. The reaction was performed for 4 h at 200°C under autogenous pressure with agitation. After completion of the reaction, the reaction mass was filtered in a filtration assembly. After filtration, the catalyst was washed with methanol, kept in a furnace at 500°C for 1 h, and reused for the next batch. Catalyst recovery was 98% (0.584 kg); the remainder was handling loss. The filtrate was transferred back to a reactor, and the excess methanol and the water generated during the reaction were removed by vacuum evaporation. A 7.715 kg product with 98.0% yield was obtained after methanol removal, against a 7.760 kg theoretical yield. The product was stored in a storage tank, while the recovered methanol with water was stored in a recovery storage tank (Figure 1).

Characterization of Fly Ash Samples. The chemical composition of fly ash is shown in Table 2. It mainly consists of mixed metal oxides such as iron oxide, alumina, calcium oxide, magnesium oxide, and silica. FS-1 and FP-1 contain larger amounts of iron oxide, alumina, and silica, which impart an acidic nature to the catalyst.
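The acid character noted here is quantified later via the n-butyl amine titration described in the Analytical Techniques section (Table 5). As a worked illustration of that arithmetic, the following is a minimal sketch; the endpoint volume used in the example is a hypothetical value back-calculated to match the reported FS-1 result, not a measured datum.

```python
def acidity_mmol_per_g(titrant_molarity, endpoint_volume_mL, catalyst_mass_g):
    """Acid site density from an n-butyl amine titration endpoint.
    mol/L multiplied by mL gives mmol of base consumed, which is then
    normalized by the catalyst mass."""
    return titrant_molarity * endpoint_volume_mL / catalyst_mass_g

# With the 0.1 M n-butyl amine and 0.2 g of catalyst used in this study,
# a hypothetical endpoint of 5.86 mL would give
# 0.1 * 5.86 / 0.2 = 2.93 mmol/g, matching the acidity reported for FS-1.
```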
FC-1 fly ash has a higher percentage of calcium oxide, which indicates that FC-1 is basic in nature. This was also confirmed by EDX analysis (Figure 3), which showed a high amount of alumina and silica in the FS-1 and FP-1 fly ash samples, while FC-1 had a high amount of calcium and sulfur compared to the others. The pH of aqueous solutions of the fly ash samples also confirmed the nature of the catalysts (Table 2): FS-1 and FP-1 show a very mildly acidic pH, while FC-1 has a basic pH. The BET measurements of the fly ash catalysts (Table 2) show that the surface area of the FS-1 catalyst was higher than those of the FP-1 and FC-1 fly ash catalysts. The FT-IR spectra of the fly ash catalysts are shown in Figure 2. In the case of the FS-1 catalyst, the peak at 1072 cm−1 is sharper than that of the FP-1 catalyst, with an additional peak at 790 cm−1 due to the symmetric stretching vibrations of Si-O-Al. This finding shows that the FS-1 catalyst could have a more prominent aluminosilicate framework than the FP-1 catalyst. In the case of the FC-1 catalyst, the intense peak at 1456 cm−1 represents carbonate groups, which may have resulted from the formation of CaCO3 from Ca(OH)2 associated with the CaO present in the catalyst. The intense stretching vibration at 1148 cm−1 is due to the sulfate groups present in the FC-1 catalyst. The stretching vibration at 1004 cm−1 with lower intensity indicates that the FC-1 catalyst has a lower percentage of aluminosilicate framework compared to the FS-1 and FP-1 catalysts. The peak at 3629 cm−1 in the FC-1 catalyst is due to OH stretching vibrations arising from the presence of Ca(OH)2. All three fly ash catalysts show bending vibrations between 1500 and 1600 cm−1, indicating the presence of a small quantity of water in the fly ash catalysts. All the above-mentioned IR values and their interpretation match literature data. 36,37

Figure 4 shows the SEM images of FS-1, FP-1, and FC-1 before and after the reaction. The FS-1 catalyst showed irregular particle shapes and sizes in the range of 5−100 μm (Figure 4), whereas FP-1 showed spherical particles in the range of 2−50 μm. The FC-1 catalyst particles are irregular in shape and size in the range of 10−100 μm. The morphology of all three catalysts remained the same after the reaction, which confirms that the fly ash catalysts remain intact after the reaction. This was further confirmed by XRD analysis of the catalysts (Figure 5). XRD analysis shows the peaks of silica and alumina in all three catalysts, with additional peaks in the FC-1 catalyst representing calcium phases, consistent with literature values. Even after the reaction, the XRD pattern remained the same for all the catalysts.

Catalytic Evaluation and Recycling of the Fly Ash Catalyst for the Model Reaction. Various reaction parameters were studied for the conversion of adipic acid to dimethyl adipate.

3.2.1. Effect of Temperature. Figure 6 shows the effect of the temperature on the conversion of adipic acid for the fly ash catalysts. It is important to note that no significant conversion of the feedstock took place below 100°C in the presence of the catalyst for any of the three fly ash samples. As shown in Figure 6, the highest conversion of the feedstock was obtained at 200°C with 100% selectivity.
Above 200°C, no further improvement in conversion was observed, whereas FC-1 showed less conversion at 200°C compared to the FS-1 and FP-1 catalysts. It is noteworthy that the blank experiment showed approximately 55% conversion at 200°C, and thereafter the conversion remained constant even as the reaction temperature was increased further.

The effect of the reaction time is shown in Figure 7. Almost 70% of the reaction was completed in the first 2 h. At 4 h, the conversion was 98% with 100% selectivity for FS-1, 94% for FP-1, and 72% for FC-1. By 5 h, no further increase in conversion was observed, which means that 4 h is needed to obtain the maximum yield. For the blank experiment, approximately 53% conversion was obtained at 3 h; with a further increase in time, the conversion remained constant.

Effect of the Catalyst Amount and Catalyst Nature. The effect of the catalyst amount is shown in Figure 8. Without the catalyst, the conversion was 55%. For all three catalysts, a 2.5% catalyst amount (wt/wt relative to the acid feedstock) did not show much increase in conversion. For 5 and 7.5% catalyst amounts, there were increases in conversion. The 10% catalyst amount gave the highest conversion; even at a higher catalyst amount, i.e., 12.5%, the conversion remained the same. Under optimized reaction conditions (Table 3), the maximum conversion was given by FS-1 (98%), whereas FP-1 gave 94% and FC-1 fly ash gave the lowest (72%) at 200°C. This is because the FS-1 and FP-1 fly ash samples have a higher percentage of iron oxide, alumina, and silica and a negligible percentage of calcium and magnesium oxide. The prominent silica and alumina framework present in the FS-1 fly ash catalyst has a synergistic effect on the reaction, giving high efficiency. Several Lewis acid systems with active metal centers such as aluminum, iron, zinc, and tin catalyze the esterification reaction very efficiently. 13−15,37−46 The reported heteropolyacid catalytic systems developed for the esterification reaction showed that the metal centers (Ti, Al, Fe, Zn, etc.) present in the catalyst possess Lewis acidity, and they demonstrated the influence of Lewis acidity on catalytic activity. 44 Similarly, a catalytic system having an aluminosilicate framework, which has both Lewis and Brønsted sites, showed a synergistic effect in the esterification reaction. 38 FC-1 has a higher percentage of calcium oxide, which is basic in nature, and it is known that the esterification reaction slows down in the presence of calcium oxide. 47 From the above observations, it was found that the FS-1 fly ash catalyst facilitated reaction completion and gave the highest conversion. As the FS-1 catalyst gave the highest yield of dimethyl adipate, we used it for further investigation and scale-up studies under the optimized reaction conditions. The catalyst recovered after the reaction was further treated and studied for recovery and recyclability. To support our observations for the FS-1 catalyst, we carried out the dimethyl adipate model reaction with the individual oxides and their mixture under the optimized reaction conditions (Table 4). It was observed that silica gave an 89% yield of dimethyl adipate, which was higher than those of alumina and iron oxide individually, while the mixture of all three oxides gave an 82% yield of dimethyl adipate. Silica possesses milder acidity due to its acidic protons, whereas aluminum imparts Lewis acidity to alumina through its vacant orbitals, which drives the reaction toward the product side.
Furthermore, the acidity of the fly ash catalysts and of the mixture of individual oxides was determined using n-butyl amine titration with different Hammett indicators to support our observations (Table 5). All the catalysts gave positive results with the methyl red indicator, but the other two indicators were nonresponsive to the tested solutions. The analysis shows that the FS-1 catalyst has a higher acidity (2.93 mmol/g) than FP-1, FC-1, and the mixture of individual oxides. The acidity data of the catalysts are in agreement with our observation that the catalytic activity of the FS-1 catalyst was the highest among the tested catalysts for the esterification reaction. Similarly, the BET surface area of the FS-1 catalyst was higher than those of the FP-1 and FC-1 fly ash catalysts. These observations indicate that the FS-1 fly ash catalyst is more active than the other tested catalysts. The silica and alumina framework present in the FS-1 fly ash catalyst has a synergistic effect in the esterification reaction, giving a higher yield of dimethyl adipate (98%) than the individual oxides and the physical mixture of oxides. The probable reaction mechanism is given in Figure 9. The acid sites present on the silica and alumina framework of the fly ash catalyst interact with the carbonyl oxygen of the free acid. 13,38 This interaction makes the carbonyl carbon more electron-deficient. The alcohol introduced into the reaction acts as a nucleophile and attacks the electron-deficient carbonyl carbon of the free acid. As the reaction proceeds, dehydration takes place with the loss of a water molecule, giving the alkyl ester as the end product.

Catalyst Recycling. Catalyst recycling was investigated for three cycles (Figure 10). For the first two cycles, the catalyst showed almost the same activity (98% conversion) with 100% selectivity as the fresh catalyst. In the third recycle, the catalyst activity decreased slightly (97% conversion). From the recycling experiments, it was found that the fly ash catalyst was successfully recycled for three cycles.

Specifications of Dimethyl Adipate Synthesized in Scale-Up Studies. The important properties and specifications of the dimethyl adipate obtained in the 40 mol scale process are shown in Table 6. The dimethyl adipate obtained by our process has 98% purity and meets the specifications required for different industrial applications. Metal leaching from the fly ash catalyst into the product was determined and found to be below the detection level.

3.5. Process Scope for Different Industrially Important Alkyl Esters. We further synthesized industrially important esters from long-chain free fatty acids/dibasic acids and alcohols/diols with an average yield of 97% (Table 7), which demonstrates the versatility and scalability of our process.

CONCLUSIONS
This study demonstrates the utilization of fly ash as a catalyst that is able to convert feedstock containing free mono-/dibasic acids and alcohols/polyols into alkyl esters with the highest selectivity and yield. The use of this fly ash catalyst for the esterification reaction is more economical and advantageous than processes using modified fly ash catalysts. The catalyst can be separated by simple filtration after completion of the reaction, with no water requirement, and was successfully recovered and recycled. Among all the fly ash catalysts, the highest conversion was seen with FS-1 (98%), followed by the FP-1 and FC-1 fly ash catalysts.
The process was also successfully demonstrated at the kilogram scale for dimethyl adipate as a model molecule by utilizing the FS-1 fly ash catalyst. (Figure 9: Probable reaction mechanism for the fly ash-catalyzed esterification reaction.)
2022-08-06T15:17:08.676Z
2022-08-04T00:00:00.000
{ "year": 2022, "sha1": "8ac12cb4d689cb9b5fe1a8e6ba5cbda50c7d8437", "oa_license": "CCBYNCND", "oa_url": null, "oa_status": "CLOSED", "pdf_src": "PubMedCentral", "pdf_hash": "6c80e49fbea2a031423e044bb8a8c45213274761", "s2fieldsofstudy": [], "extfieldsofstudy": [] }
231834644
pes2o/s2orc
v3-fos-license
The effect of GPS refresh rate on measuring police patrol in micro-places With the increasing prevalence of police interventions implemented in micro hot-spots of crime, the accuracy with which officer foot patrols can be measured is increasingly important for the robust evaluation of such strategies. However, it is currently unknown how the accuracy of GPS traces impacts upon our understanding of where officers are at a given time, and how this varies for different GPS refresh rates. Most existing studies that use GPS data fail to acknowledge this. This study uses GPS data from police officer radios and ground truth data to estimate how accurate GPS data are for different GPS refresh rates. The similarity of the assumed paths is quantitatively evaluated, and the analysis shows that different refresh rates lead to diverging estimations of where officers have patrolled. These results have significant implications for the measurement of police patrols in micro-places and for evaluations of micro-place-based interventions.

Introduction
Police patrols are often targeted at areas of above-average risk of crime occurring-areas known as crime hot-spots. These patrols are generally acknowledged to be effective (Braga et al. 2019), though how effective they can be and how to optimise patrol strategies remains a topic for debate. A critical issue with measuring the effectiveness of police patrols lies in the accurate measurement of the patrols themselves. Throughout the twentieth century there existed only two practical methods for measuring where and when officers were on patrol: asking officers to record when they entered or exited a hot-spot, or deploying additional observers to record this information instead. Both methods require an additional resource burden to be met and are subject to various sources of error. More recently a third and far less labour-intensive alternative has become possible: the use of Global Positioning System (GPS) data collected from officer-worn radios. Several studies have now used GPS data to measure police patrols (Ariel et al. 2016;Williams and Coupe 2017), and they make a necessary and welcome addition to the literature. While the use of GPS data to measure patrols holds many advantages over the previous methods, a question remains: how accurate is that measurement? A critical contributing factor to this question is how often the GPS data are recorded on a given patrol. Due to limitations associated with police radio batteries, it is not feasible to confirm an officer's location every second or even every 10 s, and so the frequency with which an officer's location is measured (in the form of a GPS 'ping') will affect how accurately their true patrol path can be estimated. Based on conversations with several UK police services, 1 the standard 'refresh rate' for officer-worn radios is between 2 and 5 min, and the path between these pings must be interpolated in order to estimate patrol routes. This is becoming particularly relevant as patrol areas are being designated at more granular spatial and temporal resolutions, down to hot-spots that measure only a few hundred metres across and that are 'hot' for less than a day (see Mohler et al. 2015, for an example). This paper presents two experiments which attempt to measure the accuracy of GPS data under real-life conditions.
The first experiment was conducted with a police force in the North of England to verify how accurately a GPS ping from an officer-worn radio represented the true location of the officer at the time the ping was recorded. The second experiment was conducted with the Metropolitan Police Service (MPS) London and used a faster than standard ping refresh rate in order to measure how different refresh rates impacted on the assumed paths which officers took (and thus would influence the estimated police patrol time in crime hot-spots). The rest of this section provides an overview of the police patrol literature with a particular focus on how it has been measured. This is followed by the first experiment, which demonstrates that GPS data do in most cases accurately represent the officer's location at the point of measurement and thus provide an adequate basis from which to try and interpolate their complete path. The second experiment interpolates different officer patrol paths by sampling from the GPS data at different rates. A similarity metric is then used to compare how far apart an officer's assumed path would be based on different GPS refresh rates. The final section discusses how the results of these experiments have implications for both past and future research.

Measuring police patrol
Whilst it is now accepted that hot-spot policing can reduce crime, a question that remains is how much time officers should spend in a hot-spot to maximise efficiency and effectiveness. Research focussed on the relationship between the amount of time police officers spend in a hot-spot-also called dosage-and the benefits produced remains sparse (Bowers et al. 2004). There are some notable studies, such as an analysis by Koper (1995) of a preventative patrol experiment conducted in Minneapolis (Sherman and Weisburd 1995). The work was based on the recordings of trained observers who were positioned in 100 active hot-spots; they recorded the length of time officers spent in the hot-spot and the crime and disorderly behaviour in and around it. Koper found that fewer than 10 min of police presence produced no noticeable improvement in deterrence when compared to a quick drive-by. Fifteen minutes of police presence did have an effect, but more than 15 min had diminishing returns. However, he cautioned against interpreting the results as definitive, as the effect was not statistically significant. This peak in efficiency at 15 min (known as the 'Koper Curve') has become a 'golden rule' in preventative patrol (Perry et al. 2013), despite a lack of further study. Separate studies have tested whether 15-min patrols are effective (e.g. Telep et al. 2014) and found this can lead to a significant reduction in crime, but the effect of different amounts of dosage has received little scrutiny. One reason for this is the significant challenge associated with quantifying exactly how long officers spend in hot-spots. Until recently, this could only realistically be achieved with very laborious methods: stationing observers within a hot-spot (e.g. Koper 1995); asking police officers to record exactly when they entered and exited the area (e.g. Ratcliffe et al. 2011;Sherman and Weisburd 1995); or analysing police logs (e.g. Telep et al. 2014). The alternative has been to assume that treatment had occurred without actually measuring it.
However, the fact that the patrols are assumed to take place in the correct locations, rather than being empirically measured, is an inherent weakness of such a study due to the potential for implementation failure as was observed in the studies by Sherman et al. (1989), Sherman and Weisburd (1995) and Telep et al. (2014). A failure to deliver what was planned in full or in part is a frequent problem in crime prevention practice (see Knutsson and Clarke 2006), and means that activity realised in practice may differ substantially from what was intended. This has the potential to undermine studies which adopt an 'intention-to-treat' evaluation model (e.g. Andresen and Hodgkinson 2018;Novak et al. 2016), where implementation activity is assumed but not measured and highlights the importance of directly measuring policing dosage. The quantification of dosage has been and remains a considerable challenge. Even when observers or police logs are used, there are often issues of precision; officers can stray outside the patrol zone without realising it (Sorg et al. 2014) or ...through boredom or a perception that they were displacing crime to nearby streets would stray for a time if they were aware of areas of interest just beyond the foot patrol area... (Ratcliffe et al. 2011). Since Koper's 1995 study, few studies have looked at how the amount of police patrol dosage impacts on crime. As patrols are directed to increasingly small micro-areas, it follows that the measurement of patrols needs to be increasingly precise and this remains a challenge. However, there has been a steady growth in the usage of GPS devices by police forces. These provide a new method for tracking where officers move and thus a new way of estimating police dosage. The next section discusses the strengths and weaknesses of this approach along with an overview of the few studies which have so far utilised these data. Using GPS data The growth in usage of GPS devices by police forces provides a capability for tracking where officers move with much greater accuracy than has previously been possible. Potentially the first study to utilise GPS data to measure dosage was conducted in Peterborough (UK) using radio data for the movement of Police Community Support Officers (PCSO) (Ariel et al. 2016). In that study, GPS pings were recorded every minute, and the dosage in a hot-spot was measured as the time between the first GPS ping within the hot-spot, and the first ping without. This study found that police patrols had a significant impact upon crime and disorder, with patrols lasting up to around 15 min and on average lasting around 8 min. However, in the analysis conducted, patrol dosage was measured in the aggregate over the entire study period, meaning that variation in patrol dosage per day, or per police shift, was not considered. Williams and Coupe (2017) also measured police patrol dosage using GPS data. Specifically, the study was concerned with whether more frequent but shorter periods of patrol dosage (nine periods of 5 min each) had a greater or lesser impact on crime than less frequent but longer (three periods of 15 min) patrols. The 'crackdown, back off ' theory hypothesised by Sherman (1990) stated that the deterrent effect generated by police patrols 'decays' once there is no police presence. Williams and Coupe hypothesised that more frequent patrols, "might arguably allow less time for what Sherman calls "deterrence decay" to kick in, so that there would be less crime. 
" However, their findings suggest that the longer, less frequent patrols were more effective at preventing crime. GPS data are not a panacea for measuring police patrols and do come with some drawbacks. Chief among these is the fact that signals do not account for every step in an officer's path. In their study, Ariel et al. (2016) were able to use 1-min refresh rates. However, as mentioned earlier, the standard ping rate (at least for patrol officer radios) is generally every 2 to 5 min due largely to data collection costs and radio battery life considerations. Williams and Coupe (2017) did not report the time between GPS pings in their study. Given the delays between the recording of foot-patrol locations (even if this is only 1 min), the paths taken between GPS pings require interpolation. If employed as a micro-level measure of dosage, this can introduce errors into patrol evaluations (which will increase with the latency between GPS pings). Vehicle-based police patrols are less likely to be impacted by the GPS issues discussed for two key reasons. First, Automated Vehicle Locator (AVL) GPS pings usually occur much more frequently; either every 10 to 15 s or every few 100 m of travel (for an example, see Weisburd et al. 2015) both because a vehicle is likely to be travelling much faster than a person on foot and because battery life is less of a concern when the GPS device is installed in a vehicle. Second, vehicles are confined to the road network and as such their potential paths between pings are much more restricted and thus easier to interpolate accurately when compared to foot-based officers who have no such restriction. Map-matching algorithms seek to turn GPS data into digitised complete paths and mitigate the measurement errors inherent in GPS data. A large number of map-matching algorithms now exist though none is recognised as 'the best' (Houda 2016). In part this is due to the requirements of the algorithms and the input data. Algorithms perform better or worse dependent on street network densities and the rate of data collection (the sample rate), and the computational power available can limit the complexity of the algorithm in use. The use-case can also have a significant impact on which algorithms can be used, for instance, whether all data are historic (allowing the algorithm to 'look ahead' in the data) or whether data input is in real time (also known as off-line and on-line models). For this reason, although we focus our attention on foot-based patrol dosage we remain mindful that the issues discussed may also occur, albeit in a more limited way, for vehicle-based patrols. Another concern with GPS data is that systematic bias can exist within GPS location measurements and come from various sources; satellite orbital errors and clock bias, receiver clock errors, refraction in the ionosphere and troposphere, and signal multipath (He et al. 2011). Whilst there are methods for reducing or removing most of these errors, signal multipath-where the signal between the satellite and receiver is reflected by large objects, causing non-direct paths to be taken-is distinctly problematic. Within an urban environment this is known as an 'urban canyon' issue; whereby tall buildings (or other structures) interfere with the GPS tracking system, causing it to incorrectly interpolate the location as signals are reflected off of buildings, (see Fig. 1). 
A study in the town of Görlitz, Germany found that the average measurement error in areas with broad streets and few tall buildings was 2.5 m, whilst areas with narrow streets and mostly tall buildings had an average measurement error of 15.4 m (Modsching et al. 2006). Despite these challenges, GPS data provide a valuable method by which patrol data can be more accurately computed than was possible using previous methods. This paper is motivated by a desire to quantify foot-patrol measurement errors when GPS data are the basis of that measurement. What follows is an investigation of how significantly measurement errors might impact on the estimation of police dosage in micro-places, and thus on evaluations that try to account for police dosage. With this in mind, it is important to consider the tools and expertise that practitioners and researchers might have at their disposal. This analysis is in no way trying to improve on the sophisticated and proprietary algorithms used by companies such as Google, Microsoft, Uber, or CityMapper-companies that have all invested heavily in mapping systems which can take raw GPS tracking data and interpolate an individual's path through the urban network, determine their likely mode of transport, and account for other factors such as traffic conditions, environmental factors, and potentially more accurate path data from other users of their services. These companies also have a broader range of tools with which to measure movement. These include: multilateration (measuring the time that energy waves sent from a mobile phone take to reach different network towers in the area); mobile phone signal strength measured from several network cell towers; crowd-sourced WiFi data from nearby receivers; and a much more extensive dataset of pedestrian movement. Instead, this paper focusses on how accurately police patrol dosage can be estimated, particularly from the viewpoint of a police analyst or academic researcher who might have access to GPS data from officer-worn radios. The number of studies that have already utilised GPS data in the measurement of police dosage is small, as the technology is relatively new in law enforcement and thus evaluations have only recently been able to utilise GPS data to measure patrols. Some have counted the number of GPS pings within patrol locations and then multiplied that by the (standard) time between pings. For example, the experiment by Ariel et al. (2016) used GPS data where the location was recorded every minute. If, for instance, three pings occurred within a patrol box, they would count this as 3 min of dosage. Alternatively, a 'join-the-dots' approach has been used, whereby an officer's path was assumed to be a straight line between GPS pings, traversed at a constant speed. When one ping falls outside a patrol location and the next ping falls inside it, the entry time can be calculated accordingly (Hutt et al. 2018); both strategies are sketched in code below. For both these strategies, the frequency with which pings occur, and where along an officer's path they occur, may have a significant impact on where dosage is assigned. The next section outlines the first experiment and confirms that GPS data can be used as a reliable measure of where officers were at the time. This is followed by the second experiment, conducted with the MPS in London, to measure how different GPS data refresh rates affect the interpolated paths officers have been estimated to have taken.
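Before the experiments, here is a minimal Python sketch of the two dosage-estimation strategies just described, ping counting and 'join-the-dots' interpolation. The data layout (pings as dicts with t/x/y keys) and the in_hotspot point-in-polygon test are illustrative assumptions, not details from any of the cited studies.

```python
def dosage_by_ping_count(pings, in_hotspot, refresh_s):
    """Strategy 1: count pings inside the hot-spot and multiply by the
    nominal refresh interval (e.g. 3 pings at 60 s -> 180 s of dosage)."""
    return sum(1 for p in pings if in_hotspot(p["x"], p["y"])) * refresh_s

def dosage_by_interpolation(pings, in_hotspot, n_steps=20):
    """Strategy 2: 'join-the-dots'. Movement between consecutive pings is
    assumed to be a straight line at constant speed, so each interval's
    time is apportioned by the fraction of the segment lying inside the
    hot-spot (approximated here by sampling n_steps + 1 points)."""
    total = 0.0
    for a, b in zip(pings, pings[1:]):
        dt = b["t"] - a["t"]
        inside = sum(
            in_hotspot(a["x"] + (b["x"] - a["x"]) * k / n_steps,
                       a["y"] + (b["y"] - a["y"]) * k / n_steps)
            for k in range(n_steps + 1)
        )
        total += dt * inside / (n_steps + 1)
    return total
```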
The paper concludes with a discussion of the findings and their implications for future evaluations of micro-place policing interventions.

Experiment 1: Comparing police GPS data against a known path
A preliminary experiment was conducted in conjunction with a police force in the North of England to assess the accuracy of GPS data collected via officer-worn radios. The main objective was to establish whether the officer GPS data accurately reflect where the officer actually was. The researcher accompanied officers on foot patrols in two areas-one more suburban and a city centre-over 2 consecutive days, recording the author's location approximately every 15 to 20 s using a smartphone-based GPS recording application. Notes were made during the patrols of the exact paths taken, and these, along with the researcher's recollection of the paths, were used to verify the accuracy of the smartphone data. A very small number of smartphone data points were incorrect, and these were removed from the analysis so that the true paths were accurately specified. GPS data collected from officer radios every 2 min were then provided by the police force for the officers that had been accompanied. Figure 2 shows the officer 2 and researcher GPS pings for the city centre patrol. For ease of interpretation, the figure shows an interpolated path between successive GPS pings as a direct line between them. Each officer GPS ping was matched to the researcher ping that occurred nearest in time so that the spatial distance between them could be measured. A total of 41 matched pairs were recorded in the residential area and 30 matched pairs in the city centre, with median distances between matched pings of 15.3 m and 18.4 m respectively. To account for the fact that the researcher and officers were moving, only pairs of pings that occurred within 15 s of each other are included, with the median time between matched pings being 7 s for both groups. It was expected that there would be some distance between the matched pairs for several reasons. First, the average pedestrian walking speed is 1-1.4 m/s (Levine and Norenzayan 1999;Snaterse et al. 2011), and so if the matched pings do not occur at exactly the same time there will be some distance between them even if spatial measurement were perfect. Second, the GPS receivers were not being held by the same individual, and so again, even if all other measurements were perfectly accurate, there would be an expected distance of several metres between matched pings. And finally, there is measurement error associated with the devices being used to record both the officer and researcher locations. On this final point, the GPS receivers in the officer radios have a Circular Error Probability of 5 m, meaning that for 50% of GPS pings the true location is within 5 m of the reported location (95% within 10 m). A study of smartphone positional accuracy by Merry and Bettinger (2019) found that the error in an urban environment was in the range of 7 to 11 m for an environment similar to that of this study. Thus, taking these three measurement uncertainties together, it should be expected that there is some distance between the matched ping pairs. The distributions of the matched pair distances are shown in Fig. 3. These results do not assess the exact measurement error of the GPS receivers used by the officers; that would have required measuring the exact location of an officer and comparing it to a GPS ping recorded when they were stationary in that location.
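As an illustration of the matching procedure just described, the following is a minimal sketch that pairs each officer ping with the researcher ping nearest in time, keeps pairs within 15 s, and returns the great-circle distances (whose median corresponds to the 15.3 m and 18.4 m figures above). The (t, lat, lon) tuple layout is an assumption about the data.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS84 points."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = p2 - p1
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def matched_pair_distances(officer, researcher, max_dt_s=15):
    """officer, researcher: lists of (t_seconds, lat, lon), time-sorted.
    For each officer ping, find the researcher ping nearest in time and
    keep the pair only if the time offset is within max_dt_s."""
    out = []
    for t, lat, lon in officer:
        rt, rlat, rlon = min(researcher, key=lambda p: abs(p[0] - t))
        if abs(rt - t) <= max_dt_s:
            out.append(haversine_m(lat, lon, rlat, rlon))
    return out
```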
However, these results do provide some reassurance that the data is not wildly inaccurate, (at least in the majority of cases) and can broadly represent the paths officers have taken. An important auxiliary finding was made from this experiment: there are times when officers are not traversing known roads or paths. On both patrols there were instances of officers traversing areas which on a map would appear to have no clear through-way, whether they be unmapped paths between streets, across parks, or through other public spaces. A similar pattern was discovered later whilst conducting patrols with officers in London; there were several occasions when the officers path diverges from roads and pavements as they make their way through housing estates and other non-standard walkways. These paths often meander as officers try to cover the area of their patrol, which highlights another disparity with standard movement-the objective of the patrol is not to get from one point to another as quickly These phenomena highlight an important issue in trying to accurately map foot-patrol paths; attempting to 'correct' GPS paths by matching them to known road and path structures (a process known as map matching) may in fact add errors, particularly as most map-matching algorithms assume a certain efficiency of path finding between points. Foot patrols, unlike vehicle patrols, are not constrained by the 'official' path that exists nor by the standard desire to get between points as efficiently as possible. Building on these insights, the next experiment was conducted in a densely populated area of Southern England and was designed to measure how the paths interpolated from GPS data differ when using data collected at different rates. Whilst it was not feasible to systematically record the true path that officers took, the purpose of the experiment was to evaluate whether different GPS ping rates would lead to significantly different patrol paths being assumed. Experiment 2: The effect of GPS refresh rate on interpolated officer paths In 2017, approximately 40 police officers from the MPS agreed to wear a secondary radio whilst out on foot patrol. These radios transmitted the officer's location every 30 s 3 -a significantly faster refresh rate than the 5 min interval used for standard MPS body-worn radios. The purpose of this more frequent refresh rate was to allow for foot patrol paths to be measured at a higher resolution. Due in part to radio malfunction, a total of 31 officers ultimately participated in the experiment and recorded 214,342 'location pings' using the secondary radios. This equates to approximately 1786 h of recording or 223 8 h police shifts ranging from the 8th November 2017 to the 16th January 2018. Figure 4 shows the distribution of the data over the study period and that over time officers were less likely to carry their secondary radio (or potentially that the radios developed technical issues), particularly after the Christmas break. All the data have been included in the analyses that follow. A notable feature of the data is that they were not just recorded when officers were conducting patrols. For this reason, the data have been cleaned so that only data relating to actual foot patrols were used for the analysis; removing, for instance, time spent in police stations or travelling either in police vehicles or on public transport. 
Whether an officer's movements were on foot or by vehicle was determined by calculating the average speed between GPS pings and discounting any travel at more than 2 m/s. As discussed above, the standard pedestrian walking speed is 1 to 1.4 m/s. Given that police officers carry considerable extra weight in the form of their vests and equipment, it was assumed that they would not be walking any faster than a standard pedestrian when on foot patrol. This also had the effect of removing some of the more extreme cases of the urban canyon effect, whereby an officer would suddenly appear several hundred metres away from their previous ping location just 30 s earlier. In other words, by removing the GPS data which imply the officers were walking improbably fast, the remaining data more accurately reflect the 'ground truth' of patrol routes. Officer paths were interpolated based on the 30 s refresh rate GPS data to create a baseline 'assumed path'. Prior to Experiment 1 (above) being conducted, it was anticipated that a map-matching algorithm would be used to try and accurately interpolate officer paths in this experiment. However, given the observation that officers were not always constrained to known pathways, this was discarded, and the method of interpolation used was a simple 'as the crow flies' direct line between consecutive GPS pings. Subsets of the GPS data are then created such that paths are interpolated based on 60, 90, 120 (and so on) second intervals, up to 5 min intervals. There are many ways of measuring the similarity of two paths. Magdy et al. (2015) provide a useful review, separating the methods into spatial and spatio-temporal similarity measures. The spatio-temporal methods can immediately be discounted for the present analysis: the paths we wish to compare (30 s ping rates versus more sparse ping rates) are derived from the same dataset, and thus any temporal analysis is not sensible. Of the spatial similarity measures, the Fréchet metric (or distance) is the most popular similarity measure in use according to Gudmundsson et al. (2011), and its use here is also appropriate. The Fréchet distance can be described as follows. Assume we have two paths, A and B. At the start of path A there is a dog and at the start of path B the dog's owner. Both the owner and the dog can walk along their respective paths and they can each vary their speed, though they cannot move backwards. The Fréchet distance is the minimum length of leash that would be necessary to connect the dog and its owner for the entire journey from the start to the end of their respective paths. Alternatively, and more loosely, consider path A as being made up of infinitely many points: for each point, calculate the shortest distance to path B, and take the maximum of all these shortest distances. Unlike this looser description, however, the Fréchet distance additionally requires both paths to be traversed monotonically from start to end. It is important to note that for our data, the distances are not only calculated where the GPS pings occur, but along the entire path. These paths are also often referred to as trajectories. As one of the trajectories being compared in this analysis is created from a subset of points from the other (e.g. the 60 s trajectory points are a subset of the 30 s points), there will of course be regular intervals where the trajectories 'touch'. However, this is not an issue, as the Fréchet distance is the minimum leash length over the entire trajectory.
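For readers who want to experiment, a minimal Python implementation of the discrete Fréchet distance follows. This is the standard dynamic-programming formulation of Eiter and Mannila, evaluated only at the polyline vertices rather than continuously, so it is an approximation of (and an upper bound on) the continuous metric used in the study.

```python
import math
from functools import lru_cache

def discrete_frechet(P, Q):
    """Discrete Fréchet distance between two polylines P and Q, given as
    lists of (x, y) points. The recursion depth grows with path length,
    which is fine for short sequences like the sub-patrols used here
    (a median of roughly nine pings each)."""
    def d(i, j):
        return math.dist(P[i], Q[j])

    @lru_cache(maxsize=None)
    def c(i, j):
        if i == 0 and j == 0:
            return d(0, 0)
        if i == 0:
            return max(c(0, j - 1), d(0, j))
        if j == 0:
            return max(c(i - 1, 0), d(i, 0))
        return max(min(c(i - 1, j), c(i - 1, j - 1), c(i, j - 1)), d(i, j))

    return c(len(P) - 1, len(Q) - 1)

# e.g. discrete_frechet(path, path[::2]) compares a trajectory against
# itself sampled at every other ping, mirroring the 30 s versus 60 s
# comparison described in the text.
```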
That said, in measuring the similarity of two trajectories, using the Fréchet distance over the full patrol would provide very little useful information, as it would only provide us with the maximum distance between the trajectories. The analysis would be highly susceptible to noise within the data, such as erroneous pings caused by urban canyon effects. To account for this, instead of computing the Fréchet distance for the entire path, it is segmented into multiple sub-patrols, defined as the shortest sequences of consecutive GPS pings in each trajectory which share a common start and end ping. The Fréchet distance is then calculated for each sub-patrol. This provides a distribution for the similarity between the two trajectories. As a basic example, Fig. 5 shows a set of points in black which might represent a patrol officer's GPS pings starting from the location (0.2, 0). Let us imagine that these pings occur every 30 s and label them p1, p2, p3, p4, etc. Three trajectories have been created:

• T1: based on taking every ping and interpolating the 'assumed path' (the solid black line) as the route taken by walking in a straight line between each ping.
• T2: an interpolated path constructed by using every other ping, p1, p3, p5, p7... (the dashed blue line). This would be a potential interpolated path if GPS pings were recorded every 60 s rather than every 30 s.
• T3: an interpolated path constructed by using every third ping (dashed red line). Note that the paths need not all start from the same point, and so this trajectory is formed of p3, p6, p9, p12...; the important distinction is the frequency with which the points are used in the interpolation, i.e. this represents a 90 s ping refresh rate.

There are two alternative interpolations that use a 90 s GPS ping rate: one which starts at p1 and one which starts at p2. The sub-patrols for T1 and T2 are defined by {p1, p2, p3}, {p2, p3, p4}, {p3, p4, p5}, etc., and the sub-patrols for T1 and T3 are defined by {p1, p2, p3, p4}, {p2, p3, p4, p5}, {p3, p4, p5, p6}, etc. The maximum Fréchet distance between T1 and both of the 60 s ping-rate possibilities (i.e. using every odd ping (T2) or every even ping (not shown)) is approximately 1.5 and is highlighted on Fig. 5, whilst the Fréchet distance between T1 and the 90 s ping-rate possibilities (exemplified by T3) is 1.98 and is again shown on Fig. 5. Rather than use just this single number, the Fréchet distance is computed for each sub-patrol as defined above. This provides a distribution of how similar the trajectories are by evaluating sections of the trajectories. Given the very simple (and low sample count) example provided, the distributions for this example are given as boxplots in Fig. 6. The basis of the main experiment on police trajectories is to compare more sparse GPS data against an assumed path using a 30 s ping rate, and thus it is important that the assumed path being used as a baseline does in fact contain a GPS ping every 30 s. As such, a second phase of data cleaning was required. Data for each officer were split into 'sub-patrols' by selecting sequences of pings that occurred within 40 s of each other and where the maximum distance between two pings was 150 m. A threshold of 40 s was allowed, as the radios did not always ping exactly every 30 s, perhaps due to delays in the data being received. This produced 22,951 'sub-patrols'.
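A minimal sketch of this segmentation step follows; as before, a projected metric coordinate system is assumed for the ping locations.

```python
def split_subpatrols(pings, max_gap_s=40, max_step_m=150):
    """Split a time-ordered list of pings into 'sub-patrols': runs of
    consecutive pings no more than max_gap_s apart in time and no more
    than max_step_m apart in space. Each ping is (t_seconds, x_m, y_m)."""
    subpatrols, current = [], []
    for ping in pings:
        if current:
            dt = ping[0] - current[-1][0]
            dx = ping[1] - current[-1][1]
            dy = ping[2] - current[-1][2]
            if dt > max_gap_s or (dx * dx + dy * dy) ** 0.5 > max_step_m:
                subpatrols.append(current)
                current = []
        current.append(ping)
    if current:
        subpatrols.append(current)
    return subpatrols
```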
Any sub-patrol with only one or two GPS pings was removed and, after inspection of the data, any sub-patrol with more than 60 GPS pings (i.e. more than about 30 min of persistent 30 s pings) was also removed. This reduced the number of sub-patrols to 10,925. The long uninterrupted patrols were found to be due to a radio being switched on and left in one place for an extended period of time, or to an officer being stationary, for instance while at a hospital or a school. The purpose of the data collection was to measure actual patrol movements, and so removing these 'stationary periods' is not a concern. Similarly, if during a sub-patrol an officer's average speed was below 0.3 m/s, the sub-patrol was discarded because the officer was mainly stationary. This left a final sample of 6064 sub-patrols to use for the analysis. The median number of GPS pings in a sub-patrol was 8.98 and the inter-quartile range was 4 to 11. One such sub-patrol is given in Fig. 7, with examples of potential paths interpolated using different ping refresh rates. It is interesting to observe that there is quite some variation in the routes, particularly between the 30 s and 5 min ping rate paths.

Fig. 6 Distributions of Fréchet distances for example trajectories

It is also worth highlighting that in this particular example, the roads and walkways that an observer might assume the officer walked are likely to differ between the 30 s and 5 min refresh rates; this is an important issue to consider given the increasing use of road segments as the unit of analysis both for defining crime risk (Rosser et al. 2017; Tompson et al. 2009) and for analysing patrol routes (Davies and Bowers 2019).

Experiment 2: Results

The Fréchet distances were calculated between officer patrol paths interpolated from 30 s GPS ping rates (the officer's assumed path) and paths interpolated from more sparse samples of this same dataset. The data shown for each ping rate therefore describe the distance between the path at the given ping rate and the 30 s 'baseline' path. The distributions of these distances are shown in Fig. 8. The similarity of the patrol paths reduces as the assumed path is interpolated based on sparser data. A 60 s ping rate provides a very similar path to that produced by a 30 s ping rate, and although there is a long tail to the distribution, the median distance between paths using 30 s ping rates and paths using 60 s ping rates is 0 m. However, less frequent refresh rates quickly lead to dissimilar paths: the median Fréchet distance when comparing a path using 30 s ping rates to a path using 5 min ping rates (the standard for MPS radios) was 60.1 m, and again there is a long tail to the distribution. Table 1 provides some basic descriptive statistics of the distributions. It illustrates that the decrease in accuracy is non-linear: accuracy decreases rapidly (for refresh rates of 90, 120 and 150 s) but then declines much less quickly. Whilst a median divergence of about 60 m between the 30 s and 5 min ping rates may not seem great (approximately two thirds of the length of a Premier League football pitch), it is important to highlight the spatial resolution at which police patrols are now being measured.
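Before turning to what these distances mean in practice, the following sketch shows how the per-ping-rate distributions (Fig. 8, Table 1) could be assembled from a cleaned sub-patrol. It reuses `discrete_frechet` from the earlier sketch; the `densify` helper and the pooling of phase offsets are our own simplifications, the size and average-speed filters described above are assumed to have already been applied, and none of these names come from the study's code.

```python
import numpy as np

def densify(P, step=1.0):
    """Add points along each segment so the discrete Fréchet distance
    approximates the continuous one to within roughly `step` metres."""
    out = [P[0]]
    for p, q in zip(P[:-1], P[1:]):
        n = max(int(np.ceil(np.linalg.norm(q - p) / step)), 1)
        out.extend(p + t * (q - p) for t in np.linspace(0, 1, n + 1)[1:])
    return np.array(out)

def frechet_distribution(coords, k):
    """Fréchet distances between the 30 s 'assumed path' and a path built
    from every k-th ping (k=2 for 60 s, k=10 for 5 min), pooling all k
    phase offsets and evaluating one sub-patrol at a time."""
    dists = []
    for offset in range(k):
        kept = list(range(offset, len(coords), k))
        for a, b in zip(kept, kept[1:]):
            dense = densify(coords[a:b + 1])     # full 30 s path from a to b
            sparse = densify(coords[[a, b]])     # straight line a -> b
            dists.append(discrete_frechet(dense, sparse))
    return np.array(dists)   # summarise with medians/IQRs as in Table 1
```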
With hot-spots of crime being defined in the tens or hundreds of metres, these data uncertainties could begin to have a significant effect on the perceived efficacy of police patrols and on the utility of the more sophisticated crime prediction systems being developed. Indeed, some predictive crime mapping systems (such as PredPol) define hot-spots that are roughly 150 m across, meaning that a 60 m average error could be the difference between an officer appearing near the centre of the hot-spot or outside it entirely.

Conclusions

The intention of this paper was to explore the potential impact of measurement error in the use of GPS data to measure police foot patrol. The advent and wider availability of GPS technologies to measure movement offer new opportunities to develop better evidence to support policing. However, analyses using GPS data should not simply assume that the data reflect the true picture of policing activity. A useful conclusion would identify minimum acceptable standards in terms of GPS ping rate when it comes to accurately representing patrolling locations. If we ignore for a moment the additional costs associated with more frequent ping rates and consider only the accuracy of the path in order to define the 'best' ping rate, then a 60 or 90 s rate provides very little deviation from a 30 s ping rate, and so in an ideal world this might be the suggested optimal rate (when measuring officer movements in micro-places). However, practical considerations make defining what is 'best' unwise at such a general level. As mentioned earlier, there are substantial additional costs associated with increased ping rates: the battery in the officer's radio will deplete at a faster rate, shortening both the time between charges (and thus the time officers can be on patrol) and the overall life of the battery through more frequent charges. The latter amounts to a substantial financial burden if a police force must replace all officer radios more frequently. More frequent ping rates also require greater data storage capacity for the organisation to keep the same period's worth of data; the cost of a force's radio contract is also contingent on the frequency of pings, as more frequent data collection has greater overheads for the service provider. Finally, more frequent pings produce a greater volume of data to analyse, increasing the computational demands on the analyst or any system displaying those data. A secondary consideration is what the data are being used for. If a police force has no need to measure officer movement at such granularity, or if there are very few areas with dense urban street networks, then a more sparse ping rate may suffice for its needs. For these reasons it is unwise to state what the 'best' ping rate is; it will be different for each police force, and only the force itself has a true understanding of its needs. It is important to note that the analysis presented in this paper, and the previous studies which used GPS data discussed earlier, are concerned with foot-based police patrols, which are used in the UK to conduct preventative patrols, particularly in crime hot-spots (College of Policing 2012). This paper has not sought to evaluate these paths against hot-spots for two reasons.
First, the officers involved in this study were conducting foot patrols, but the focus of where they were supposed to be patrolling and what they were trying to achieve in those areas were not known; it would be inappropriate to try to measure their impact on local crime issues without being sure which issues they were trying to address. Second, the measurement of police dosage in a given area requires a clear definition of the spatial and temporal units of analysis, i.e. over what area and what time period should dosage be measured? Such analyses are certainly necessary and worthy of future study, but they were beyond the scope of this experiment. To summarise, assumed officer patrol paths were interpolated using GPS data collected with a 30 s refresh rate. The similarity of these paths to paths based on increasingly sparse data was compared to evaluate how the frequency of data collection affects the path an officer is assumed to have taken. Police forces in the UK are known to use GPS refresh rates of between 2 and 5 min, and so, based on the results of this study, they would produce assumed paths which, at any given point, deviate from these more regularly measured paths by a median distance of 28 to 60 m. Whilst this distance may seem relatively small, it is only an average path deviation, and when the sum total of patrol officer paths is considered it may lead to substantially different estimates of patrol dosage. This is particularly the case when analysis is conducted at small spatial resolutions, increasingly the standard when both defining and evaluating hot-spot policing strategies. What this analysis highlights is that evaluations of predictive crime mapping systems in particular must pay attention to the accuracy of the data being used. As discussed in the introduction, some evaluations do now note the issue of data inaccuracy, but the authors are unaware of any study to date which has investigated the size of the potential errors caused by the measurement uncertainty. This is not to say that past evaluations have been wrong or insufficiently robust, only that the exact magnitude of any evaluated effects of patrol at micro-places needs to be properly considered. For police forces that are considering implementing or procuring micro-level hot-spotting systems, there is a clear need to demonstrate that the system they intend to use is, first, better than any alternative system (by accurately measuring the difference) and, second, a better use of resources than an entirely separate project. The implication of this paper, then, is that more sparse ping rates may substantially alter the evaluated impact of patrol by mis-recording patrol intensity in micro-places, which in turn could affect what intervention is deployed. From a more bureaucratic viewpoint, it is also important that the supplier of any system which is not procured cannot credibly challenge the results on the grounds of measurement uncertainty; shorter ping refresh rates reduce the scope for such challenges and provide a more robust defence should they be mounted. Several limitations exist within the analysis described here. The true paths walked by the officers are unknown. The GPS data still only approximate the officers' locations; the paths between pings have been interpolated, and measurement error still exists within the collection of GPS data.
Urban canyon effects can significantly distort the estimated location of an officer, although concerns over these issues have been somewhat mitigated by the first experiment with the northern police force, which showed that GPS data did provide a relatively accurate representation of the true path taken. An attempt was also made to mitigate some of these issues by removing 'noisy' data, where the distance between pings was unrealistically large or the speed travelled between pings too great for the movement to have been on foot. As such, only trajectories which could confidently be classified as realistic patrol routes were analysed. The use of GPS data to estimate police patrol dosage is a relatively new development. So far, very few studies (Ariel et al. 2016; Hutt et al. 2018; Williams and Coupe 2017) have used GPS data to try to measure the deterrent effect of police patrols, and to the authors' knowledge this is the first to examine the similarity of police patrol paths using GPS data. However, as patrols have become more targeted, the accuracy with which police patrols are measured requires greater consideration. GPS data do not provide a perfect solution to the challenges of measurement; however, they have several advantages over the methods previously used. These include greatly reduced manual effort to record the necessary data. GPS data are already being recorded in order to locate officers when necessary, and so using the same data for patrol estimation requires no additional expenditure. The passive nature of the data collection also provides flexibility in the analysis. Because the data do not rely on officers remembering to record a state change at a given point (such as entering or exiting a hot-spot), they remain valid if the parameters of the measurement change (for example, if officers were told to patrol certain streets rather than specific areas, or if the size of the hot-spots were altered), since they provide a history of the officer's movement rather than just entry to and exit from a fixed area. With a growing number of commercial micro-place hot-spot systems emerging, it is important for police forces to be able to accurately evaluate implementations of such systems. Robust comparisons between such systems need to be feasible in order for police management to determine where their resources can be most efficiently used, and thus the accuracy with which police patrols are measured needs to be clearly stated in any evaluation if practitioners are to have confidence in the results.
Lorentz force effects for graphene Aharonov-Bohm interferometers

We investigate magnetic deflection of currents that flow across the Aharonov-Bohm interferometers defined in graphene. We consider devices induced by closed n-p junctions in nanoribbons as well as etched quantum rings. The deflection effects on conductance are strictly correlated with the properties of the ring-localized quasibound states. The energy of these states, their lifetime and the periodicity of the conductance oscillations are determined by the orientation of the current circulating within the interferometer. The formation of high harmonics of conductance at high magnetic field and the role of intervalley scattering are discussed.

I. INTRODUCTION

Mesoscopic and nanosize looped conducting channels, known as quantum rings (QRs) [1-5], are the simplest electron interferometers that can be defined in a solid. Under coherent transport conditions the QR conductance is determined by the superposition of the wave functions passing through the arms of the ring. The vector potential introduces relative phase shifts between the wave functions [6], which result in Aharonov-Bohm (AB) conductance oscillations with the period of one magnetic flux quantum φ_0 = h/e threading the ring. The shifts appear also when the magnetic field (B) is present only within the inner core of the ring, which is impenetrable for the electron wave function. Nevertheless, for nanosize devices the conductance measurements are usually performed in a homogeneous magnetic field, which is therefore also present in the scattering region. The effects of the magnetic field are a deflection of the average electron trajectory [7-13] for currents injected into the device and the formation of edge currents under quantum Hall conditions [14]. The AB conductance oscillations have been studied for QRs etched in graphene both by experiment [15-20] and by theory [21-31]. The purpose of the present paper is to describe the interplay of the magnetic deflection and the conductance oscillations. We consider the conductance calculated by the Landauer approach for an atomistic tight-binding Hamiltonian and analyze the resonant states localized in the quantum ring with the stabilization method [32]. The localized states interfere with the incident Fermi-level wave functions, which leaves traces on the conductance oscillations. We indicate that it is possible to distinguish two series of AB oscillations which correspond to localized states with clockwise and anticlockwise persistent-current circulation, forming a magnetic dipole moment parallel or antiparallel to the external magnetic field, respectively. The series differ in period and width, and we explain that the magnetic deflection is responsible for both effects. We find that the deflection produces high harmonics of conductance at high B. In the literature the presence of the high harmonics [5,18] is considered a signature of the phase coherence length being much larger than the circumference of the ring, with the period of φ_0/n corresponding to electrons encircling the ring n times [33,34]. For strongly disordered conductors the Al'tshuler-Aronov-Spivak [35] periodicity of φ_0/2 dominates over the Aharonov-Bohm φ_0 period. The appearance of the high harmonics for graphene quantum rings at high magnetic fields was recently observed [18] and attributed to a reduction of scattering involving electron spin flips at high B.
Here we demonstrate that the high harmonics are activated by a high field even in the absence of any dephasing effects. Note that the Aharonov-Bohm oscillations were recently studied for systems of multiple quantum dots connected in parallel, in the context of the conductance harmonics in the nonlinear and electron-electron interaction effects [36-38]. For n-p junctions that are defined in graphene by external voltages, the electron trajectories are deflected in opposite directions at the two sides of the junction, leading to snake orbits [39-44] and the resulting confinement of the current along the junction [45-48]. For a circular n-p junction [44] induced in a graphene ribbon by an external potential (of a scanning probe in particular) the AB oscillations appear due to the coupling of the edge and junction currents [49]. The series of localized states in both etched and potential-induced interferometers are discussed. The comparison of the two types of interferometers reveals the role of intervalley scattering in the appearance of the conductance oscillations.

II. THEORY

We use the tight-binding Hamiltonian for π electrons,

H = \sum_{\langle i,j \rangle} t_{ij} c_i^\dagger c_j + \sum_i V(\mathbf{r}_i) c_i^\dagger c_i, (1)

where the first summation runs over the nearest neighbors and V(r_i) is the external potential at the position r_i of the i-th ion. We consider two types of devices: etched and induced ones. The etched rings are connected to the input and output leads by two narrow graphene nanoribbons [see Fig. 1(a)]. The induced device consists of a wider graphene nanoribbon with a scanning gate microscope tip floating above it [Fig. 1(b)]. For the etched rings the potential V(r_i) is taken as zero everywhere. For the induced interferometers V describes the effective potential of the tip. Due to screening of the Coulomb potential of the tip by the two-dimensional electron gas, we assume an effective potential given by a Lorentzian function, in line with Schrödinger-Poisson modeling [50],

V(\mathbf{r}) = V_t \left[ 1 + \left( |\mathbf{r} - \mathbf{r}_t| / d \right)^n \right]^{-1}, (2)

where n = 2, r_t = (x_t, y_t, 0) stands for the tip position, d for the width of the effective tip potential, and V_t is the maximal value of the tip potential. The hopping elements of the first sum in Eq. (1) include the Peierls phase,

t_{ij} = t \exp\left( \frac{2\pi i}{\phi_0} \int_{\mathbf{r}_i}^{\mathbf{r}_j} \mathbf{A} \cdot d\mathbf{l} \right),

where t is the hopping parameter. The magnetic field is applied perpendicular to the plane of confinement, B = (0, 0, B_0), and we use the Landau gauge A = (−yB_0, 0, 0). We consider the energy range near the Dirac point. The numerical complexity of the problem can be reduced by the scaling approach of Ref. [51], which we apply here. The ribbons modeled here are scaled up with the conditions a = a_0 s_f and t = t_0/s_f, where t_0 = −2.7 eV is the unscaled hopping parameter and a_0 = 2.46 Å is the graphene lattice constant. We apply a scaling factor of s_f = 4. The rescaled magnetic field is B_0 = B s_f², with B being the actual magnetic field applied to the modeled sample. In order to evaluate the transmission probability, we use the wave function matching (WFM) method, as described in Ref. [52]. The transmission probability from the input lead to mode m in the output lead is

T_m = \sum_n |t_{mn}|^2, (3)

with t_mn being the probability amplitude for the transmission from mode n in the input lead to mode m in the output lead. The linear conductance [53] is evaluated as G = G_0 T_tot, with T_tot = Σ_m T_m and G_0 = 2e²/h. The current flow from atom m to atom n, as derived from the Schrödinger equation [54], is

J_{m \to n} = \frac{2}{\hbar} \mathrm{Im}\left[ t_{nm} \Psi_n^* \Psi_m \right], (4)

where Ψ_n is the wave function at the n-th atom. The probability current flux can then be evaluated as

I = \sum_m \sum_{n_m} J_{m \to n_m},

where the first sum runs over the atoms m along a cross-section of the ribbon and the second sum runs over their neighbors n_m located to the right.
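As an illustration of how the Peierls substitution enters a tight-binding code, the sketch below assigns hoppings between neighboring sites under the Landau gauge A = (−yB_0, 0, 0); along a straight bond the line integral reduces to A evaluated at the bond midpoint dotted with the bond vector. The function and constants follow the scaling relations quoted in the text (a = a_0 s_f, t = t_0/s_f, B_0 = B s_f²), but this is our own minimal sketch, not the paper's implementation.

```python
import numpy as np

PHI0 = 4.135667696e-15     # flux quantum h/e in Wb (T*m^2)
T0, A0 = -2.7, 2.46e-10    # unscaled hopping (eV) and lattice constant (m)
SF = 4                     # scaling factor s_f from the text
T_HOP, A_CC = T0 / SF, A0 * SF   # scaled hopping and lattice constant

def peierls_hopping(ri, rj, B):
    """Hopping t_ij = t * exp[(2*pi*i/phi_0) * integral of A . dl] in the
    Landau gauge A = (-y*B0, 0, 0); B0 = B * SF**2 is the rescaled field.
    ri, rj are (x, y) site positions in metres."""
    B0 = B * SF**2
    mid_y = 0.5 * (ri[1] + rj[1])              # y at the bond midpoint
    integral = -mid_y * B0 * (rj[0] - ri[0])   # A_x * dx term; A_y = 0
    return T_HOP * np.exp(2j * np.pi * integral / PHI0)
```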
The results provided below are analyzed with respect to the density of the localized resonant states. The density is evaluated with the stabilization method [32] (see the Appendix).

A. Narrow etched ring

Let us begin the discussion with a narrow ring (the internal radius R_1 = 41.05 nm, and the external one …). In order to identify these lines, a cross section … [Fig. 4(b)]. This current orientation produces a magnetic dipole µ which is antiparallel to the external magnetic field [56]. The interaction of this dipole with the magnetic field (ΔE = −µ·B) leads to the growth of the energies with B that is visible in Fig. 2(a). In Fig. 5(a) the transfer probability (still for E = 0.0586 eV) is confronted with the localized-resonance counter F. Dips of the conductance appear at the peaks of the counter; hence the reversal of the current occurs upon interference of the incoming electron with the ring-localized quasi-bound states. For the anticlockwise current circulation the Lorentz force keeps the current confined within the ring [Fig. 4(c)], hence the very pronounced localized resonances in the stabilization diagram [Fig. 2(b)]. On the other hand, the localized states that correspond to the opposite current circulation are destabilized by the Lorentz force, so they leave only a trace on the stability diagram of Fig. 2(b), but they can still be noticed in the high-energy region in the upper right corner of the plot. In graphene the cyclotron radius at the Fermi energy is r_c = E_F/(e v_F B). For higher E_F the cyclotron radius becomes larger than the size of the ring-ribbon junction, which reduces the magnetic deflection that, for this current orientation, tends to eject the resonance states out of the ring and hence delocalize them. Figure 5(b) shows the Fourier transform of G(B) calculated in finite ranges of B that are marked with different colors in Fig. 5(a). The plots of the Fourier transform are normalized so that in each plot the first peak has the same amplitude as in the blue curve of Fig. 5(b). We find two distinct features: (i) the period of the resonances increases with B and (ii) the higher harmonics are enhanced at high magnetic field. Both of these features can be explained as consequences of the Lorentz force. Feature (i) is due to a reduced effective radius of the ring, which results from the Lorentz force pushing the wave function to the inner core of the ring [cf. Fig. 4(b)]; hence the AB period ΔB = h/(eA) = 2ħ/(eR²) increases. Feature (ii) results from the stabilization of the anticlockwise loop inside the ring, which increases the number of turns of the electron circulation around the ring. The results discussed in Figs. 2, 3 and 5 were obtained for the lowest-subband transport. For higher filling factors, when intersubband scattering is present, the AB oscillation becomes pronounced only for B larger than 10 T [Fig. 2(a)], i.e. when the resonant lines are formed in Fig. 2(b). The results for the high harmonics are summarized in Fig. 6, which shows the Fourier transform of the signal for varied energies within the ranges (0, 10) T [Fig. 6(a)] and (10, 30) T [Fig. 6(b)]. Also at higher energies the magnetic field enhances the higher harmonics. The stabilization of the higher harmonics in stronger magnetic fields was recently found in Ref. [18].
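The Fourier analysis of G(B) in the field windows marked in Fig. 5(a) amounts to a standard discrete Fourier transform. The sketch below, with an artificial two-harmonic signal standing in for the computed conductance and illustrative field ranges, shows how peaks at 1/ΔB and its integer multiples identify the AB period and its harmonics.

```python
import numpy as np

def ab_spectrum(B, G):
    """Amplitude spectrum of conductance G sampled on a uniform field grid B.
    Peaks appear at 1/Delta_B and its integer multiples (higher harmonics)."""
    G = G - G.mean()                                # remove the DC component
    amp = np.abs(np.fft.rfft(G))
    freq = np.fft.rfftfreq(len(B), d=B[1] - B[0])   # in cycles per tesla
    return freq, amp

# Illustrative signal: fundamental AB period 0.5 T plus a weaker 2nd harmonic.
B = np.linspace(10, 30, 2000)
G = np.cos(2 * np.pi * B / 0.5) + 0.3 * np.cos(2 * np.pi * B / 0.25)
freq, amp = ab_spectrum(B, G)
print(freq[amp.argmax()])   # ~2.0 cycles/T, i.e. the 0.5 T fundamental
```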
B. Wide etched ring

The magnetic deflection effects discussed above were limited by the narrow width of the ring. In this part we change the ring radii to R_1 = 23.4 nm and R_2 = 48.95 nm. The results for the conductance and the resonance counter are given in Fig. 7. For B ∈ (0, 10) T one observes both the lines which grow and those which decrease in energy with B, due to the resonant states of both current orientations. Above 10 T the conductance plot [Fig. 7(a)] resolves only the states which go down in energy with an increase of B (clockwise currents), and in contrast the stability diagram [Fig. 7(b)] retains only the states whose energy increases with growing B. The cross section of Fig. 7 is given in Fig. 8 and shows that at high magnetic field wide peaks and narrow dips of conductance appear. The wide peaks correspond to a clockwise current circulation [Fig. 9(a)], while the narrow dips appear with an anticlockwise current [Fig. 9(b)]. The shifts of the peaks [Fig. 7(a)] and dips [Fig. 7(b)] in energy agree with the orientation of the produced dipole moment with respect to the external magnetic field. At high B the dips become too thin to be resolved on the conductance plot [Fig. 8] and the peaks extend to almost any magnetic field. The width of the resonances and antiresonances is related to the lifetime of the quasibound states. The lifetime of the dip-related resonances becomes very large, as they become decoupled from the states of the channel, and they disappear from the conductance plot of Fig. 7(a). On the other hand, the lifetime of the states with the opposite current circulation becomes very small, which removes them from the stability plot of Fig. 7(b). In addition, in Fig. 7(a,b) we can see that below 10 T both the lines that grow and those that decrease with B appear at the same energy with the same period in the external magnetic field. However, for higher magnetic field the states that decrease with B in Fig. 7(a) appear with a much shorter period than the ones that grow in energy in Fig. 7(b). The change of the periodicity is related to the wave function being shifted to the internal or the external edge of the ring, depending on the current circulation and the orientation of the Lorentz force. The opposite shifts for opposite current circulations are well visible in Fig. 9.

C. Induced quantum rings

Let us now briefly discuss an AB interferometer that is formed in a graphene nanoribbon by the external potential of, e.g., a scanning gate microscope tip placed near the center of the ribbon. We consider E_F > 0 and V_t > E_F, so that the tip forms a p-type conductivity region (see Fig. 10) within the n-type nanoribbon [49]. This n-p junction stabilizes an anticlockwise current [48], with the Lorentz force acting towards the junction on both of its sides (red arrows in Fig. 10). For the numerical calculations we took an armchair nanoribbon of width W = 98.4 nm, and the parameters of the potential induced by the tip were V_t = 0.4 eV and d = 24.6 nm. In the conductance dependence on the magnetic field only the resonances that increase with the external magnetic field are observed [cf. Fig. 11(a)]. The current distribution, calculated from the transport problem, and the electron density at the resonance, calculated by diagonalization of the closed version of the system, are given in Fig. 12(a), which shows the passage of the edge currents around the junction. The Fourier transform of the conductance [Fig. 11(b)] indicates the presence of the higher harmonics.
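To connect the tip parameters to the size of the interferometer, the following minimal sketch (under the assumption that Eq. (2) takes the generalized Lorentzian form reconstructed above) solves V(r_np) = E_F for the n-p junction radius. It illustrates how the p region shrinks as E_F approaches V_t, which is precisely the mechanism behind the energy-dependent oscillation period discussed next, and how a larger exponent n makes the potential wall steeper.

```python
import numpy as np

def tip_potential(r, V_t=0.4, d=24.6, n=2):
    """Effective tip potential (eV) at distance r (nm) from the tip axis,
    assuming the generalized Lorentzian V = V_t / (1 + (r/d)^n)."""
    return V_t / (1.0 + (r / d) ** n)

def np_junction_radius(E_F, V_t=0.4, d=24.6, n=2):
    """Radius (nm) where V(r) = E_F, i.e. the n-p interface; the p region
    (V > E_F) shrinks to zero as E_F approaches V_t."""
    if not 0 < E_F < V_t:
        raise ValueError("a p region exists only for 0 < E_F < V_t")
    return d * (V_t / E_F - 1.0) ** (1.0 / n)

for E_F in (0.1, 0.2, 0.3, 0.39):
    print(E_F, round(np_junction_radius(E_F), 1))   # radius decreases with E_F
```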
In contrast to the results for the etched rings, the oscillation period increases with the Fermi energy, since the diameter of the p region is reduced to zero when the Fermi energy is increased (in the limit E_F = V_t the p region disappears). The stability diagram [Fig. 11(c)] contains the resonances that are present in the conductance dependence [see the higher-energy part of …]. These lines correspond to states which are confined entirely within the p potential under the tip, and which (i) form a clockwise current loop (for the 'paramagnetic' states whose energy decreases with the external magnetic field) or (ii) produce a weak magnetic dipole with an anticlockwise current that is induced by the external magnetic field ('diamagnetic' states). An example of the latter is displayed in Fig. 12(b). The resonant states localized under the tip are screened from the edge currents by the anticlockwise loop formed at the n-p interface. Note that the sharp resonances found in the conductance of Fig. 11 could be used as a sensor of magnetic field inhomogeneity, with the resolution of the sensor given by the width of the conductance peak.

D. Zigzag leads

For an etched ring connected to zigzag ribbons acting as leads (conductance given in Fig. 13), with the internal radius R_1 = 23.3 nm, the external one R_2 = 49.1 nm, and leads 17.6 nm wide, one obtains results very similar to those obtained above for the semiconducting armchair ribbon [Fig. 7(a)]: resonances of both current orientations are visible at low field and only the clockwise ones survive at high field. A difference with respect to Fig. 7(a) is the non-zero conductance at low E that is observed only for the zigzag ribbon, due to its metallic character. However, an attempt to form the AB interferometer induced within the zigzag ribbon by an external potential apparently fails (see the blue line in Fig. 14): no AB-periodic conductance oscillations are formed at high field. The reason for the absence of the AB oscillation for the interferometer induced in the zigzag ribbon is the absence of backscattering at ν = 2. The lack of backscattering [57,58] in the lowest subband of the zigzag ribbon results from the chiral character of the carriers or, in other words, from the fact that backscattering would require intervalley scattering, and the latter cannot be induced by a smooth, long-range external potential forming the p region inside the ribbon [57,58]. For the etched structure [Fig. 13] the edges of the ring [Fig. A.1(b)] contain armchair segments which mix the valleys and activate the backscattering. For the induced ring this is no longer the case. In order to observe the AB oscillations in Fig. 14 we needed to introduce short-range disorder to activate the intervalley scattering. The disorder was introduced at the edge, with a random removal of carbon atoms in a region of width 10 nm and length 540 nm close to the upper ribbon edge (with a probability of atom removal of 0.01). The width of the zigzag ribbon both with and without defects is 97.42 nm. The conductance for the ribbon with defects is plotted with the red line in Fig. 14 and exhibits the oscillations missing for the perfect zigzag edge. Without the defects and for the Lorentzian tip potential the quasibound states are valley degenerate (see Appendix A) and they do not introduce any intervalley mixing. In order to observe the intervalley scattering by the quasibound states, the external potential needs to possess a component that is short-range on the scale of the lattice constant.
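A short sketch of the edge-disorder generation described above: atoms inside a strip near the upper edge are removed independently with probability 0.01. The data layout (an array of site positions) and the function signature are assumptions of this illustration, not the paper's code.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def apply_edge_disorder(sites, y_edge, strip_width=10.0, strip_length=540.0,
                        x_start=0.0, p_remove=0.01):
    """Randomly remove carbon atoms within a strip (dimensions in nm) along
    the upper ribbon edge. sites: (N, 2) atom positions; returns the
    thinned site array."""
    x, y = sites[:, 0], sites[:, 1]
    in_strip = ((y > y_edge - strip_width) &
                (x >= x_start) & (x < x_start + strip_length))
    remove = in_strip & (rng.random(len(sites)) < p_remove)
    return sites[~remove]
```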
We replaced the exponent n = 2 in Eq. (2) by 20 and 40 and obtained the results displayed in Fig. 15. For a higher exponent the potential is steeper, and for n = 40 the spatial variation of the potential induces a significant backscattering by the quasibound states, which appear with an Aharonov-Bohm periodicity.

IV. SUMMARY AND CONCLUSIONS

We have studied the coherent transport through Aharonov-Bohm (AB) interferometers with magnetic deflection of electron currents. We considered both etched quantum rings and rings induced by an external potential. We solved the quantum scattering problem and determined the quasibound states using an atomistic tight-binding approach. We identified two series of quasibound states with opposite orientations of the currents. The orientation of the magnetic dipole, and thus the energy shift of the resonances and antiresonances, was observed in the conductance. The magnetic forces, pushing the currents to the internal or the external edge depending on the orientation, distinctly influence the AB oscillation period on the magnetic field scale. Moreover, the Lorentz force increases or decreases the coupling of the ring-localized resonances with the leads depending on the orientation of the current. This amounts to a modification of the lifetime of the quasibound states, which in turn determines the width of the conductance extrema that appear due to the interference of the incident electron wave function with the resonant localized states. Stabilization of the resonances with currents that produce a magnetic dipole antiparallel to the external field produces high harmonics of the conductance at high magnetic field.

Appendix A: Stabilization method

The stabilization method [32] is used for the detection of the resonant states localized within the ring. The number of eigenvalues is normalized to 1, giving the fraction of energy levels in an energy range. The method [32] extracts the states that are localized within the ring, as their wave functions penetrate the leads only weakly and the dependence of their energy levels on the length of the leads is also weak. Throughout the paper, as a resonance counter we plot the fraction of energy levels F found in a given energy range. We solve the eigenproblem of the Hamiltonian (1) for a closed system of the ring connected to finite-size ribbons, which are extended as marked in Fig. A.1. The energy spectrum is calculated as a function of the lengths of the leads. For the etched ring we start from the ring alone and then subsequently add single layers of carbon atoms at each side of the ring, each time evaluating the eigenvalues, until we reach 30 (20) layers for devices with armchair (zigzag) leads. For the nanoribbon with the tip, we begin with a ribbon 116 nm (148 nm) long for the armchair (zigzag) ribbon and proceed similarly as for the nanoring. An example of the energy spectrum as a function of the leads length is shown in Fig. A.2 for a zigzag nanoribbon with the Lorentzian potential discussed in Section III.D, for B = 6.6 T, in a narrow range of energy.

Fig. A.2 Energy spectrum calculated as a function of the lengths of the leads of a zigzag nanoribbon with the Lorentzian potential discussed in the paper, for B = 6.6 T. Every second energy level is marked with a cross or a circle. The energy levels that are nearly independent of the leads length, corresponding to a localized resonance, are twofold degenerate.
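The stabilization scan itself is easy to schematize: for each lead length the closed-system Hamiltonian is diagonalized and the low-lying eigenvalues recorded; levels that barely move as the leads grow mark ring-localized resonances. In the minimal sketch below, `build_hamiltonian` is a placeholder for the scaled tight-binding construction of Eq. (1), not a function from the paper.

```python
import numpy as np
from scipy.sparse.linalg import eigsh

def stabilization_scan(build_hamiltonian, layer_counts, n_levels=200):
    """Collect eigenvalues of the closed ring + finite leads for a sequence
    of lead lengths. build_hamiltonian(n_layers) must return a sparse
    Hermitian matrix; layer_counts might be range(0, 31) for armchair leads."""
    spectra = {}
    for n_layers in layer_counts:
        H = build_hamiltonian(n_layers)
        # shift-invert targets the eigenvalues nearest the Dirac point (E = 0)
        vals = eigsh(H, k=n_levels, sigma=0.0, return_eigenvectors=False)
        spectra[n_layers] = np.sort(vals)
    return spectra   # plot vals vs n_layers; flat lines are localized resonances
```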
Then we count the eigenvalues in a small energy window obtained for varied lengths of the leads, using the detection counter [32,59]

F(E) = \frac{1}{N} \sum_{L} \sum_{i} \chi_{\Delta E}\left( E - \epsilon_i(L) \right),

where the first sum runs over the number of segments L added to the leads, the second sum runs over the eigenenergies ε_i of the closed system, and N normalizes F to the fraction of all levels. Here we define χ_ΔE(x) = 1 for |x| < ΔE/2 and χ_ΔE(x) = 0 otherwise.

Appendix B: Finite temperature and finite bias

The results presented in the paper were obtained for zero temperature and in the linear-response regime. Here we evaluate the effects of a finite temperature and a finite bias on the conductance and its harmonics. For non-zero temperature we calculate the conductance by integrating the zero-temperature conductance over the energy window [53],

G(E_F; T) = \int dE \, G(E) \left( -\frac{\partial f(E, E_F; T)}{\partial E} \right),

with f the Fermi-Dirac distribution. For the conductance at a finite source-drain bias, applied to cover the non-linear transport regime, we apply the voltages +V_SD/2 (−V_SD/2) in the left (right) lead and assume a linear change of the voltage along the system. We calculate the current with the formula [53]

I = \frac{2e}{h} \int dE \, T(E) \left[ f(E - \mu_L) - f(E - \mu_R) \right],

with μ_{L,R} the electrochemical potentials of the left and right leads. The linear conductance regime (see the inset to Fig. B.5) covers biases up to 15 meV. We considered V_SD = 10 meV, outside the linear regime, and calculated the conductance dependence on the external magnetic field. The effect of a finite bias on the conductance that we find here is similar to the one obtained for a finite temperature: the visibility of the conductance oscillations is reduced and the higher harmonics are attenuated. For both the finite temperature and the finite bias we deal with a finite energy window for the transport. Under nonlinear transport conditions the effects of the electron-electron interaction [36-38] may lead to dI/dB|_{B=0} ≠ 0 and to even-odd effects for the conductance harmonics [36]. The present modelling neglects the interaction, and hence these effects are not observed.
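The finite-temperature convolution above is straightforward to evaluate numerically. The sketch below smears a zero-temperature conductance trace with the derivative of the Fermi-Dirac distribution; the grid, trace and function names are illustrative, not taken from the paper.

```python
import numpy as np

KB = 8.617333262e-5   # Boltzmann constant in eV/K

def thermal_smear(E, G0, E_F, T):
    """G(E_F; T) = integral of G(E, T=0) * (-df/dE) dE, evaluated on a
    uniform energy grid E (eV) carrying the zero-temperature conductance G0."""
    x = (E - E_F) / (KB * T)
    # -df/dE = sech^2(x/2) / (4 kT); clip x to avoid overflow in cosh
    kernel = 1.0 / (4.0 * KB * T * np.cosh(np.clip(x, -500, 500) / 2.0) ** 2)
    return np.trapz(G0 * kernel, E)
```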